Re-structure Model Optimizer User Guide and Clean-up (#10801)
* Modified the workflow diagram
* Moved supported topology lists to separate topics
* Additional changes
* Removed the Supported Topologies list and Deprecated pages
* Created the Model Conversion Tutorials section for instructions for specific models
* Aligned topic names, removed Default_Model_Optimizer_Optimizations.md
* Additional structural changes
* Fixed links
* Fixed headings

This commit is contained in:
parent 0c20e7a3ca
commit 0f8c599ce7
@@ -8,13 +8,18 @@
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_IR_and_opsets
   openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
   openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
   openvino_docs_MO_DG_Additional_Optimization_Use_Cases
   openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
   openvino_docs_MO_DG_prepare_model_convert_model_tutorials
   openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
   openvino_docs_MO_DG_Known_Issues_Limitations
   openvino_docs_MO_DG_Default_Model_Optimizer_Optimizations

@endsphinxdirective

@@ -634,21 +639,28 @@ Before running the Model Optimizer, you must install the Model Optimizer pre-req

To convert the model to the Intermediate Representation (IR), run Model Optimizer:

```sh
mo --input_model INPUT_MODEL --output_dir <OUTPUT_MODEL_DIR>
mo --input_model INPUT_MODEL
```

You need to have write permissions for the output directory.

> **NOTE**: Some models require using additional arguments, such as `--input`, `--input_shape`, and `--output`. To learn about when you need to use these parameters, refer to [General Conversion Parameters](prepare_model/convert_model/Converting_Model.md).

> **NOTE**: Some models require using additional arguments to specify conversion parameters, such as `--input_shape`, `--scale`, `--scale_values`, `--mean_values`, `--mean_file`. To learn about when you need to use these parameters, refer to [Converting a Model to Intermediate Representation (IR)](prepare_model/convert_model/Converting_Model.md).
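
For instance, a minimal sketch of a command that uses some of these additional arguments (the node names and shape below are hypothetical, not taken from a real model):

```sh
# Override the input shape and select specific input and output nodes
# (names and shape are illustrative)
mo --input_model INPUT_MODEL \
   --input data \
   --input_shape [1,3,224,224] \
   --output prob
```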

To adjust the conversion process, you may use general parameters defined in the [Converting a Model to Intermediate Representation (IR)](prepare_model/convert_model/Converting_Model.md) and
framework-specific parameters for:
* [Caffe](prepare_model/convert_model/Convert_Model_From_Caffe.md)
To adjust the conversion process, you may use [general parameters](prepare_model/convert_model/Converting_Model.md) and framework-specific parameters for:
* [TensorFlow](prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
* [MXNet](prepare_model/convert_model/Convert_Model_From_MxNet.md)
* [ONNX](prepare_model/convert_model/Convert_Model_From_ONNX.md)
* [PyTorch](prepare_model/convert_model/Convert_Model_From_PyTorch.md)
* [PaddlePaddle](prepare_model/convert_model/Convert_Model_From_Paddle.md)
* [MXNet](prepare_model/convert_model/Convert_Model_From_MxNet.md)
* [Caffe](prepare_model/convert_model/Convert_Model_From_Caffe.md)
* [Kaldi](prepare_model/convert_model/Convert_Model_From_Kaldi.md)

## Examples of CLI Commands

Launch the Model Optimizer for the Caffe bvlc_alexnet model:

```sh
mo --input_model bvlc_alexnet.caffemodel
```

## Videos

@sphinxdirective

@@ -1,9 +0,0 @@
# Known Issues and Limitations in the Model Optimizer {#openvino_docs_MO_DG_Known_Issues_Limitations}

## Model Optimizer for TensorFlow* should be run on Intel® hardware that supports the AVX instruction set

TensorFlow* provides only prebuilt binaries with AVX instructions enabled. The `install_prerequisites` and `install_prerequisites_tf` scripts that configure the Model Optimizer download only these binaries, which are not supported on hardware such as the Intel® Pentium® processor N4200/5, N3350/5, and N3450/5 (formerly known as Apollo Lake).

To run the Model Optimizer on this hardware, compile the TensorFlow binaries from source as described on the [TensorFlow website](https://www.tensorflow.org/install/source).

Another option is to run the Model Optimizer to generate an IR on hardware that supports AVX, and then perform inference on hardware without AVX.
@@ -1,4 +1,4 @@
# Optimize Preprocessing Computation{#openvino_docs_MO_DG_Additional_Optimization_Use_Cases}
# Optimize Preprocessing Computation {#openvino_docs_MO_DG_Additional_Optimization_Use_Cases}

Model Optimizer performs preprocessing on a model. It is possible to optimize this step and improve first inference time; to do that, follow the tips below:

@@ -13,3 +13,23 @@ Model Optimizer performs preprocessing to a model. It is possible to optimize th

- **Resulting IR precision**<br>
The resulting IR precision, for instance, `FP16` or `FP32`, directly affects performance. As the CPU now supports `FP16` (while internally upscaling to `FP32` anyway), and because this is the best precision for a GPU target, you may want to always convert models to `FP16`. Note that this is the only precision that Intel® Movidius™ Myriad™ 2 and Intel® Myriad™ X VPUs support.

## When to Specify Mean and Scale Values

Usually, neural network models are trained with normalized input data. This means that the input data values are converted to be in a specific range, for example, `[0, 1]` or `[-1, 1]`. Sometimes, the mean values (mean images) are subtracted from the input data values as part of the pre-processing. There are two ways the input data pre-processing can be implemented:
* The input pre-processing operations are a part of a topology. In this case, the application that uses the framework to infer the topology does not pre-process the input.
* The input pre-processing operations are not a part of a topology, and the pre-processing is performed within the application that feeds the model with input data.

In the first case, the Model Optimizer generates the IR with the required pre-processing operations, and OpenVINO Samples may be used to infer the model.

In the second case, information about mean/scale values should be provided to the Model Optimizer to embed them in the generated IR. Model Optimizer provides a number of command-line parameters to specify them: `--scale`, `--scale_values`, `--mean_values`, `--mean_file`.

> **NOTE:** If both mean and scale values are specified, the mean is subtracted first and then the scale is applied, regardless of the order of options on the command line. Input values are *divided* by the scale value(s). If the `--reverse_input_channels` option is also used, `reverse_input_channels` is applied first, then mean, and after that scale.
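
For illustration, a minimal sketch of a conversion command that embeds mean and scale values into the IR (the model file name and the values are hypothetical, not recommendations for any particular model):

```sh
# --mean_values are subtracted from the input first,
# then the input is divided by --scale_values
mo --input_model model.onnx \
   --mean_values [123.68,116.78,103.94] \
   --scale_values [58.395,57.12,57.375]
```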

There is no universal recipe for determining the mean/scale values for a particular model. The steps below can help you determine them:
* Read the model documentation. Usually, the documentation describes mean/scale values if pre-processing is required.
* Open the example script/application executing the model and track how the input data is read and passed to the framework.
* Open the model in a visualization tool and check for layers performing subtraction or multiplication (such as `Sub`, `Mul`, `ScaleShift`, `Eltwise`) of the input data. If such layers exist, pre-processing is most probably a part of the model.

## When to Reverse Input Channels <a name="when_to_reverse_input_channels"></a>
Input data for your application can be in either RGB or BGR channel order. For example, OpenVINO Samples load input images in the BGR channel order. However, the model may be trained on images loaded in the opposite order (for example, most TensorFlow\* models are trained with images in RGB order). In this case, inference results produced with the OpenVINO Samples may be incorrect. The solution is to provide the `--reverse_input_channels` command-line parameter. With this parameter, the Model Optimizer modifies the weights of the first convolution (or another channel-dependent operation) so that the output of these operations is the same as if the image were passed in RGB channel order.
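
A minimal sketch of a command that applies this parameter (the model file name is hypothetical):

```sh
# Modify the first channel-dependent operation so the IR expects
# images in the opposite channel order than the one used for training
mo --input_model model.pb --reverse_input_channels
```
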
@@ -1,11 +0,0 @@
# Default Model Optimizer Optimizations {#openvino_docs_MO_DG_Default_Model_Optimizer_Optimizations}

Model Optimizer not only converts a model to IR format, but also performs a number of optimizations. For example, certain primitives, like linear operations (BatchNorm and ScaleShift), are automatically fused into convolutions. Generally, these layers should not be manifested in the resulting IR:

![](../img/optimizations/resnet_269.png)

The picture above shows the Caffe\* ResNet-269\* topology. The left model is the original model, and the one on the right (after conversion) is the resulting model that the Model Optimizer produces, with BatchNorm and ScaleShift layers fused into the convolution weights rather than constituting separate layers.

If you still see these operations, inspect the Model Optimizer output carefully, searching for warnings such as the tool being unable to fuse certain operations. For example, non-linear operations (like activations) in between convolutions and linear operations might prevent the fusing. If performance is a concern, try to change (and potentially re-train) the topology. Refer to [Model Optimization Techniques](Model_Optimization_Techniques.md) for more optimizations.

Notice that the activation (`_relu`) is not touched by the Model Optimizer: while it can be merged into convolution as well, this is rather a device-specific optimization covered by OpenVINO Runtime at model loading time. You are encouraged to inspect performance counters from plugins, which should indicate that these particular layers are not executed (“Optimized out”). For more information, refer to <a href="#performance-counters">Internal Inference Performance Counters</a>.
@@ -252,7 +252,7 @@ Looks like you have provided only one shape for the placeholder, however there a

#### 33. What does the message "The amount of input nodes for port is not equal to 1" mean? <a name="question-33"></a>

This error occurs when the `SubgraphMatch.single_input_node` function is used for an input port that supplies more than one node in a sub-graph. The `single_input_node` function can be used only for ports that have a single consumer inside the matching sub-graph. When multiple nodes are connected to the port, use the `input_nodes` or `node_by_pattern` function instead of `single_input_node`. For more details, refer to [Sub-Graph Replacement in the Model Optimizer](customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
This error occurs when the `SubgraphMatch.single_input_node` function is used for an input port that supplies more than one node in a sub-graph. The `single_input_node` function can be used only for ports that have a single consumer inside the matching sub-graph. When multiple nodes are connected to the port, use the `input_nodes` or `node_by_pattern` function instead of `single_input_node`. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) documentation.

#### 34. What does the message "Output node for port has already been specified" mean? <a name="question-34"></a>

@@ -447,7 +447,7 @@ This message may appear when the `--data_type=FP16` command line option is used.

#### 78. What does the message "The amount of nodes matched pattern ... is not equal to 1" mean? <a name="question-78"></a>

This error occurs when the `SubgraphMatch.node_by_pattern` function is used with a pattern that does not uniquely identify a single node in a sub-graph. Try to extend the pattern string to make an unambiguous match to a single sub-graph node. For more details, refer to [Sub-graph Replacement in the Model Optimizer](customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
This error occurs when the `SubgraphMatch.node_by_pattern` function is used with a pattern that does not uniquely identify a single node in a sub-graph. Try to extend the pattern string to make an unambiguous match to a single sub-graph node. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) documentation.

#### 79. What does the message "The topology contains no "input" layers" mean? <a name="question-79"></a>

@@ -585,7 +585,7 @@ For Tensorflow:
* [Convert models created with the TensorFlow Object Detection API](convert_model/tf_specific/Convert_Object_Detection_API_Models.md)

For all frameworks:
1. [Replace cycle containing Sub-graph in Model Optimizer](customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)
1. [Replace cycle containing Sub-graph in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md)
2. [OpenVINO™ Extensibility Mechanism](../../Extensibility_UG/Intro.md)

or

@@ -1,11 +1,5 @@
# Converting a Caffe* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe}

@sphinxdirective

.. _convert model caffe:

@endsphinxdirective

A summary of the steps for optimizing and deploying a model that was trained with Caffe\*:

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Caffe\*.
@@ -13,33 +7,6 @@ A summary of the steps for optimizing and deploying a model that was trained wit
3. Test the model in the Intermediate Representation format using the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in the target environment via provided [OpenVINO samples](../../../OV_Runtime_UG/Samples_Overview.md)
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in your application to deploy the model in the target environment

## Supported Topologies

* **Classification models:**
    * AlexNet
    * VGG-16, VGG-19
    * SqueezeNet v1.0, SqueezeNet v1.1
    * ResNet-50, ResNet-101, ResNet-152
    * Inception v1, Inception v2, Inception v3, Inception v4
    * CaffeNet
    * MobileNet
    * Squeeze-and-Excitation Networks: SE-BN-Inception, SE-ResNet-101, SE-ResNet-152, SE-ResNet-50, SE-ResNeXt-101, SE-ResNeXt-50
    * ShuffleNet v2

* **Object detection models:**
    * SSD300-VGG16, SSD500-VGG16
    * Faster-RCNN
    * RefineDet (MYRIAD plugin only)

* **Face detection models:**
    * VGG Face
    * SSH: Single Stage Headless Face Detector

* **Semantic segmentation models:**
    * FCN8

> **NOTE**: It is necessary to specify mean and scale values for most of the Caffe\* models to convert them with the Model Optimizer. The exact values should be determined separately for each model. For example, for Caffe\* models trained on ImageNet, the mean values are usually `123.68`, `116.779`, `103.939` for blue, green, and red channels respectively. The scale value is usually `127.5`. Refer to the General Conversion Parameters section in [Converting a Model to Intermediate Representation (IR)](Converting_Model.md) for information on how to specify mean and scale values.

## Convert a Caffe* Model <a name="Convert_From_Caffe"></a>

To convert a Caffe\* model, run Model Optimizer with the path to the input model `.caffemodel` file and the path to an output directory with write permissions:
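
A minimal sketch of such a command (the model file name here is just an example, reusing the AlexNet model from the CLI examples above; substitute your own paths):

```sh
mo --input_model bvlc_alexnet.caffemodel --output_dir <OUTPUT_MODEL_DIR>
```
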
@@ -146,3 +113,6 @@ In this document, you learned:
* Basic information about how the Model Optimizer works with Caffe\* models
* Which Caffe\* models are supported
* How to convert a trained Caffe\* model using the Model Optimizer with both framework-agnostic and Caffe-specific command-line options

## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)

@@ -1,17 +1,5 @@
# Converting a Kaldi* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi}

@sphinxdirective

.. _convert model kaldi:

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model

@endsphinxdirective

A summary of the steps for optimizing and deploying a model that was trained with Kaldi\*:

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Kaldi\*.
@@ -21,26 +9,6 @@ A summary of the steps for optimizing and deploying a model that was trained wit

> **NOTE**: The Model Optimizer supports the [nnet1](http://kaldi-asr.org/doc/dnn1.html) and [nnet2](http://kaldi-asr.org/doc/dnn2.html) formats of Kaldi models. Support of the [nnet3](http://kaldi-asr.org/doc/dnn3.html) format is limited.

## Supported Topologies
* Convolutional Neural Networks (CNN):
    * Wall Street Journal CNN (wsj_cnn4b)
    * Resource Management CNN (rm_cnn4a_smbr)

* Long Short Term Memory (LSTM) Networks:
    * Resource Management LSTM (rm_lstm4f)
    * TED-LIUM LSTM (ted_lstm4f)

* Deep Neural Networks (DNN):
    * Wall Street Journal DNN (wsj_dnn5b_smbr)
    * TED-LIUM DNN (ted_dnn_smbr)

* Time delay neural network (TDNN):
    * [ASpIRE Chain TDNN](kaldi_specific/Aspire_Tdnn_Model.md)
    * [Librispeech nnet3](https://github.com/ryanleary/kaldi-test/releases/download/v0.0/LibriSpeech-trained.tgz)

* TDNN-LSTM model

## Convert a Kaldi* Model <a name="Convert_From_Kaldi"></a>

To convert a Kaldi\* model, run Model Optimizer with the path to the input model `.nnet` or `.mdl` file and to an output directory where you have write permissions:
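
A minimal sketch of such a command (the file name is hypothetical, borrowed from the supported topologies list above):

```sh
mo --input_model wsj_dnn5b_smbr.nnet --output_dir <OUTPUT_MODEL_DIR>
```
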
@@ -116,6 +84,8 @@ The Model Optimizer outputs the mapping between inputs and outputs. For example:
2. Copy output blobs from the mapping to the corresponding inputs. For example, data from `Result_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out`
must be copied to `Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out`.

## Supported Kaldi\* Layers
Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the list of supported standard layers.

## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)

@@ -1,18 +1,5 @@
# Converting an MXNet* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet}

@sphinxdirective

.. _convert model mxnet:

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet
   openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models

@endsphinxdirective

A summary of the steps for optimizing and deploying a model that was trained with the MXNet\* framework:

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for MXNet* (MXNet was used to train your model)
@@ -20,40 +7,6 @@ A summary of the steps for optimizing and deploying a model that was trained wit
3. Test the model in the Intermediate Representation format using the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in the target environment via provided [OpenVINO Samples](../../../OV_Runtime_UG/Samples_Overview.md)
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in your application to deploy the model in the target environment

## Supported Topologies

> **NOTE**: SSD models from the table require converting to the deploy mode. For details, see the [Conversion Instructions](https://github.com/zhreshold/mxnet-ssd/#convert-model-to-deploy-mode) in the GitHub MXNet-SSD repository.

| Model Name| Model File |
|
||||
| ------------- |:-------------:|
|
||||
|VGG-16| [Repo](https://github.com/dmlc/mxnet-model-gallery/tree/master), [Symbol](http://data.mxnet.io/models/imagenet/vgg/vgg16-symbol.json), [Params](http://data.mxnet.io/models/imagenet/vgg/vgg16-0000.params)|
|
||||
|VGG-19| [Repo](https://github.com/dmlc/mxnet-model-gallery/tree/master), [Symbol](http://data.mxnet.io/models/imagenet/vgg/vgg19-symbol.json), [Params](http://data.mxnet.io/models/imagenet/vgg/vgg19-0000.params)|
|
||||
|ResNet-152 v1| [Repo](https://github.com/dmlc/mxnet-model-gallery/tree/master), [Symbol](http://data.mxnet.io/models/imagenet/resnet/152-layers/resnet-152-symbol.json), [Params](http://data.mxnet.io/models/imagenet/resnet/152-layers/resnet-152-0000.params)|
|
||||
|SqueezeNet_v1.1| [Repo](https://github.com/dmlc/mxnet-model-gallery/tree/master), [Symbol](http://data.mxnet.io/models/imagenet/squeezenet/squeezenet_v1.1-symbol.json), [Params](http://data.mxnet.io/models/imagenet/squeezenet/squeezenet_v1.1-0000.params)|
|
||||
|Inception BN| [Repo](https://github.com/dmlc/mxnet-model-gallery/tree/master), [Symbol](http://data.mxnet.io/models/imagenet/inception-bn/Inception-BN-symbol.json), [Params](http://data.mxnet.io/models/imagenet/inception-bn/Inception-BN-0126.params)|
|
||||
|CaffeNet| [Repo](https://github.com/dmlc/mxnet-model-gallery/tree/master), [Symbol](http://data.mxnet.io/mxnet/models/imagenet/caffenet/caffenet-symbol.json), [Params](http://data.mxnet.io/models/imagenet/caffenet/caffenet-0000.params)|
|
||||
|DenseNet-121| [Repo](https://github.com/miraclewkf/DenseNet), [Symbol](https://raw.githubusercontent.com/miraclewkf/DenseNet/master/model/densenet-121-symbol.json), [Params](https://drive.google.com/file/d/0ByXcv9gLjrVcb3NGb1JPa3ZFQUk/view?usp=drive_web)|
|
||||
|DenseNet-161| [Repo](https://github.com/miraclewkf/DenseNet), [Symbol](https://raw.githubusercontent.com/miraclewkf/DenseNet/master/model/densenet-161-symbol.json), [Params](https://drive.google.com/file/d/0ByXcv9gLjrVcS0FwZ082SEtiUjQ/view)|
|
||||
|DenseNet-169| [Repo](https://github.com/miraclewkf/DenseNet), [Symbol](https://raw.githubusercontent.com/miraclewkf/DenseNet/master/model/densenet-169-symbol.json), [Params](https://drive.google.com/file/d/0ByXcv9gLjrVcOWZJejlMOWZvZmc/view)|
|
||||
|DenseNet-201| [Repo](https://github.com/miraclewkf/DenseNet), [Symbol](https://raw.githubusercontent.com/miraclewkf/DenseNet/master/model/densenet-201-symbol.json), [Params](https://drive.google.com/file/d/0ByXcv9gLjrVcUjF4MDBwZ3FQbkU/view)|
|
||||
|MobileNet| [Repo](https://github.com/KeyKy/mobilenet-mxnet), [Symbol](https://github.com/KeyKy/mobilenet-mxnet/blob/master/mobilenet.py), [Params](https://github.com/KeyKy/mobilenet-mxnet/blob/master/mobilenet-0000.params)|
|
||||
|SSD-ResNet-50| [Repo](https://github.com/zhreshold/mxnet-ssd), [Symbol + Params](https://github.com/zhreshold/mxnet-ssd/releases/download/v0.6/resnet50_ssd_512_voc0712_trainval.zip)|
|
||||
|SSD-VGG-16-300| [Repo](https://github.com/zhreshold/mxnet-ssd), [Symbol + Params](https://github.com/zhreshold/mxnet-ssd/releases/download/v0.5-beta/vgg16_ssd_300_voc0712_trainval.zip)|
|
||||
|SSD-Inception v3| [Repo](https://github.com/zhreshold/mxnet-ssd), [Symbol + Params](https://github.com/zhreshold/mxnet-ssd/releases/download/v0.7-alpha/ssd_inceptionv3_512_voc0712trainval.zip)|
|
||||
|FCN8 (Semantic Segmentation)| [Repo](https://github.com/apache/incubator-mxnet/tree/master/example/fcn-xs), [Symbol](https://www.dropbox.com/sh/578n5cxej7ofd6m/AAA9SFCBN8R_uL2CnAd3WQ5ia/FCN8s_VGG16-symbol.json?dl=0), [Params](https://www.dropbox.com/sh/578n5cxej7ofd6m/AABHWZHCtA2P6iR6LUflkxb_a/FCN8s_VGG16-0019-cpu.params?dl=0)|
|
||||
|MTCNN part 1 (Face Detection)| [Repo](https://github.com/pangyupo/mxnet_mtcnn_face_detection), [Symbol](https://github.com/pangyupo/mxnet_mtcnn_face_detection/blob/master/model/det1-symbol.json), [Params](https://github.com/pangyupo/mxnet_mtcnn_face_detection/blob/master/model/det1-0001.params)|
|
||||
|MTCNN part 2 (Face Detection)| [Repo](https://github.com/pangyupo/mxnet_mtcnn_face_detection), [Symbol](https://github.com/pangyupo/mxnet_mtcnn_face_detection/blob/master/model/det2-symbol.json), [Params](https://github.com/pangyupo/mxnet_mtcnn_face_detection/blob/master/model/det2-0001.params)|
|
||||
|MTCNN part 3 (Face Detection)| [Repo](https://github.com/pangyupo/mxnet_mtcnn_face_detection), [Symbol](https://github.com/pangyupo/mxnet_mtcnn_face_detection/blob/master/model/det3-symbol.json), [Params](https://github.com/pangyupo/mxnet_mtcnn_face_detection/blob/master/model/det3-0001.params)|
|
||||
|MTCNN part 4 (Face Detection)| [Repo](https://github.com/pangyupo/mxnet_mtcnn_face_detection), [Symbol](https://github.com/pangyupo/mxnet_mtcnn_face_detection/blob/master/model/det4-symbol.json), [Params](https://github.com/pangyupo/mxnet_mtcnn_face_detection/blob/master/model/det4-0001.params)|
|
||||
|Lightened_moon| [Repo](https://github.com/tornadomeet/mxnet-face/tree/master/model/lightened_moon), [Symbol](https://github.com/tornadomeet/mxnet-face/blob/master/model/lightened_moon/lightened_moon_fuse-symbol.json), [Params](https://github.com/tornadomeet/mxnet-face/blob/master/model/lightened_moon/lightened_moon_fuse-0082.params)|
|
||||
|RNN-Transducer| [Repo](https://github.com/HawkAaron/mxnet-transducer) |
|
||||
|word_lm| [Repo](https://github.com/apache/incubator-mxnet/tree/master/example/rnn/word_lm) |
|
||||
|
||||

**Other supported topologies**

* [GluonCV SSD and YOLO-v3 models](https://gluon-cv.mxnet.io/model_zoo/detection.html) can be converted using the following [instructions](mxnet_specific/Convert_GluonCV_Models.md).
* [Style transfer model](https://github.com/zhaw/neural_style) can be converted using the following [instructions](mxnet_specific/Convert_Style_Transfer_From_MXNet.md).

## Convert an MXNet* Model <a name="ConvertMxNet"></a>

To convert an MXNet\* model, run Model Optimizer with a path to the input model `.params` file and to an output directory where you have write permissions:
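
A minimal sketch of such a command (the `.params` file name is hypothetical):

```sh
mo --input_model model-0000.params --output_dir <OUTPUT_MODEL_DIR>
```
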
@@ -114,3 +67,6 @@ In this document, you learned:
* Basic information about how the Model Optimizer works with MXNet\* models
* Which MXNet\* models are supported
* How to convert a trained MXNet\* model using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options

## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)

@@ -1,75 +1,9 @@
# Converting an ONNX* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX}

@sphinxdirective

.. _convert model onnx:

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN
   openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN
   openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2
   openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_DLRM
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch

@endsphinxdirective

## Introduction to ONNX

[ONNX*](https://github.com/onnx/onnx) is a representation format for deep learning models. ONNX allows AI developers to easily transfer models between different frameworks, which helps them choose the best combination for their needs. Today, PyTorch\*, Caffe2\*, Apache MXNet\*, Microsoft Cognitive Toolkit\*, and other tools are developing ONNX support.

## Supported Public ONNX Topologies
| Model Name | Path to <a href="https://github.com/onnx/models">Public Models</a> master branch|
|
||||
|:----|:----|
|
||||
| bert_large | [model archive](https://github.com/mlperf/inference/tree/master/v0.7/language/bert) |
|
||||
| bvlc_alexnet | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/bvlc_alexnet.tar.gz) |
|
||||
| bvlc_googlenet | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/bvlc_googlenet.tar.gz) |
|
||||
| bvlc_reference_caffenet | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/bvlc_reference_caffenet.tar.gz) |
|
||||
| bvlc_reference_rcnn_ilsvrc13 | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/bvlc_reference_rcnn_ilsvrc13.tar.gz) |
|
||||
| inception_v1 | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/inception_v1.tar.gz) |
|
||||
| inception_v2 | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/inception_v2.tar.gz) |
|
||||
| resnet50 | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz) |
|
||||
| squeezenet | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz) |
|
||||
| densenet121 | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/densenet121.tar.gz) |
|
||||
| emotion_ferplus | [model archive](https://www.cntk.ai/OnnxModels/emotion_ferplus/opset_2/emotion_ferplus.tar.gz) |
|
||||
| mnist | [model archive](https://www.cntk.ai/OnnxModels/mnist/opset_1/mnist.tar.gz) |
|
||||
| shufflenet | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/shufflenet.tar.gz) |
|
||||
| VGG19 | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/vgg19.tar.gz) |
|
||||
| zfnet512 | [model archive](https://s3.amazonaws.com/download.onnx/models/opset_8/zfnet512.tar.gz) |
|
||||
| GPT-2 | [model archive](https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-10.tar.gz) |
|
||||
| YOLOv3 | [model archive](https://github.com/onnx/models/blob/master/vision/object_detection_segmentation/yolov3/model/yolov3-10.tar.gz) |
|
||||
|
||||
The listed models are built with operation set version 8, except the GPT-2 model (which uses version 10). Models upgraded to higher operation set versions may not be supported.

## Supported PaddlePaddle* Models via ONNX Conversion
Starting from the R5 release, the OpenVINO™ toolkit officially supports public PaddlePaddle* models via ONNX conversion.
The list of supported topologies downloadable from PaddleHub is presented below:

| Model Name | Command to download the model from PaddleHub |
|
||||
|:----|:----|
|
||||
| [MobileNetV2](https://www.paddlepaddle.org.cn/hubdetail?name=mobilenet_v2_imagenet) | `hub install mobilenet_v2_imagenet==1.0.1` |
|
||||
| [ResNet18](https://www.paddlepaddle.org.cn/hubdetail?name=resnet_v2_18_imagenet) | `hub install resnet_v2_18_imagenet==1.0.0` |
|
||||
| [ResNet34](https://www.paddlepaddle.org.cn/hubdetail?name=resnet_v2_34_imagenet) | `hub install resnet_v2_34_imagenet==1.0.0` |
|
||||
| [ResNet50](https://www.paddlepaddle.org.cn/hubdetail?name=resnet_v2_50_imagenet) | `hub install resnet_v2_50_imagenet==1.0.1` |
|
||||
| [ResNet101](https://www.paddlepaddle.org.cn/hubdetail?name=resnet_v2_101_imagenet) | `hub install resnet_v2_101_imagenet==1.0.1` |
|
||||
| [ResNet152](https://www.paddlepaddle.org.cn/hubdetail?name=resnet_v2_152_imagenet) | `hub install resnet_v2_152_imagenet==1.0.1` |
|
||||

> **NOTE**: To convert a model downloaded from PaddleHub, use the [paddle2onnx](https://github.com/PaddlePaddle/paddle2onnx) converter.

The list of supported topologies from the [models v1.5](https://github.com/PaddlePaddle/models/tree/release/1.5) package:
* [MobileNetV1](https://github.com/PaddlePaddle/models/blob/release/1.5/PaddleCV/image_classification/models/mobilenet.py)
|
||||
* [MobileNetV2](https://github.com/PaddlePaddle/models/blob/release/1.5/PaddleCV/image_classification/models/mobilenet_v2.py)
|
||||
* [ResNet](https://github.com/PaddlePaddle/models/blob/release/1.5/PaddleCV/image_classification/models/resnet.py)
|
||||
* [ResNet_vc](https://github.com/PaddlePaddle/models/blob/release/1.5/PaddleCV/image_classification/models/resnet_vc.py)
|
||||
* [ResNet_vd](https://github.com/PaddlePaddle/models/blob/release/1.5/PaddleCV/image_classification/models/resnet_vd.py)
|
||||
* [ResNeXt](https://github.com/PaddlePaddle/models/blob/release/1.5/PaddleCV/image_classification/models/resnext.py)
|
||||
* [ResNeXt_vd](https://github.com/PaddlePaddle/models/blob/release/1.5/PaddleCV/image_classification/models/resnext_vd.py)
|
||||
|
||||

> **NOTE**: To convert these topologies, first serialize the model by calling the `paddle.fluid.io.save_inference_model` function ([description](https://www.paddlepaddle.org.cn/documentation/docs/en/1.3/api/io.html#save-inference-model)), and then use the [paddle2onnx](https://github.com/PaddlePaddle/paddle2onnx) converter.

## Convert an ONNX* Model <a name="Convert_From_ONNX"></a>
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
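
Since there are no ONNX-specific parameters, a minimal sketch of the conversion command looks like this (the file name is hypothetical):

```sh
mo --input_model model.onnx --output_dir <OUTPUT_MODEL_DIR>
```
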
@@ -81,3 +15,6 @@ There are no ONNX\* specific parameters, so only framework-agnostic parameters a

## Supported ONNX\* Layers
Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the list of supported standard layers.

## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)

@@ -7,28 +7,6 @@ A summary of the steps for optimizing and deploying a model trained with PaddleP
3. Test the model in the Intermediate Representation format using the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in the target environment via provided [OpenVINO Samples](../../../OV_Runtime_UG/Samples_Overview.md).
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in your application to deploy the model in the target environment.

## Supported Topologies

| Model Name| Model Type| Description|
|
||||
| ------------- | ------------ | ------------- |
|
||||
|ppocr-det| optical character recognition| Models are exported from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/). Refer to [READ.md](https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15).|
|
||||
|ppocr-rec| optical character recognition| Models are exported from [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/). Refer to [READ.md](https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.1/#pp-ocr-20-series-model-listupdate-on-dec-15).|
|
||||
|ResNet-50| classification| Models are exported from [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/). Refer to [getting_started_en.md](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict)|
|
||||
|MobileNet v2| classification| Models are exported from [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/). Refer to [getting_started_en.md](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict)|
|
||||
|MobileNet v3| classification| Models are exported from [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.1/). Refer to [getting_started_en.md](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.1/docs/en/tutorials/getting_started_en.md#4-use-the-inference-model-to-predict)|
|
||||
|BiSeNet v2| semantic segmentation| Models are exported from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1). Refer to [model_export.md](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#)|
|
||||
|DeepLab v3 plus| semantic segmentation| Models are exported from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1). Refer to [model_export.md](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#)|
|
||||
|Fast-SCNN| semantic segmentation| Models are exported from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1). Refer to [model_export.md](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#)|
|
||||
|OCRNET| semantic segmentation| Models are exported from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.1). Refer to [model_export.md](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.1/docs/model_export.md#)|
|
||||
|Yolo v3| detection| Models are exported from [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1). Refer to [EXPORT_MODEL.md](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#).|
|
||||
|ppyolo| detection| Models are exported from [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1). Refer to [EXPORT_MODEL.md](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md#).|
|
||||
|MobileNetv3-SSD| detection| Models are exported from [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.2). Refer to [EXPORT_MODEL.md](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.2/deploy/EXPORT_MODEL.md#).|
|
||||
|U-Net| semantic segmentation| Models are exported from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.3). Refer to [model_export.md](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.3/docs/model_export.md#)|
|
||||
|BERT| language representation| Models are exported from [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1). Refer to [README.md](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#readme)|
|
||||
|ernie| language representation| Models are exported from [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1). Refer to [README.md](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#readme)|
|
||||

> **NOTE:** The verified models are exported from the repository of branch release/2.1.

## Convert a PaddlePaddle Model <a name="Convert_From_Paddle"></a>

To convert a PaddlePaddle model:
@@ -56,3 +34,6 @@ Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the
## Frequently Asked Questions (FAQ)

When Model Optimizer is unable to run to completion due to issues like typographical errors, incorrectly used options, etc., it provides explanatory messages. They describe the potential cause of the problem and give a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.

## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)

@@ -1,49 +1,6 @@
# Converting a PyTorch* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch}

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_F3Net
   openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_QuartzNet
   openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RNNT
   openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT
   openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner
   openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN

@endsphinxdirective

## Supported Topologies

Here is the list of models that are tested and guaranteed to be supported. However, you can also use these instructions to convert PyTorch\* models that are not presented in the list.

* [Torchvision Models](https://pytorch.org/docs/stable/torchvision/index.html): alexnet, densenet121, densenet161, densenet169, densenet201, resnet101, resnet152, resnet18, resnet34, resnet50, vgg11, vgg13, vgg16, vgg19. The models can be converted using [regular instructions](#typical-pytorch).
* [Cadene Pretrained Models](https://github.com/Cadene/pretrained-models.pytorch): alexnet, fbresnet152, resnet101, resnet152, resnet18, resnet34, resnet50, resnext101_32x4d, resnext101_64x4d, vgg11. The models can be converted using [regular instructions](#typical-pytorch).
* [ESPNet Models](https://github.com/sacmehta/ESPNet/tree/master/pretrained) can be converted using [regular instructions](#typical-pytorch).
* [MobileNetV3](https://github.com/d-li14/mobilenetv3.pytorch) can be converted using [regular instructions](#typical-pytorch).
* [iSeeBetter](https://github.com/amanchadha/iSeeBetter) can be converted using [regular instructions](#typical-pytorch). Refer to the [`iSeeBetterTest.py`](https://github.com/amanchadha/iSeeBetter/blob/master/iSeeBetterTest.py) script for code to initialize the model.
* F3Net topology can be converted using the steps described in [Convert PyTorch\* F3Net to the IR](pytorch_specific/Convert_F3Net.md), which are used instead of steps 2 and 3 of the [regular instructions](#typical-pytorch).
* QuartzNet topologies from the [NeMo project](https://github.com/NVIDIA/NeMo) can be converted using the steps described in [Convert PyTorch\* QuartzNet to the IR](pytorch_specific/Convert_QuartzNet.md), which are used instead of steps 2 and 3 of the [regular instructions](#typical-pytorch).
* YOLACT topology can be converted using the steps described in [Convert PyTorch\* YOLACT to the IR](pytorch_specific/Convert_YOLACT.md), which are used instead of steps 2 and 3 of the [regular instructions](#typical-pytorch).
* [RCAN](https://github.com/yulunzhang/RCAN) topology can be converted using the steps described in [Convert PyTorch\* RCAN to the IR](pytorch_specific/Convert_RCAN.md), which are used instead of steps 2 and 3 of the [regular instructions](#typical-pytorch).
* [BERT_NER](https://github.com/kamalkraj/BERT-NER) topology can be converted using the steps described in [Convert PyTorch* BERT-NER to the IR](pytorch_specific/Convert_Bert_ner.md), which are used instead of steps 2 and 3 of the [regular instructions](#typical-pytorch).
* ResNeXt-101 from [facebookresearch/semi-supervised-ImageNet1K-models](https://github.com/facebookresearch/semi-supervised-ImageNet1K-models) can be converted using [regular instructions](#typical-pytorch).

## Typical steps to convert PyTorch\* model <a name="typical-pytorch"></a>
## Typical Steps to Convert PyTorch Model <a name="typical-pytorch"></a>

The PyTorch\* framework is supported through export to the ONNX\* format. A summary of the steps for optimizing and deploying a model that was trained with the PyTorch\* framework:

@@ -51,7 +8,7 @@ PyTorch* framework is supported through export to ONNX\* format. A summary of th
2. [Export PyTorch model to ONNX\*](#export-to-onnx).
3. [Convert an ONNX\* model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
4. Test the model in the Intermediate Representation format using the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in the target environment via provided [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
5. [Integrate OpenVINO Runtime](../../../OV_Runtime_UG/Samples_Overview.md) in your application to deploy the model in the target environment.
5. [Integrate OpenVINO Runtime](../../../OV_Runtime_UG/integrate_with_your_application.md) in your application to deploy the model in the target environment.

## Export PyTorch\* Model to ONNX\* Format <a name="export-to-onnx"></a>

@@ -79,3 +36,6 @@ torch.onnx.export(model, (dummy_input, ), 'model.onnx')
* Not all PyTorch\* operations can be exported to ONNX\* opset 9, which is used by default as of version 1.8.1. It is recommended to export models to opset 11 or higher when export to the default opset 9 does not work. In that case, use the `opset_version` option of `torch.onnx.export`. For more information about ONNX* opsets, refer to the [Operator Schemas](https://github.com/onnx/onnx/blob/master/docs/Operators.md).

## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)

@@ -1,194 +1,12 @@
# Converting a TensorFlow* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow}

@sphinxdirective

.. _convert model tf:

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_RetinaNet_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_AttentionOCR_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_FaceNet_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_NCF_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_DeepSpeech_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_lm_1b_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Slim_Library_Models
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_CRNN_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_GNMT_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_BERT_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_XLNet_From_Tensorflow
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_WideAndDeep_Family_Models
   openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_EfficientDet_Models

@endsphinxdirective

A summary of the steps for optimizing and deploying a model that was trained with the TensorFlow\* framework:

1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for TensorFlow\* (TensorFlow was used to train your model).
2. [Freeze the TensorFlow model](#freeze-the-tensorflow-model) if your model is not already frozen, or skip this step and use the [instruction](#loading-nonfrozen-models) to convert a non-frozen model.
3. [Convert a TensorFlow\* model](#Convert_From_TF) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values. A sketch of the basic conversion command is shown after this list.
4. Test the model in the Intermediate Representation format using the [OpenVINO™ Runtime](../../../OV_Runtime_UG/openvino_intro.md) in the target environment via provided [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
5. [Integrate OpenVINO Runtime](../../../OV_Runtime_UG/Samples_Overview.md) in your application to deploy the model in the target environment.
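
For reference, a minimal sketch of the conversion step for a frozen graph (the file name and input shape are hypothetical; `--input_shape` is frequently required for TensorFlow models with undefined input dimensions):

```sh
mo --input_model frozen_inference_graph.pb --input_shape [1,224,224,3]
```
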
## Supported Topologies

**Supported Non-Frozen Topologies with Links to the Associated Slim Model Classification Download Files**

Detailed information on how to convert models from the <a href="https://github.com/tensorflow/models/tree/master/research/slim/README.md">TensorFlow\*-Slim Image Classification Model Library</a> is available in the [Converting TensorFlow*-Slim Image Classification Model Library Models](tf_specific/Convert_Slim_Library_Models.md) chapter. The table below contains the list of supported TensorFlow\*-Slim Image Classification Model Library models and the required mean/scale values. The mean values are specified as if the input image is read in BGR channel order, as the OpenVINO classification sample does.

| Model Name| Slim Model Checkpoint File| \-\-mean_values | \-\-scale|
|
||||
| ------------- | ------------ | ------------- | -----:|
|
||||
|Inception v1| [inception_v1_2016_08_28.tar.gz](http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz)| [127.5,127.5,127.5]| 127.5|
|
||||
|Inception v2| [inception_v1_2016_08_28.tar.gz](http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz)| [127.5,127.5,127.5]| 127.5|
|
||||
|Inception v3| [inception_v3_2016_08_28.tar.gz](http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz)| [127.5,127.5,127.5]| 127.5|
|
||||
|Inception V4| [inception_v4_2016_09_09.tar.gz](http://download.tensorflow.org/models/inception_v4_2016_09_09.tar.gz)| [127.5,127.5,127.5]| 127.5|
|
||||
|Inception ResNet v2| [inception_resnet_v2_2016_08_30.tar.gz](http://download.tensorflow.org/models/inception_resnet_v2_2016_08_30.tar.gz)| [127.5,127.5,127.5]| 127.5|
|
||||
|MobileNet v1 128| [mobilenet_v1_0.25_128.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.25_128.tgz)| [127.5,127.5,127.5]| 127.5|
|
||||
|MobileNet v1 160| [mobilenet_v1_0.5_160.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.5_160.tgz)| [127.5,127.5,127.5]| 127.5|
|
||||
|MobileNet v1 224| [mobilenet_v1_1.0_224.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz)| [127.5,127.5,127.5]| 127.5|
|
||||
|NasNet Large| [nasnet-a_large_04_10_2017.tar.gz](https://storage.googleapis.com/download.tensorflow.org/models/nasnet-a_large_04_10_2017.tar.gz)| [127.5,127.5,127.5]| 127.5|
|
||||
|NasNet Mobile| [nasnet-a_mobile_04_10_2017.tar.gz](https://storage.googleapis.com/download.tensorflow.org/models/nasnet-a_mobile_04_10_2017.tar.gz)| [127.5,127.5,127.5]| 127.5|
|
||||
|ResidualNet-50 v1| [resnet_v1_50_2016_08_28.tar.gz](http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz)| [103.94,116.78,123.68] | 1 |
|
||||
|ResidualNet-50 v2| [resnet_v2_50_2017_04_14.tar.gz](http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz)| [103.94,116.78,123.68] | 1 |
|
||||
|ResidualNet-101 v1| [resnet_v1_101_2016_08_28.tar.gz](http://download.tensorflow.org/models/resnet_v1_101_2016_08_28.tar.gz)| [103.94,116.78,123.68] | 1 |
|
||||
|ResidualNet-101 v2| [resnet_v2_101_2017_04_14.tar.gz](http://download.tensorflow.org/models/resnet_v2_101_2017_04_14.tar.gz)| [103.94,116.78,123.68] | 1 |
|
||||
|ResidualNet-152 v1| [resnet_v1_152_2016_08_28.tar.gz](http://download.tensorflow.org/models/resnet_v1_152_2016_08_28.tar.gz)| [103.94,116.78,123.68] | 1 |
|
||||
|ResidualNet-152 v2| [resnet_v2_152_2017_04_14.tar.gz](http://download.tensorflow.org/models/resnet_v2_152_2017_04_14.tar.gz)| [103.94,116.78,123.68] | 1 |
|
||||
|VGG-16| [vgg_16_2016_08_28.tar.gz](http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz)| [103.94,116.78,123.68] | 1 |
|
||||
|VGG-19| [vgg_19_2016_08_28.tar.gz](http://download.tensorflow.org/models/vgg_19_2016_08_28.tar.gz)| [103.94,116.78,123.68] | 1 |
|
||||

**Supported Pre-Trained Topologies from TensorFlow 1 Detection Model Zoo**

Detailed information on how to convert models from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md">TensorFlow 1 Detection Model Zoo</a> is available in the [Converting TensorFlow Object Detection API Models](tf_specific/Convert_Object_Detection_API_Models.md) chapter. The table below contains models from the Object Detection Models zoo that are supported.

| Model Name| TensorFlow 1 Object Detection API Models|
|
||||
| :------------- | -----:|
|
||||
|SSD MobileNet V1 COCO\*| [ssd_mobilenet_v1_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz)|
|
||||
|SSD MobileNet V1 0.75 Depth COCO| [ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz)|
|
||||
|SSD MobileNet V1 PPN COCO| [ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.tar.gz](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.tar.gz)|
|
||||
|SSD MobileNet V1 FPN COCO| [ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz)|
|
||||
|SSD ResNet50 FPN COCO| [ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz](http://download.tensorflow.org/models/object_detection/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz)|
|
||||
|SSD MobileNet V2 COCO| [ssd_mobilenet_v2_coco_2018_03_29.tar.gz](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz)|
|
||||
|SSD Lite MobileNet V2 COCO| [ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz](http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz)|
|
||||
|SSD Inception V2 COCO| [ssd_inception_v2_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz)|
|
||||
|RFCN ResNet 101 COCO| [rfcn_resnet101_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/rfcn_resnet101_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN Inception V2 COCO| [faster_rcnn_inception_v2_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN ResNet 50 COCO| [faster_rcnn_resnet50_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet50_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN ResNet 50 Low Proposals COCO| [faster_rcnn_resnet50_lowproposals_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet50_lowproposals_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN ResNet 101 COCO| [faster_rcnn_resnet101_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN ResNet 101 Low Proposals COCO| [faster_rcnn_resnet101_lowproposals_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_lowproposals_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN Inception ResNet V2 COCO| [faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN Inception ResNet V2 Low Proposals COCO| [faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN NasNet COCO| [faster_rcnn_nas_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_nas_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN NasNet Low Proposals COCO| [faster_rcnn_nas_lowproposals_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_nas_lowproposals_coco_2018_01_28.tar.gz)|
|
||||
|Mask R-CNN Inception ResNet V2 COCO| [mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz)|
|
||||
|Mask R-CNN Inception V2 COCO| [mask_rcnn_inception_v2_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/mask_rcnn_inception_v2_coco_2018_01_28.tar.gz)|
|
||||
|Mask R-CNN ResNet 101 COCO| [mask_rcnn_resnet101_atrous_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/mask_rcnn_resnet101_atrous_coco_2018_01_28.tar.gz)|
|
||||
|Mask R-CNN ResNet 50 COCO| [mask_rcnn_resnet50_atrous_coco_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/mask_rcnn_resnet50_atrous_coco_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN ResNet 101 Kitti\*| [faster_rcnn_resnet101_kitti_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_kitti_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN Inception ResNet V2 Open Images\*| [faster_rcnn_inception_resnet_v2_atrous_oid_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_oid_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN Inception ResNet V2 Low Proposals Open Images\*| [faster_rcnn_inception_resnet_v2_atrous_lowproposals_oid_2018_01_28.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_lowproposals_oid_2018_01_28.tar.gz)|
|
||||
|Faster R-CNN ResNet 101 AVA v2.1\*| [faster_rcnn_resnet101_ava_v2.1_2018_04_30.tar.gz](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_ava_v2.1_2018_04_30.tar.gz)|
|
||||
|
||||
**Supported Pre-Trained Topologies from TensorFlow 2 Detection Model Zoo**

Detailed information on how to convert models from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md">TensorFlow 2 Detection Model Zoo</a> is available in the [Converting TensorFlow Object Detection API Models](tf_specific/Convert_Object_Detection_API_Models.md) chapter. The table below contains models from the Object Detection Models zoo that are supported.

| Model Name| TensorFlow 2 Object Detection API Models|
| :------------- | -----:|
| EfficientDet D0 512x512 | [efficientdet_d0_coco17_tpu-32.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d0_coco17_tpu-32.tar.gz)|
| EfficientDet D1 640x640 | [efficientdet_d1_coco17_tpu-32.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d1_coco17_tpu-32.tar.gz)|
| EfficientDet D2 768x768 | [efficientdet_d2_coco17_tpu-32.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d2_coco17_tpu-32.tar.gz)|
| EfficientDet D3 896x896 | [efficientdet_d3_coco17_tpu-32.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d3_coco17_tpu-32.tar.gz)|
| EfficientDet D4 1024x1024 | [efficientdet_d4_coco17_tpu-32.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d4_coco17_tpu-32.tar.gz)|
| EfficientDet D5 1280x1280 | [efficientdet_d5_coco17_tpu-32.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d5_coco17_tpu-32.tar.gz)|
| EfficientDet D6 1280x1280 | [efficientdet_d6_coco17_tpu-32.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d6_coco17_tpu-32.tar.gz)|
| EfficientDet D7 1536x1536 | [efficientdet_d7_coco17_tpu-32.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d7_coco17_tpu-32.tar.gz)|
| SSD MobileNet v2 320x320 | [ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz)|
| SSD MobileNet V1 FPN 640x640 | [ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8.tar.gz)|
| SSD MobileNet V2 FPNLite 320x320 | [ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz)|
| SSD MobileNet V2 FPNLite 640x640 | [ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.tar.gz)|
| SSD ResNet50 V1 FPN 640x640 (RetinaNet50) | [ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz)|
| SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50) | [ssd_resnet50_v1_fpn_1024x1024_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_1024x1024_coco17_tpu-8.tar.gz)|
| SSD ResNet101 V1 FPN 640x640 (RetinaNet101) | [ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz)|
| SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101) | [ssd_resnet101_v1_fpn_1024x1024_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_1024x1024_coco17_tpu-8.tar.gz)|
| SSD ResNet152 V1 FPN 640x640 (RetinaNet152) | [ssd_resnet152_v1_fpn_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet152_v1_fpn_640x640_coco17_tpu-8.tar.gz)|
| SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152) | [ssd_resnet152_v1_fpn_1024x1024_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet152_v1_fpn_1024x1024_coco17_tpu-8.tar.gz)|
| Faster R-CNN ResNet50 V1 640x640 | [faster_rcnn_resnet50_v1_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet50_v1_640x640_coco17_tpu-8.tar.gz)|
| Faster R-CNN ResNet50 V1 1024x1024 | [faster_rcnn_resnet50_v1_1024x1024_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet50_v1_1024x1024_coco17_tpu-8.tar.gz)|
| Faster R-CNN ResNet50 V1 800x1333 | [faster_rcnn_resnet50_v1_800x1333_coco17_gpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet50_v1_800x1333_coco17_gpu-8.tar.gz)|
| Faster R-CNN ResNet101 V1 640x640 | [faster_rcnn_resnet101_v1_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet101_v1_640x640_coco17_tpu-8.tar.gz)|
| Faster R-CNN ResNet101 V1 1024x1024 | [faster_rcnn_resnet101_v1_1024x1024_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet101_v1_1024x1024_coco17_tpu-8.tar.gz)|
| Faster R-CNN ResNet101 V1 800x1333 | [faster_rcnn_resnet101_v1_800x1333_coco17_gpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet101_v1_800x1333_coco17_gpu-8.tar.gz)|
| Faster R-CNN ResNet152 V1 640x640 | [faster_rcnn_resnet152_v1_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet152_v1_640x640_coco17_tpu-8.tar.gz)|
| Faster R-CNN ResNet152 V1 1024x1024 | [faster_rcnn_resnet152_v1_1024x1024_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet152_v1_1024x1024_coco17_tpu-8.tar.gz)|
| Faster R-CNN ResNet152 V1 800x1333 | [faster_rcnn_resnet152_v1_800x1333_coco17_gpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_resnet152_v1_800x1333_coco17_gpu-8.tar.gz)|
| Faster R-CNN Inception ResNet V2 640x640 | [faster_rcnn_inception_resnet_v2_640x640_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_inception_resnet_v2_640x640_coco17_tpu-8.tar.gz)|
| Faster R-CNN Inception ResNet V2 1024x1024 | [faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8.tar.gz)|
| Mask R-CNN Inception ResNet V2 1024x1024 | [mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.tar.gz](http://download.tensorflow.org/models/object_detection/tf2/20200711/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.tar.gz)|

**Supported Frozen Quantized Topologies**

The topologies hosted on the TensorFlow\* Lite [site](https://www.tensorflow.org/lite/guide/hosted_models) are supported as well. The frozen model file (`.pb` file) should be fed to the Model Optimizer.

| Model Name | Frozen Model File |
|:----------------------|---------------------------------------------------------------------------------------------------------------------------------:|
| Mobilenet V1 0.25 128 | [mobilenet_v1_0.25_128_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_128_quant.tgz) |
| Mobilenet V1 0.25 160 | [mobilenet_v1_0.25_160_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_160_quant.tgz) |
| Mobilenet V1 0.25 192 | [mobilenet_v1_0.25_192_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_192_quant.tgz) |
| Mobilenet V1 0.25 224 | [mobilenet_v1_0.25_224_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_224_quant.tgz) |
| Mobilenet V1 0.50 128 | [mobilenet_v1_0.5_128_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.5_128_quant.tgz) |
| Mobilenet V1 0.50 160 | [mobilenet_v1_0.5_160_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.5_160_quant.tgz) |
| Mobilenet V1 0.50 192 | [mobilenet_v1_0.5_192_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.5_192_quant.tgz) |
| Mobilenet V1 0.50 224 | [mobilenet_v1_0.5_224_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.5_224_quant.tgz) |
| Mobilenet V1 0.75 128 | [mobilenet_v1_0.75_128_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.75_128_quant.tgz) |
| Mobilenet V1 0.75 160 | [mobilenet_v1_0.75_160_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.75_160_quant.tgz) |
| Mobilenet V1 0.75 192 | [mobilenet_v1_0.75_192_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.75_192_quant.tgz) |
| Mobilenet V1 0.75 224 | [mobilenet_v1_0.75_224_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.75_224_quant.tgz) |
| Mobilenet V1 1.0 128 | [mobilenet_v1_1.0_128_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_128_quant.tgz) |
| Mobilenet V1 1.0 160 | [mobilenet_v1_1.0_160_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_160_quant.tgz) |
| Mobilenet V1 1.0 192 | [mobilenet_v1_1.0_192_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_192_quant.tgz) |
| Mobilenet V1 1.0 224 | [mobilenet_v1_1.0_224_quant.tgz](http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz) |
| Mobilenet V2 1.0 224 | [mobilenet_v2_1.0_224_quant.tgz](http://download.tensorflow.org/models/tflite_11_05_08/mobilenet_v2_1.0_224_quant.tgz) |
| Inception V1 | [inception_v1_224_quant_20181026.tgz](http://download.tensorflow.org/models/inception_v1_224_quant_20181026.tgz) |
| Inception V2 | [inception_v2_224_quant_20181026.tgz](http://download.tensorflow.org/models/inception_v2_224_quant_20181026.tgz) |
| Inception V3 | [inception_v3_quant.tgz](http://download.tensorflow.org/models/tflite_11_05_08/inception_v3_quant.tgz) |
| Inception V4 | [inception_v4_299_quant_20181026.tgz](http://download.tensorflow.org/models/inception_v4_299_quant_20181026.tgz) |

To convert some of the models from the list above, you need to specify the following Model Optimizer command line parameters: `--input input --input_shape [1,HEIGHT,WIDTH,3]`, where `HEIGHT` and `WIDTH` are the height and width of the input images for which the model was trained.
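
For example, a possible command for the Mobilenet V1 1.0 224 quantized topology from the table above could look as follows; the name of the frozen graph file extracted from the archive is an assumption here, shown only for illustration:
```sh
# mobilenet_v1_1.0_224_quant_frozen.pb is assumed to be the frozen graph extracted from mobilenet_v1_1.0_224_quant.tgz
mo --input_model mobilenet_v1_1.0_224_quant_frozen.pb --input input --input_shape [1,224,224,3]
```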

**Other supported topologies**

| Model Name| Repository |
| :------------- | -----:|
| ResNext | [Repo](https://github.com/taki0112/ResNeXt-Tensorflow)|
| DenseNet | [Repo](https://github.com/taki0112/Densenet-Tensorflow)|
| CRNN | [Repo](https://github.com/MaybeShewill-CV/CRNN_Tensorflow) |
| NCF | [Repo](https://github.com/tensorflow/models/tree/master/official/recommendation) |
| lm_1b | [Repo](https://github.com/tensorflow/models/tree/master/research/lm_1b) |
| DeepSpeech | [Repo](https://github.com/mozilla/DeepSpeech) |
| A3C | [Repo](https://github.com/miyosuda/async_deep_reinforce) |
| VDCNN | [Repo](https://github.com/WenchenLi/VDCNN) |
| Unet | [Repo](https://github.com/kkweon/UNet-in-Tensorflow) |
| Keras-TCN | [Repo](https://github.com/philipperemy/keras-tcn) |
| PRNet | [Repo](https://github.com/YadiraF/PRNet) |
| YOLOv4 | [Repo](https://github.com/Ma-Dan/keras-yolo4) |
| STN | [Repo](https://github.com/oarriaga/STN.keras) |

* YOLO topologies from DarkNet* can be converted using [these instructions](tf_specific/Convert_YOLO_From_Tensorflow.md).
* FaceNet topologies can be converted using [these instructions](tf_specific/Convert_FaceNet_From_Tensorflow.md).
* CRNN topologies can be converted using [these instructions](tf_specific/Convert_CRNN_From_Tensorflow.md).
* NCF topologies can be converted using [these instructions](tf_specific/Convert_NCF_From_Tensorflow.md).
* [GNMT](https://github.com/tensorflow/nmt) topology can be converted using [these instructions](tf_specific/Convert_GNMT_From_Tensorflow.md).
* [BERT](https://github.com/google-research/bert) topology can be converted using [these instructions](tf_specific/Convert_BERT_From_Tensorflow.md).
* [XLNet](https://github.com/zihangdai/xlnet) topology can be converted using [these instructions](tf_specific/Convert_XLNet_From_Tensorflow.md).
* [Attention OCR](https://github.com/emedvedev/attention-ocr) topology can be converted using [these instructions](tf_specific/Convert_AttentionOCR_From_Tensorflow.md).

5. [Integrate OpenVINO Runtime](../../../OV_Runtime_UG/integrate_with_your_application.md) in your application to deploy the model in the target environment.

## Loading Non-Frozen Models to the Model Optimizer <a name="loading-nonfrozen-models"></a>

@ -339,16 +157,6 @@ TensorFlow*-specific parameters:
mo --input_model inception_v1.pbtxt --input_model_is_text -b 1 --output_dir <OUTPUT_MODEL_DIR>
```

* Launching the Model Optimizer for the Inception V1 frozen model, updating the custom sub-graph replacement file `transform.json` with information about the input and output nodes of the matched sub-graph, and specifying a writable output directory. For more information about this feature, refer to [Sub-Graph Replacement in the Model Optimizer](../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
```sh
mo --input_model inception_v1.pb -b 1 --tensorflow_custom_operations_config_update transform.json --output_dir <OUTPUT_MODEL_DIR>
```

* Launching the Model Optimizer for the Inception V1 frozen model, using the custom sub-graph replacement file `transform.json` for model conversion. For more information about this feature, refer to [Sub-Graph Replacement in the Model Optimizer](../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
```sh
mo --input_model inception_v1.pb -b 1 --transformations_config transform.json --output_dir <OUTPUT_MODEL_DIR>
```

* Launching the Model Optimizer for the Inception V1 frozen model and dumping information about the graph to the TensorBoard log directory `/tmp/log_dir`:
```sh
mo --input_model inception_v1.pb -b 1 --tensorboard_logdir /tmp/log_dir --output_dir <OUTPUT_MODEL_DIR>
@ -451,3 +259,6 @@ In this document, you learned:
* Which TensorFlow models are supported
* How to freeze a TensorFlow model
* How to convert a trained TensorFlow model using the Model Optimizer with both framework-agnostic and TensorFlow-specific command-line options

## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)

@ -0,0 +1,41 @@

# Model Conversion Tutorials {#openvino_docs_MO_DG_prepare_model_convert_model_tutorials}

@sphinxdirective

.. toctree::
:maxdepth: 1
:hidden:

openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_AttentionOCR_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_BERT_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_CRNN_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_DeepSpeech_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_EfficientDet_Models
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_FaceNet_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_GNMT_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_lm_1b_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_NCF_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_RetinaNet_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Slim_Library_Models
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_WideAndDeep_Family_Models
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_XLNet_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow
openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN
openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2
openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_F3Net
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_QuartzNet
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RNNT
openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT
openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models
openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet
openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model

@endsphinxdirective

This section provides you with a set of tutorials that demonstrate conversion steps for specific TensorFlow, ONNX, PyTorch, MXNet and Kaldi models.

You can also find a collection of [Python tutorials](../../../tutorials.md) written for running on Jupyter* notebooks that provide an introduction to the OpenVINO™ toolkit and explain how to use the Python API and tools for optimized deep learning inference.

@ -1,296 +1,32 @@

# Converting a Model to Intermediate Representation (IR) {#openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model}
# General Conversion Parameters {#openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model}

@sphinxdirective

.. toctree::
:maxdepth: 1
:hidden:

openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques
openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers
openvino_docs_MO_DG_prepare_model_convert_model_IR_suitable_for_INT8_inference
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Subgraph_Replacement_Model_Optimizer
openvino_docs_MO_DG_prepare_model_convert_model_Legacy_IR_Layers_Catalog_Spec

@endsphinxdirective

To convert the model to the Intermediate Representation (IR), run Model Optimizer using the following command:
To get the full list of general (framework-agnostic) conversion parameters available in Model Optimizer, run the following command:

```sh
mo --input_model INPUT_MODEL
mo --help
```

The output directory must have write permissions, so you can run Model Optimizer from the output directory or specify an output path with the `--output_dir` option.
Paragraphs below provide useful details on relevant parameters.

> **NOTE**: The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion by specifying the command-line parameter `--reverse_input_channels`. Otherwise, inference results may be incorrect. For details, refer to [When to Reverse Input Channels](#when_to_reverse_input_channels).

To adjust the conversion process, you may use general parameters defined in the [General Conversion Parameters](#general_conversion_parameters) and
framework-specific parameters for:
* [Caffe](Convert_Model_From_Caffe.md)
* [TensorFlow](Convert_Model_From_TensorFlow.md)
* [MXNet](Convert_Model_From_MxNet.md)
* [ONNX](Convert_Model_From_ONNX.md)
* [PaddlePaddle](Convert_Model_From_Paddle.md)
* [Kaldi](Convert_Model_From_Kaldi.md)

## General Conversion Parameters

To adjust the conversion process, you can also use the general (framework-agnostic) parameters:

```sh
optional arguments:
-h, --help show this help message and exit
--framework {tf,caffe,mxnet,kaldi,onnx}
Name of the framework used to train the input model.

Framework-agnostic parameters:
--input_model INPUT_MODEL, -w INPUT_MODEL, -m INPUT_MODEL
Tensorflow*: a file with a pre-trained model (binary
or text .pb file after freezing). Caffe*: a model
proto file with model weights
--model_name MODEL_NAME, -n MODEL_NAME
Model_name parameter passed to the final create_ir
transform. This parameter is used to name a network in
a generated IR and output .xml/.bin files.
--output_dir OUTPUT_DIR, -o OUTPUT_DIR
Directory that stores the generated IR. By default, it
is the directory from where the Model Optimizer is
launched.
--input_shape INPUT_SHAPE
Input shape(s) that should be fed to an input node(s)
of the model. Shape is defined as a comma-separated
list of integer numbers enclosed in parentheses or
square brackets, for example [1,3,227,227] or
(1,227,227,3), where the order of dimensions depends
on the framework input layout of the model. For
example, [N,C,H,W] is used for ONNX* models and
[N,H,W,C] for TensorFlow* models. The shape can contain
undefined dimensions (? or -1) and should fit the dimensions
defined in the input operation of the graph. Boundaries
of undefined dimension can be specified with ellipsis,
for example [1,1..10,128,128]. One boundary can be undefined,
for example [1,..100] or [1,3,1..,1..]. If there
are multiple inputs in the model, --input_shape should
contain definition of shape for each input separated
by a comma, for example: [1,3,227,227],[2,4] for a
model with two inputs with 4D and 2D shapes.
Alternatively, specify shapes with the --input
option.
--scale SCALE, -s SCALE
All input values coming from original network inputs
will be divided by this value. When a list of inputs
is overridden by the --input parameter, this scale is
not applied for any input that does not match with the
original input of the model.
If both --mean and --scale are specified,
the mean is subtracted first and then scale is applied
regardless of the order of options in command line.
--reverse_input_channels
Switch the input channels order from RGB to BGR (or
vice versa). Applied to original inputs of the model
if and only if a number of channels equals 3.
When --mean_values/--scale_values are also specified,
reversing of channels will be applied to user's input
data first, so that numbers in --mean_values and
--scale_values go in the order of channels used in
the original model. In other words, if both options are
specified then the data flow in the model looks as following:
Parameter -> ReverseInputChannels -> Mean/Scale apply -> the original body of the model.
--log_level {CRITICAL,ERROR,WARN,WARNING,INFO,DEBUG,NOTSET}
Logger level
--input INPUT Quoted list of comma-separated input nodes names with shapes,
data types, and values for freezing. The order of inputs in converted
model is the same as order of specified operation names. The shape and value are
specified as space-separated lists. The data type of input
node is specified in braces and can have one of the values:
f64 (float64), f32 (float32), f16 (float16), i64 (int64),
i32 (int32), u8 (uint8), boolean (bool). Data type is optional.
If it's not specified explicitly then there are two options:
if input node is a parameter, data type is taken from the
original node dtype, if input node is not a parameter, data type
is set to f32. Example, to set `input_1` with shape [1 100],
and Parameter node `sequence_len` with scalar input with value `150`,
and boolean input `is_training` with `False` value use the
following format: "input_1[1 10],sequence_len->150,is_training->False".
Another example, use the following format to set input port 0
of the node `node_name1` with the shape [3 4] as an input node
and freeze output port 1 of the node `node_name2` with the
value [20 15] of the int32 type and shape [2]:
"0:node_name1[3 4],node_name2:1[2]{i32}->[20 15]".
--output OUTPUT The name of the output operation of the model. For
TensorFlow*, do not add :0 to this name.
The order of outputs in converted model is the same as order of
specified operation names.
--mean_values MEAN_VALUES, -ms MEAN_VALUES
Mean values to be used for the input image per
channel. Values to be provided in the (R,G,B) or
[R,G,B] format. Can be defined for desired input of
the model, for example: "--mean_values
data[255,255,255],info[255,255,255]". The exact
meaning and order of channels depend on how the
original model was trained.
--scale_values SCALE_VALUES
Scale values to be used for the input image per
channel. Values are provided in the (R,G,B) or [R,G,B]
format. Can be defined for desired input of the model,
for example: "--scale_values
data[255,255,255],info[255,255,255]". The exact
meaning and order of channels depend on how the
original model was trained.
If both --mean_values and --scale_values are specified,
the mean is subtracted first and then scale is applied
regardless of the order of options in command line.
--data_type {FP16,FP32,half,float}
Data type for all intermediate tensors and weights. If
original model is in FP32 and --data_type=FP16 is
specified, all model weights and biases are compressed
to FP16.
--disable_fusing [DEPRECATED] Turn off fusing of linear operations to Convolution.
--disable_resnet_optimization
[DEPRECATED] Turn off ResNet optimization.
--finegrain_fusing FINEGRAIN_FUSING
[DEPRECATED] Regex for layers/operations that won't be fused.
Example: --finegrain_fusing Convolution1,.*Scale.*
--enable_concat_optimization
Turn on Concat optimization.
--extensions EXTENSIONS
Directory or a comma separated list of directories
with extensions. To disable all extensions including
those that are placed at the default location, pass an
empty string.
--batch BATCH, -b BATCH
Input batch size
--version Version of Model Optimizer
--silent Prevent any output messages except those that
correspond to log level equals ERROR, that can be set
with the following option: --log_level. By default,
log level is already ERROR.
--freeze_placeholder_with_value FREEZE_PLACEHOLDER_WITH_VALUE
Replaces input layer with constant node with provided
value, for example: "node_name->True". It will be
DEPRECATED in future releases. Use --input option to
specify a value for freezing.
--static_shape Enables IR generation for fixed input shape (folding
`ShapeOf` operations and shape-calculating sub-graphs
to `Constant`). Changing model input shape using
the OpenVINO Runtime API in runtime may fail for such an IR.
--disable_weights_compression
[DEPRECATED] Disable compression and store weights with original
precision.
--progress Enable model conversion progress display.
--stream_output Switch model conversion progress display to a
multiline mode.
--transformations_config TRANSFORMATIONS_CONFIG
Use the configuration file with transformations
description.
--use_new_frontend Force the usage of new Frontend of Model Optimizer for model conversion into IR.
The new Frontend is C++ based and is available for ONNX* and PaddlePaddle* models.
Model optimizer uses new Frontend for ONNX* and PaddlePaddle* by default that means
`--use_new_frontend` and `--use_legacy_frontend` options are not specified.
--use_legacy_frontend Force the usage of legacy Frontend of Model Optimizer for model conversion into IR.
The legacy Frontend is Python based and is available for TensorFlow*, ONNX*, MXNet*,
Caffe*, and Kaldi* models.
```

The sections below provide details on using particular parameters and examples of CLI commands.

## When to Specify Mean and Scale Values

Usually neural network models are trained with normalized input data. This means that the input data values are converted to be in a specific range, for example, `[0, 1]` or `[-1, 1]`. Sometimes the mean values (mean images) are subtracted from the input data values as part of the pre-processing. There are two cases of how the input data pre-processing is implemented.
* The input pre-processing operations are a part of a topology. In this case, the application that uses the framework to infer the topology does not pre-process the input.
* The input pre-processing operations are not a part of a topology and the pre-processing is performed within the application which feeds the model with input data.

In the first case, the Model Optimizer generates the IR with required pre-processing operations and OpenVINO Samples may be used to infer the model.

In the second case, information about mean/scale values should be provided to the Model Optimizer to embed it to the generated IR. Model Optimizer provides a number of command line parameters to specify them: `--scale`, `--scale_values`, `--mean_values`, `--mean_file`.
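
For instance, if the model documentation states that the input images were normalized to the `[-1, 1]` range by subtracting 127.5 from each channel and then dividing by 127.5, the corresponding values could be embedded into the IR as shown in the sketch below (the values are purely illustrative and must be taken from your model's documentation):
```sh
mo --input_model INPUT_MODEL --mean_values [127.5,127.5,127.5] --scale_values [127.5,127.5,127.5]
```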

> **NOTE**: If both mean and scale values are specified, the mean is subtracted first and then the scale is applied, regardless of the order of options in the command line. Input values are *divided* by the scale value(s). If the `--reverse_input_channels` option is also used, the channel reversal is applied first, then the mean, and after that the scale.

There is no universal recipe for determining the mean/scale values for a particular model. The steps below could help to determine them:
* Read the model documentation. Usually the documentation describes mean/scale values if pre-processing is required.
* Open the example script/application executing the model and track how the input data is read and passed to the framework.
* Open the model in a visualization tool and check for layers performing subtraction or multiplication (such as `Sub`, `Mul`, `ScaleShift`, `Eltwise`) of the input data. If such layers exist, pre-processing is most probably a part of the model.

## When to Specify Input Shapes <a name="when_to_specify_input_shapes"></a>
## When to Specify --input_shape Command Line Parameter <a name="when_to_specify_input_shapes"></a>

There are situations when Model Optimizer is unable to deduce input shapes of the model, for example, in case of model cutting due to unsupported operations.
The solution is to provide input shapes of a static rank explicitly.
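
As a minimal sketch, assuming an input named `data` (the input name and shape here are placeholders, not taken from a particular model), a static shape can be provided explicitly as follows:
```sh
mo --input_model INPUT_MODEL --input data --input_shape [1,3,224,224]
```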

## When to Reverse Input Channels <a name="when_to_reverse_input_channels"></a>

Input data for your application can be in either RGB or BGR channel order. For example, OpenVINO Samples load input images in the BGR channel order. However, the model may be trained on images loaded in the opposite order (for example, most TensorFlow\* models are trained with images in RGB order). In this case, inference results using the OpenVINO Samples may be incorrect. The solution is to provide the `--reverse_input_channels` command line parameter. With this parameter, the Model Optimizer modifies the weights of the first convolution or another channel-dependent operation so that its output is the same as if the image were passed in the RGB channel order.
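
A minimal sketch of such a conversion command:
```sh
mo --input_model INPUT_MODEL --reverse_input_channels
```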

## When to Specify `--static_shape` Command Line Parameter
## When to Specify --static_shape Command Line Parameter

If the `--static_shape` command line parameter is specified, the Model Optimizer evaluates shapes of all operations in the model (shape propagation) for a fixed input shape(s). During the shape propagation, the Model Optimizer evaluates *Shape* operations and removes them from the computation graph. With that approach, the initial model which can consume inputs of different shapes may be converted to an IR working with an input of one fixed shape only. For example, consider the case when some blob is reshaped from 4D of a shape *[N, C, H, W]* to a shape *[N, C, H \* W]*. During the model conversion, the Model Optimizer calculates the output shape as a constant 1D blob with values *[N, C, H \* W]*. So if the input shape changes to some other value *[N,C,H1,W1]* (a possible scenario for a fully convolutional model), then the reshape layer becomes invalid.
The resulting Intermediate Representation will not be resizable with the OpenVINO Runtime API.
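
A hedged example of such a conversion, with the input shape folded into the IR (the shape value is illustrative):
```sh
mo --input_model INPUT_MODEL --input_shape [1,3,224,224] --static_shape
```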

## Examples of CLI Commands

Launch the Model Optimizer for the Caffe bvlc_alexnet model with debug log level:
```sh
mo --input_model bvlc_alexnet.caffemodel --log_level DEBUG
```

Launch the Model Optimizer for the Caffe bvlc_alexnet model with the output IR called `result.*` in the specified `output_dir`:
```sh
mo --input_model bvlc_alexnet.caffemodel --model_name result --output_dir <OUTPUT_MODEL_DIR>
```

Launch the Model Optimizer for the Caffe bvlc_alexnet model with one input with scale values:
```sh
mo --input_model bvlc_alexnet.caffemodel --scale_values [59,59,59]
```

Launch the Model Optimizer for the Caffe bvlc_alexnet model with multiple inputs with scale values:
```sh
mo --input_model bvlc_alexnet.caffemodel --input data,rois --scale_values [59,59,59],[5,5,5]
```

Launch the Model Optimizer for the Caffe bvlc_alexnet model with multiple inputs with scale and mean values specified for the particular nodes:
```sh
mo --input_model bvlc_alexnet.caffemodel --input data,rois --mean_values data[59,59,59] --scale_values rois[5,5,5]
```

Launch the Model Optimizer for the Caffe bvlc_alexnet model with a specified input layer, overridden input shape, scale 5, batch 8 and a specified name of an output operation:
```sh
mo --input_model bvlc_alexnet.caffemodel --input data --output pool5 -s 5 -b 8
```

Launch the Model Optimizer for the Caffe bvlc_alexnet model with reversed input channels order between RGB and BGR, specified mean values to be used for the input image per channel, and a specified data type for input tensor values:
```sh
mo --input_model bvlc_alexnet.caffemodel --reverse_input_channels --mean_values [255,255,255] --data_type FP16
```

Launch the Model Optimizer for the Caffe bvlc_alexnet model with extensions listed in specified directories and a specified mean_images binaryproto file. For more information about extensions, please refer to the [OpenVINO™ Extensibility Mechanism](../../../Extensibility_UG/Intro.md).
```sh
mo --input_model bvlc_alexnet.caffemodel --extensions /home/,/some/other/path/ --mean_file /path/to/binaryproto
```

Launch the Model Optimizer for the TensorFlow* FaceNet* model with a placeholder freezing value.
It replaces the placeholder with a constant layer that contains the passed value.
For more information about FaceNet conversion, please refer to [this](tf_specific/Convert_FaceNet_From_Tensorflow.md) page.
```sh
mo --input_model FaceNet.pb --input "phase_train->False"
```

Launch the Model Optimizer for any model with a placeholder freezing tensor of values.
It replaces the placeholder with a constant layer that contains the passed values.

The tensor here is represented in square brackets with each value separated from another by a whitespace.
If the data type is set in the model, this tensor will be reshaped to the placeholder shape and cast to the placeholder data type.
Otherwise, it will be cast to the data type passed to the `--data_type` parameter (by default, it is FP32).
```sh
mo --input_model FaceNet.pb --input "placeholder_layer_name->[0.1 1.2 2.3]"
```

## Parameters for Pre-Processing

Input data may require pre-processing such as `RGB<->BGR` conversion and mean and scale normalization. To learn about Model Optimizer parameters used for pre-processing, refer to [Optimize Preprocessing Computation](../Additional_Optimizations.md).

## See Also
* [Configuring the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md)
* [IR Notation Reference](../../IR_and_opsets.md)
* [Model Optimizer Extensibility](../customize_model_optimizer/Customize_Model_Optimizer.md)
* [Model Cutting](Cutting_Model.md)
* [Optimize Preprocessing Computation](../Additional_Optimizations.md)
* [Convert TensorFlow Models](Convert_Model_From_TensorFlow.md)
* [Convert ONNX Models](Convert_Model_From_ONNX.md)
* [Convert PyTorch Models](Convert_Model_From_PyTorch.md)
* [Convert PaddlePaddle Models](Convert_Model_From_Paddle.md)
* [Convert MXNet Models](Convert_Model_From_MxNet.md)
* [Convert Caffe Models](Convert_Model_From_Caffe.md)
* [Convert Kaldi Models](Convert_Model_From_Kaldi.md)

File diff suppressed because it is too large
@ -1,4 +1,4 @@
# Convert Kaldi* ASpIRE Chain Time Delay Neural Network (TDNN) Model to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model}
# Convert Kaldi* ASpIRE Chain Time Delay Neural Network (TDNN) Model {#openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model}

You can [download a pre-trained model](https://kaldi-asr.org/models/1/0001_aspire_chain_model.tar.gz)
for the ASpIRE Chain Time Delay Neural Network (TDNN) from the Kaldi* project official website.

@ -1,4 +1,4 @@
# Converting GluonCV* Models {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models}
# Convert MXNet GluonCV* Models {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models}

This document provides the instructions and examples on how to use Model Optimizer to convert [GluonCV SSD and YOLO-v3 models](https://gluon-cv.mxnet.io/model_zoo/detection.html) to IR.

@ -1,4 +1,4 @@
# Converting a Style Transfer Model from MXNet* {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet}
# Convert MXNet Style Transfer Model {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet}

The tutorial explains how to generate a model for style transfer using the public MXNet\* neural style transfer sample.
To use the style transfer sample from OpenVINO™, follow the steps below as no public pre-trained style transfer model is provided with the OpenVINO toolkit.

@ -1,32 +0,0 @@
[DEPRECATED] Convert ONNX* DLRM to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_DLRM}
===============================

> **NOTE**: These instructions are currently deprecated. Since OpenVINO™ 2020.4 version, no specific steps are needed to convert ONNX\* DLRM models. For general instructions on converting ONNX models, please refer to [Converting a ONNX* Model](../Convert_Model_From_ONNX.md) topic.

These instructions are applicable only to the DLRM converted to the ONNX* file format from the [facebookresearch/dlrm model](https://github.com/facebookresearch/dlrm).

**Step 1**. Save the trained PyTorch* model to the ONNX* format or download a pretrained ONNX* model from the
[MLCommons/inference/recommendation/dlrm](https://github.com/mlcommons/inference/tree/r1.0/recommendation/dlrm/pytorch#supported-models) repository.
If you train the model using the [script provided in the model repository](https://github.com/facebookresearch/dlrm/blob/master/dlrm_s_pytorch.py), just add the `--save-onnx` flag to the command line parameters and you will get the `dlrm_s_pytorch.onnx` file containing the model serialized in the ONNX* format.

**Step 2**. To generate the Intermediate Representation (IR) of the model, change your current working directory to the Model Optimizer installation directory and run the Model Optimizer with the following parameters:
```sh
mo --input_model dlrm_s_pytorch.onnx
```

Note that the PyTorch model uses the `torch.nn.EmbeddingBag` operation. This operation is exported to ONNX as a custom `ATen` layer, which is not directly supported by OpenVINO™, but it is possible to convert this operation to:
* `Gather` if each "bag" consists of exactly one index. In this case, the `offsets` input becomes obsolete and is removed during conversion.
* `ExperimentalSparseWeightedSum` if "bags" contain more than one index. In this case, the Model Optimizer prints a warning that pre-processing of the offsets is needed, because `ExperimentalSparseWeightedSum` and `torch.nn.EmbeddingBag` have different input formats.
For example, if you have an `indices` input of shape `[indices_shape]` and an `offsets` input of shape `[num_bags]`, you need to get offsets of shape `[indices_shape, 2]`. To do that, you may use the following code snippet:
```python
import numpy as np

# indices and offsets are the original inputs of the torch.nn.EmbeddingBag operation
new_offsets = np.zeros((indices.shape[-1], 2), dtype=np.int32)
new_offsets[:, 1] = np.arange(indices.shape[-1])
bag_index = 0
for i in range(offsets.shape[-1] - 1):
    new_offsets[offsets[i]:offsets[i + 1], 0] = bag_index
    bag_index += 1
new_offsets[offsets[-1]:, 0] = bag_index
```
If you have more than one `torch.nn.EmbeddingBag` operation, you need to do that for every offsets input. If your offsets have the same shape, they will be merged into one input of shape `[num_embedding_bags, indices_shape, 2]`.
@ -1,4 +1,4 @@
|
||||
# Convert ONNX* Faster R-CNN Model to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN}
|
||||
# Convert ONNX* Faster R-CNN Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN}
|
||||
|
||||
These instructions are applicable only to the Faster R-CNN model converted to the ONNX* file format from the [facebookresearch/maskrcnn-benchmark model](https://github.com/facebookresearch/maskrcnn-benchmark).
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert ONNX* GPT-2 Model to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2}
|
||||
# Convert ONNX* GPT-2 Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2}
|
||||
|
||||
[Public pre-trained GPT-2 model](https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2) is a large
|
||||
transformer-based language model with a simple objective: predict the next word, given all of the previous words within some text.
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert ONNX* Mask R-CNN Model to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN}
|
||||
# Convert ONNX* Mask R-CNN Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN}
|
||||
|
||||
These instructions are applicable only to the Mask R-CNN model converted to the ONNX* file format from the [facebookresearch/maskrcnn-benchmark model](https://github.com/facebookresearch/maskrcnn-benchmark).
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert PyTorch* BERT-NER to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner}
|
||||
# Convert PyTorch* BERT-NER Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner}
|
||||
|
||||
## Download and Convert the Model to ONNX*
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert PyTorch* F3Net to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_F3Net}
|
||||
# Convert PyTorch* F3Net Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_F3Net}
|
||||
|
||||
[F3Net](https://github.com/weijun88/F3Net): Fusion, Feedback and Focus for Salient Object Detection
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert PyTorch* QuartzNet to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_QuartzNet}
|
||||
# Convert PyTorch* QuartzNet Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_QuartzNet}
|
||||
|
||||
[NeMo project](https://github.com/NVIDIA/NeMo) provides the QuartzNet model.
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert PyTorch* RCAN to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN}
|
||||
# Convert PyTorch* RCAN Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN}
|
||||
|
||||
[RCAN](https://github.com/yulunzhang/RCAN): Image Super-Resolution Using Very Deep Residual Channel Attention Networks
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert PyTorch\* RNN-T Model to the Intermediate Representation (IR) {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RNNT}
|
||||
# Convert PyTorch* RNN-T Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RNNT}
|
||||
|
||||
This instruction covers conversion of RNN-T model from [MLCommons](https://github.com/mlcommons) repository. Follow
|
||||
the steps below to export a PyTorch* model into ONNX* before converting it to IR:
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert PyTorch* YOLACT to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT}
|
||||
# Convert PyTorch* YOLACT Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT}
|
||||
|
||||
You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model for real-time instance segmentation.
|
||||
The PyTorch\* implementation is publicly available in [this GitHub* repository](https://github.com/dbolya/yolact).
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert TensorFlow* Attention OCR Model to Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_AttentionOCR_From_Tensorflow}
|
||||
# Convert TensorFlow Attention OCR Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_AttentionOCR_From_Tensorflow}
|
||||
|
||||
This tutorial explains how to convert the Attention OCR (AOCR) model from the [TensorFlow* Attention OCR repository](https://github.com/emedvedev/attention-ocr) to the Intermediate Representation (IR).
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert TensorFlow* BERT Model to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_BERT_From_Tensorflow}
|
||||
# Convert TensorFlow BERT Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_BERT_From_Tensorflow}
|
||||
|
||||
Pre-trained models for BERT (Bidirectional Encoder Representations from Transformers) are
|
||||
[publicly available](https://github.com/google-research/bert).
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert CRNN* Models to the Intermediate Representation (IR) {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_CRNN_From_Tensorflow}
|
||||
# Convert TensorFlow CRNN Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_CRNN_From_Tensorflow}
|
||||
|
||||
This tutorial explains how to convert a CRNN model to Intermediate Representation (IR).
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert TensorFlow* DeepSpeech Model to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_DeepSpeech_From_Tensorflow}
|
||||
# Convert TensorFlow DeepSpeech Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_DeepSpeech_From_Tensorflow}
|
||||
|
||||
[DeepSpeech project](https://github.com/mozilla/DeepSpeech) provides an engine to train speech-to-text models.
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Converting EfficientDet Models from TensorFlow {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_EfficientDet_Models}
|
||||
# Convert TensorFlow EfficientDet Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_EfficientDet_Models}
|
||||
|
||||
This tutorial explains how to convert EfficientDet\* public object detection models to the Intermediate Representation (IR).
|
||||
|
||||
@ -91,8 +91,3 @@ The output of the IR is a list of 7-element tuples: `[image_id, class_id, confid
|
||||
* `y_max` -- normalized `y` coordinate of the upper right corner of the detected object.
|
||||
|
||||
The first element with `image_id = -1` means end of data.
|
||||
|
||||
---
|
||||
## See Also
|
||||
|
||||
* [Sub-Graph Replacement in Model Optimizer](../../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert TensorFlow* FaceNet Models to Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_FaceNet_From_Tensorflow}
|
||||
# Convert TensorFlow FaceNet Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_FaceNet_From_Tensorflow}
|
||||
|
||||
[Public pre-trained FaceNet models](https://github.com/davidsandberg/facenet#pre-trained-models) contain both training
|
||||
and inference part of graph. Switch between this two states is manageable with placeholder value.
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert GNMT* Model to the Intermediate Representation (IR) {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_GNMT_From_Tensorflow}
|
||||
# Convert TensorFlow GNMT Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_GNMT_From_Tensorflow}
|
||||
|
||||
This tutorial explains how to convert Google\* Neural Machine Translation (GNMT) model to the Intermediate Representation (IR).
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Convert Neural Collaborative Filtering Model from TensorFlow* to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_NCF_From_Tensorflow}
|
||||
# Convert TensorFlow Neural Collaborative Filtering Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_NCF_From_Tensorflow}
|
||||
|
||||
This tutorial explains how to convert Neural Collaborative Filtering (NCF) model to Intermediate Representation (IR).
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Converting TensorFlow* Object Detection API Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models}
|
||||
# Convert TensorFlow Object Detection API Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models}
|
||||
|
||||
> **NOTES**:
|
||||
> * Starting with the 2022.1 release, the Model Optimizer can convert the TensorFlow\* Object Detection API Faster and Mask RCNNs topologies differently. By default, the Model Optimizer adds operation "Proposal" to the generated IR. This operation needs an additional input to the model with name "image_info" which should be fed with several values describing the pre-processing applied to the input image (refer to the [Proposal](../../../../ops/detection/Proposal_4.md) operation specification for more information). However, this input is redundant for the models trained and inferred with equal size images. Model Optimizer can generate IR for such models and insert operation [DetectionOutput](../../../../ops/detection/DetectionOutput_1.md) instead of `Proposal`. The `DetectionOutput` operation does not require additional model input "image_info" and moreover, for some models the produced inference results are closer to the original TensorFlow\* model. In order to trigger new behaviour the attribute "operation_to_add" in the corresponding JSON transformation configuration file should be set to value "DetectionOutput" instead of default one "Proposal".
@ -128,7 +128,7 @@ Models with `keep_aspect_ratio_resizer` were trained to recognize object in real
## Detailed Explanations of Model Conversion Process

This section is intended for users who want to understand how the Model Optimizer performs Object Detection API models conversion in details. The knowledge given in this section is also useful for users having complex models that are not converted with the Model Optimizer out of the box. It is highly recommended to read [Sub-Graph Replacement in Model Optimizer](../../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md) chapter first to understand sub-graph replacement concepts which are used here.
This section is intended for users who want to understand how the Model Optimizer performs the conversion of Object Detection API models in detail. The knowledge given in this section is also useful for users with complex models that are not converted by the Model Optimizer out of the box. It is highly recommended to read the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](../../customize_model_optimizer/Customize_Model_Optimizer.md) documentation first to understand the sub-graph replacement concepts used here.

It is also important to open the model in [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard) to see the topology structure. The Model Optimizer can create an event file that can then be fed to the TensorBoard* tool. Run the Model Optimizer with two command-line parameters:
* `--input_model <path_to_frozen.pb>` --- Path to the frozen model
@ -1,4 +1,4 @@
# Converting RetinaNet Model from TensorFlow* to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_RetinaNet_From_Tensorflow}
# Converting TensorFlow RetinaNet Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_RetinaNet_From_Tensorflow}

This tutorial explains how to convert a RetinaNet model to the Intermediate Representation (IR).
@ -1,4 +1,4 @@
# Converting TensorFlow*-Slim Image Classification Model Library Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Slim_Library_Models}
# Convert TensorFlow Slim Image Classification Model Library Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Slim_Library_Models}

<a href="https://github.com/tensorflow/models/tree/master/research/slim/README.md">TensorFlow\*-Slim Image Classification Model Library</a> is a library to define, train and evaluate classification models in TensorFlow\*. The library contains Python scripts defining the classification topologies together with checkpoint files for several pre-trained classification topologies. To convert a TensorFlow\*-Slim library model, complete the following steps:
@ -1,4 +1,4 @@
# Converting TensorFlow* Wide and Deep Family Models to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_WideAndDeep_Family_Models}
# Convert TensorFlow Wide and Deep Family Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_WideAndDeep_Family_Models}

The Wide and Deep models are a combination of wide and deep parts for memorization and generalization of object features, respectively.
These models can contain different types of object features, such as numerical, categorical, sparse, and sequential features. These feature types are specified
@ -1,4 +1,4 @@
# Convert TensorFlow* XLNet Model to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_XLNet_From_Tensorflow}
# Convert TensorFlow XLNet Model {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_XLNet_From_Tensorflow}

Pre-trained models for XLNet (a generalized autoregressive pretraining method for language understanding) are
[publicly available](https://github.com/zihangdai/xlnet).
@ -1,4 +1,4 @@
# Converting YOLO* Models to the Intermediate Representation (IR) {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow}
# Convert TensorFlow YOLO Models {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow}

This document explains how to convert real-time object detection YOLOv1\*, YOLOv2\*, YOLOv3\* and YOLOv4\* public models to the Intermediate Representation (IR). All YOLO\* models are originally implemented in the DarkNet\* framework and consist of two files:
* `.cfg` file with model configurations
@ -1,4 +1,4 @@
# Converting TensorFlow* Language Model on One Billion Word Benchmark to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_lm_1b_From_Tensorflow}
# Convert TensorFlow Language Model on One Billion Word Benchmark {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_lm_1b_From_Tensorflow}

## Download the Pre-trained Language Model on One Billion Word Benchmark
@ -1,4 +0,0 @@
# [DEPRECATED] Sub-Graph Replacement in the Model Optimizer {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Subgraph_Replacement_Model_Optimizer}

The document has been deprecated. Refer to the [Model Optimizer Extensibility](Customize_Model_Optimizer.md)
for the up-to-date documentation.
@ -16,12 +16,17 @@
.. toctree::
:maxdepth: 1
:caption: Intermediate Representation and Operations Sets
:caption: Developer Documentation
:hidden:

openvino_docs_MO_DG_IR_and_opsets
openvino_docs_ops_opset
openvino_docs_ops_broadcast_rules
openvino_docs_operations_specifications
openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers
openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques
openvino_docs_MO_DG_prepare_model_convert_model_IR_suitable_for_INT8_inference
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer


.. toctree::
@ -39,14 +44,14 @@ This section includes a variety of reference information in three broad categori
### Additional Resources
[Release Notes](https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html) contains change logs and notes for each OpenVINO release.

[Supported Devices](OV_Runtime_UG/supported_plugins/Supported_Devices.md) is compatibility information about supported hardware accelerators.
[Supported Devices](../OV_Runtime_UG/supported_plugins/Supported_Devices.md) is compatibility information about supported hardware accelerators.

[Legal Information](Legal_Information.md) has trademark information and other legal statements.
[Legal Information](../Legal_Information.md) has trademark information and other legal statements.

### Intermediate Representation and Operations Sets
[Available Operation Sets](ops/opset.md) is a list of supported operations and explanation of supported capabilities.
[Available Operation Sets](../ops/opset.md) is a list of supported operations and an explanation of supported capabilities.

[Broadcast Rules for Elementwise Operations](ops/broadcast_rules.md) explains the rules used for to support an arbitrary number of dimensions in neural nets.
[Broadcast Rules for Elementwise Operations](../ops/broadcast_rules.md) explains the rules used to support an arbitrary number of dimensions in neural nets.

### Case Studies
Links to [articles](https://www.intel.com/openvino-success-stories) about real-world examples of OpenVINO™ usage.
@ -7,7 +7,7 @@ Currently, there are two groups of optimization methods that can change the IR a
- **Quantization**. The rest of this document is dedicated to the representation of quantized models.

## Representation of Quantized Models
The OpenVINO Toolkit represents all the quantized models using the so-called [FakeQuantize](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Legacy_IR_Layers_Catalog_Spec.html#fakequantize-layer) operation. This operation is very expressive and allows mapping values from arbitrary input and output ranges. The whole idea behind that is quite simple: we project (discretize) the input values to the low-precision data type using affine transformation (with clamp and rounding) and then reproject discrete values back to the original range and data type. It can be considered as an emulation of the quantization/dequantization process which happens at runtime. The figure below shows a part of the DL model, namely the Convolutional layer, that undergoes various transformations on way from being a floating-point model to an integer model executed in the OpenVINO runtime. Column 2 of this figure below shows a model quantized with [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf).
The OpenVINO Toolkit represents all the quantized models using the so-called [FakeQuantize](../../../docs/ops/quantization/FakeQuantize_1.md) operation. This operation is very expressive and allows mapping values from arbitrary input and output ranges. The idea behind it is quite simple: we project (discretize) the input values to the low-precision data type using an affine transformation (with clamp and rounding) and then reproject the discrete values back to the original range and data type. It can be considered an emulation of the quantization/dequantization process which happens at runtime. The figure below shows a part of the DL model, namely the Convolutional layer, that undergoes various transformations on the way from being a floating-point model to an integer model executed in the OpenVINO runtime. Column 2 of the figure below shows a model quantized with [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf).
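In informal notation (a paraphrase rather than a verbatim excerpt from the operation specification), with `il`/`ih` denoting `input_low`/`input_high`, `ol`/`oh` denoting `output_low`/`output_high`, and `levels` the number of quantization levels, the mapping performed by FakeQuantize can be sketched as:

\f[
y = \frac{\mathrm{round}\left(\frac{\mathrm{clamp}(x, il, ih) - il}{ih - il} \cdot (levels - 1)\right)}{levels - 1} \cdot (oh - ol) + ol
\f]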

To reduce the memory footprint, the weights of quantized models are transformed to a target data type, e.g. int8 in the case of 8-bit quantization. During this transformation, the floating-point weights tensor and one of the FakeQuantize operations that correspond to it are replaced with an 8-bit weights tensor and a sequence of Convert, Subtract, Multiply operations that represent the type cast and the dequantization parameters (scale and zero-point), as shown in column 3 of the figure.
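In other words, the inserted Convert, Subtract, Multiply sequence approximately restores the original weights at runtime; as a sketch (ignoring per-channel granularity), with `z` and `s` denoting the zero-point and scale:

\f[
W_{fp32} \approx \left(\mathrm{Convert}(W_{int8}) - z\right) \cdot s
\f]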