DOCS shift to rst Supported Model Formats (#16657)
* add model intro doc * add supported model formats page * add TF doc * add pytorch doc * add paddle doc * add mxnet doc * add caffe doc * add kaldi doc * fix format * fix cide snippets * fix code snippets * fix kaldi doc * kaldi code snippets * fix format * fix list * directive test * fix note * move code block * code snippets style
This commit is contained in:
parent
392b67f082
commit
961a99586a
@ -9,22 +9,23 @@
|
||||
openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
|
||||
omz_tools_downloader
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's :doc:`Open Model Zoo <model_zoo>`.
|
||||
|
||||
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
|
||||
:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows you to convert them to its own format, OpenVINO IR, providing a tool dedicated to this task.
|
||||
|
||||
[OpenVINO™ supports several model formats](../MO_DG/prepare_model/convert_model/supported_model_formats.md) and allows you to convert them to its own format, OpenVINO IR, providing a tool dedicated to this task.
|
||||
|
||||
[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by [altering input shapes](../MO_DG/prepare_model/convert_model/Converting_Model.md), [embedding preprocessing](../MO_DG/prepare_model/Additional_Optimizations.md), and [cutting training parts off](../MO_DG/prepare_model/convert_model/Cutting_Model.md).
|
||||
:doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by :doc:`altering input shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`embedding preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, and :doc:`cutting training parts off <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`.
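
As an illustration, here is a minimal Python sketch of such a conversion. It assumes the model conversion Python API (``openvino.tools.mo.convert_model``) available in recent OpenVINO releases; the model path and the input shape are placeholders.

.. code-block:: python

   from openvino.tools.mo import convert_model   # requires the openvino-dev package
   from openvino.runtime import serialize

   # Convert the original model and, optionally, adjust it for inference,
   # for example by overriding the input shape during conversion.
   ov_model = convert_model("model.onnx", input_shape=[1, 3, 224, 224])

   # Save the resulting OpenVINO IR (.xml and .bin files).
   serialize(ov_model, "model.xml")
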
|
||||
|
||||
The approach to fully convert a model is considered the default choice, as it allows using the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
|
||||
|
||||
Conversion is not required for ONNX, PaddlePaddle, and TensorFlow models (check [TensorFlow Frontend Capabilities and Limitations](../resources/tensorflow_frontend.md)), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
|
||||
Conversion is not required for ONNX, PaddlePaddle, and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
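
For illustration, here is a minimal Python sketch of such a direct import with OpenVINO Runtime; the model path and device name are placeholders.

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   # Read an ONNX, PaddlePaddle, or TensorFlow model directly, without prior conversion to IR.
   model = core.read_model("model.onnx")
   compiled_model = core.compile_model(model, "CPU")
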
|
||||
|
||||
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
|
||||
* [See the supported formats and how to use them in your project](../MO_DG/prepare_model/convert_model/supported_model_formats.md)
|
||||
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
|
||||
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).
|
||||
|
||||
To begin with, you may want to [browse a database of models for use in your projects](../model_zoo.md).
|
||||
* :doc:`See the supported formats and how to use them in your project <Supported_Model_Formats>`.
|
||||
* :doc:`Convert different model formats to the OpenVINO IR format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
|
||||
* `Automate model-related tasks with Model Downloader and additional OMZ Tools <https://docs.openvino.ai/latest/omz_tools_downloader.html>`__.
|
||||
|
||||
To begin with, you may want to :doc:`browse a database of models for use in your projects <model_zoo>`.
|
||||
|
||||
@endsphinxdirective
|
||||
|
@ -1,85 +1,99 @@
|
||||
# Converting a Caffe Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe}
|
||||
|
||||
<a name="Convert_From_Caffe"></a>To convert a Caffe model, run Model Optimizer with the path to the input model `.caffemodel` file:
|
||||
@sphinxdirective
|
||||
|
||||
To convert a Caffe model, run Model Optimizer with the path to the input model ``.caffemodel`` file:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model <INPUT_MODEL>.caffemodel
|
||||
|
||||
```sh
|
||||
mo --input_model <INPUT_MODEL>.caffemodel
|
||||
```
|
||||
|
||||
The following list provides the Caffe-specific parameters.
|
||||
|
||||
```
|
||||
Caffe-specific parameters:
|
||||
--input_proto INPUT_PROTO, -d INPUT_PROTO
|
||||
Deploy-ready prototxt file that contains a topology
|
||||
structure and layer attributes
|
||||
--caffe_parser_path CAFFE_PARSER_PATH
|
||||
Path to python Caffe parser generated from caffe.proto
|
||||
-k K Path to CustomLayersMapping.xml to register custom
|
||||
layers
|
||||
--disable_omitting_optional
|
||||
Disable omitting optional attributes to be used for
|
||||
custom layers. Use this option if you want to transfer
|
||||
all attributes of a custom layer to IR. Default
|
||||
behavior is to transfer the attributes with default
|
||||
values and the attributes defined by the user to IR.
|
||||
--enable_flattening_nested_params
|
||||
Enable flattening optional params to be used for
|
||||
custom layers. Use this option if you want to transfer
|
||||
attributes of a custom layer to IR with flattened
|
||||
nested parameters. Default behavior is to transfer the
|
||||
attributes without flattening nested parameters.
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
### CLI Examples Using Caffe-Specific Parameters
|
||||
Caffe-specific parameters:
|
||||
--input_proto INPUT_PROTO, -d INPUT_PROTO
|
||||
Deploy-ready prototxt file that contains a topology
|
||||
structure and layer attributes
|
||||
--caffe_parser_path CAFFE_PARSER_PATH
|
||||
Path to python Caffe parser generated from caffe.proto
|
||||
-k K Path to CustomLayersMapping.xml to register custom
|
||||
layers
|
||||
--disable_omitting_optional
|
||||
Disable omitting optional attributes to be used for
|
||||
custom layers. Use this option if you want to transfer
|
||||
all attributes of a custom layer to IR. Default
|
||||
behavior is to transfer the attributes with default
|
||||
values and the attributes defined by the user to IR.
|
||||
--enable_flattening_nested_params
|
||||
Enable flattening optional params to be used for
|
||||
custom layers. Use this option if you want to transfer
|
||||
attributes of a custom layer to IR with flattened
|
||||
nested parameters. Default behavior is to transfer the
|
||||
attributes without flattening nested parameters.
|
||||
|
||||
* Launching Model Optimizer for [bvlc_alexnet.caffemodel](https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet) with a specified `prototxt` file.
|
||||
This is needed when the name of the Caffe model and the `.prototxt` file are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input `model.caffemodel` file.
|
||||
```sh
|
||||
mo --input_model bvlc_alexnet.caffemodel --input_proto bvlc_alexnet.prototxt
|
||||
```
|
||||
* Launching Model Optimizer for [bvlc_alexnet.caffemodel](https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet) with a specified `CustomLayersMapping` file.
|
||||
This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe system on the computer.
|
||||
Example of `CustomLayersMapping.xml` can be found in `<OPENVINO_INSTALLATION_DIR>/mo/front/caffe/CustomLayersMapping.xml.example`. The optional parameters without default values and not specified by the user in the `.prototxt` file are removed from the Intermediate Representation, and nested parameters are flattened:
|
||||
```sh
|
||||
mo --input_model bvlc_alexnet.caffemodel -k CustomLayersMapping.xml --disable_omitting_optional --enable_flattening_nested_params
|
||||
```
|
||||
This example shows a multi-input model with input layers: `data`, `rois`
|
||||
```
|
||||
layer {
|
||||
name: "data"
|
||||
type: "Input"
|
||||
top: "data"
|
||||
input_param {
|
||||
shape { dim: 1 dim: 3 dim: 224 dim: 224 }
|
||||
}
|
||||
}
|
||||
layer {
|
||||
name: "rois"
|
||||
type: "Input"
|
||||
top: "rois"
|
||||
input_param {
|
||||
shape { dim: 1 dim: 5 dim: 1 dim: 1 }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
* Launching the Model Optimizer for a multi-input model with two inputs and providing a new shape for each input in the order they are passed to the Model Optimizer. In particular, for data, set the shape to `1,3,227,227`. For rois, set the shape to `1,6,1,1`:
|
||||
```sh
|
||||
mo --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),[1,6,1,1]
|
||||
```
|
||||
## Custom Layer Definition
|
||||
CLI Examples Using Caffe-Specific Parameters
|
||||
++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
* Launching Model Optimizer for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified ``prototxt`` file. This is needed when the name of the Caffe model and the ``.prototxt`` file are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input ``model.caffemodel`` file.
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model bvlc_alexnet.caffemodel --input_proto bvlc_alexnet.prototxt
|
||||
|
||||
* Launching Model Optimizer for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified ``CustomLayersMapping`` file. This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe system on the computer. An example of ``CustomLayersMapping.xml`` can be found in ``<OPENVINO_INSTALLATION_DIR>/mo/front/caffe/CustomLayersMapping.xml.example``. The optional parameters without default values and not specified by the user in the ``.prototxt`` file are removed from the Intermediate Representation, and nested parameters are flattened:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model bvlc_alexnet.caffemodel -k CustomLayersMapping.xml --disable_omitting_optional --enable_flattening_nested_params
|
||||
|
||||
This example shows a multi-input model with the input layers ``data`` and ``rois``:
|
||||
|
||||
.. code-block:: cpp
|
||||
|
||||
layer {
|
||||
name: "data"
|
||||
type: "Input"
|
||||
top: "data"
|
||||
input_param {
|
||||
shape { dim: 1 dim: 3 dim: 224 dim: 224 }
|
||||
}
|
||||
}
|
||||
layer {
|
||||
name: "rois"
|
||||
type: "Input"
|
||||
top: "rois"
|
||||
input_param {
|
||||
shape { dim: 1 dim: 5 dim: 1 dim: 1 }
|
||||
}
|
||||
}
|
||||
|
||||
* Launching the Model Optimizer for a multi-input model with two inputs and providing a new shape for each input in the order they are passed to the Model Optimizer. In particular, for ``data``, set the shape to ``1,3,227,227``. For ``rois``, set the shape to ``1,6,1,1``:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),[1,6,1,1]
|
||||
|
||||
Custom Layer Definition
|
||||
########################
|
||||
|
||||
Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains such layers, Model Optimizer classifies them as custom.
|
||||
|
||||
## Supported Caffe Layers
|
||||
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
|
||||
Supported Caffe Layers
|
||||
#######################
|
||||
|
||||
## Frequently Asked Questions (FAQ)
|
||||
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
|
||||
|
||||
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to [Model Optimizer FAQ](@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ) which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
|
||||
Frequently Asked Questions (FAQ)
|
||||
################################
|
||||
|
||||
## Summary
|
||||
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>` which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
|
||||
|
||||
Summary
|
||||
#######
|
||||
|
||||
In this document, you learned:
|
||||
|
||||
@ -87,5 +101,10 @@ In this document, you learned:
|
||||
* Which Caffe models are supported.
|
||||
* How to convert a trained Caffe model by using Model Optimizer with both framework-agnostic and Caffe-specific command-line options.
|
||||
|
||||
## Additional Resources
|
||||
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific Caffe models.
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific Caffe models.
|
||||
|
||||
|
||||
@endsphinxdirective
|
||||
|
@ -1,65 +1,86 @@
|
||||
# Converting a Kaldi Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi}
|
||||
|
||||
> **NOTE**: Model Optimizer supports the [nnet1](http://kaldi-asr.org/doc/dnn1.html) and [nnet2](http://kaldi-asr.org/doc/dnn2.html) formats of Kaldi models. The support of the [nnet3](http://kaldi-asr.org/doc/dnn3.html) format is limited.
|
||||
@sphinxdirective
|
||||
|
||||
.. note::
|
||||
|
||||
Model Optimizer supports the `nnet1 <http://kaldi-asr.org/doc/dnn1.html>`__ and `nnet2 <http://kaldi-asr.org/doc/dnn2.html>`__ formats of Kaldi models. The support of the `nnet3 <http://kaldi-asr.org/doc/dnn3.html>`__ format is limited.
|
||||
|
||||
<a name="Convert_From_Kaldi"></a>To convert a Kaldi model, run Model Optimizer with the path to the input model `.nnet` or `.mdl` file:
|
||||
To convert a Kaldi model, run Model Optimizer with the path to the input model ``.nnet`` or ``.mdl`` file:
|
||||
|
||||
```sh
|
||||
mo --input_model <INPUT_MODEL>.nnet
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
## Using Kaldi-Specific Conversion Parameters <a name="kaldi_specific_conversion_params"></a>
|
||||
mo --input_model <INPUT_MODEL>.nnet
|
||||
|
||||
Using Kaldi-Specific Conversion Parameters
|
||||
##########################################
|
||||
|
||||
The following list provides the Kaldi-specific parameters.
|
||||
|
||||
```sh
|
||||
Kaldi-specific parameters:
|
||||
--counts COUNTS A file name with full path to the counts file or empty string to utilize count values from the model file
|
||||
--remove_output_softmax
|
||||
Removes the Softmax that is the output layer
|
||||
--remove_memory Remove the Memory layer and add new inputs and outputs instead
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
## Examples of CLI Commands
|
||||
Kaldi-specific parameters:
|
||||
--counts COUNTS A file name with full path to the counts file or empty string to utilize count values from the model file
|
||||
--remove_output_softmax
|
||||
Removes the Softmax that is the output layer
|
||||
--remove_memory Remove the Memory layer and add new inputs and outputs instead
|
||||
|
||||
* To launch Model Optimizer for the `wsj_dnn5b_smbr` model with the specified `.nnet` file:
|
||||
```sh
|
||||
mo --input_model wsj_dnn5b_smbr.nnet
|
||||
```
|
||||
Examples of CLI Commands
|
||||
########################
|
||||
|
||||
* To launch Model Optimizer for the `wsj_dnn5b_smbr` model with the existing file that contains counts for the last layer with biases:
|
||||
```sh
|
||||
mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts
|
||||
```
|
||||
* To launch Model Optimizer for the ``wsj_dnn5b_smbr`` model with the specified ``.nnet`` file:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model wsj_dnn5b_smbr.nnet
|
||||
|
||||
* To launch Model Optimizer for the ``wsj_dnn5b_smbr`` model with the existing file that contains counts for the last layer with biases:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts
|
||||
|
||||
|
||||
* The Model Optimizer normalizes counts in the following way:
|
||||
\f[
|
||||
S = \frac{1}{\sum_{j = 0}^{|C|}C_{j}}
|
||||
\f]
|
||||
\f[
|
||||
C_{i}=log(S*C_{i})
|
||||
\f]
|
||||
where \f$C\f$ - the counts array, \f$C_{i} - i^{th}\f$ element of the counts array,
|
||||
\f$|C|\f$ - number of elements in the counts array;
|
||||
|
||||
.. math::
|
||||
|
||||
S = \frac{1}{\sum_{j = 0}^{|C|}C_{j}}
|
||||
|
||||
.. math::
|
||||
|
||||
C_{i} = \log(S \cdot C_{i})
|
||||
|
||||
where :math:`C` is the counts array, :math:`C_{i}` is the :math:`i^{th}` element of the counts array, and :math:`|C|` is the number of elements in the counts array.
|
||||
|
||||
* The normalized counts are subtracted from the biases of the last or next-to-last layer (if the last layer is a SoftMax), as illustrated in the sketch after this list.
|
||||
|
||||
.. note:: Model Optimizer will show a warning if a model contains counts values and the ``--counts`` option is not used.
|
||||
|
||||
> **NOTE**: Model Optimizer will show a warning if a model contains values of counts and the `--counts` option is not used.
|
||||
* If you want to remove the last SoftMax layer in the topology, launch the Model Optimizer with the `--remove_output_softmax` flag:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
* If you want to remove the last SoftMax layer in the topology, launch the Model Optimizer with the
|
||||
``--remove_output_softmax`` flag:
|
||||
```sh
|
||||
mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax
|
||||
```
|
||||
|
||||
The Model Optimizer finds the last layer of the topology and removes this layer only if it is a SoftMax layer.
|
||||
The Model Optimizer finds the last layer of the topology and removes this layer only if it is a SoftMax layer.
|
||||
|
||||
> **NOTE**: Model Optimizer can remove SoftMax layer only if the topology has one output.
|
||||
.. note:: Model Optimizer can remove SoftMax layer only if the topology has one output.
|
||||
|
||||
* You can use the *OpenVINO Speech Recognition* sample application for the sample inference of Kaldi models. This sample supports models with only one output. If your model has several outputs, specify the desired one with the `--output` option.
|
||||
* You can use the *OpenVINO Speech Recognition* sample application to run sample inference of Kaldi models. This sample supports models with only one output. If your model has several outputs, specify the desired one with the ``--output`` option.
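
For clarity, here is a minimal NumPy sketch of the counts normalization and bias subtraction described above. It is illustrative only, not the Model Optimizer implementation; the counts and biases values are hypothetical.

.. code-block:: python

   import numpy as np

   counts = np.array([10.0, 20.0, 30.0, 40.0])   # hypothetical counts array C
   biases = np.zeros_like(counts)                 # hypothetical biases of the last layer

   S = 1.0 / counts.sum()                         # S = 1 / sum_j C_j
   normalized_counts = np.log(S * counts)         # C_i = log(S * C_i)

   biases -= normalized_counts                    # normalized counts are subtracted from the biases
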
|
||||
|
||||
## Supported Kaldi Layers
|
||||
For the list of supported standard layers, refer to the [Supported Framework Layers ](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
|
||||
Supported Kaldi Layers
|
||||
######################
|
||||
|
||||
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
|
||||
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific Kaldi models. Here are some examples:
|
||||
|
||||
* :doc:`Convert Kaldi ASpIRE Chain Time Delay Neural Network (TDNN) Model <openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model>`
|
||||
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
## Additional Resources
|
||||
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific Kaldi models. Here are some examples:
|
||||
* [Convert Kaldi ASpIRE Chain Time Delay Neural Network (TDNN) Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model)
|
||||
|
@ -1,51 +1,61 @@
|
||||
# Converting an MXNet Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet}
|
||||
|
||||
<a name="ConvertMxNet"></a>To convert an MXNet model, run Model Optimizer with the path to the *`.params`* file of the input model:
|
||||
@sphinxdirective
|
||||
|
||||
```sh
|
||||
mo --input_model model-file-0000.params
|
||||
```
|
||||
To convert an MXNet model, run Model Optimizer with the path to the ``.params`` file of the input model:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model model-file-0000.params
|
||||
|
||||
|
||||
Using MXNet-Specific Conversion Parameters
|
||||
##########################################
|
||||
|
||||
## Using MXNet-Specific Conversion Parameters <a name="mxnet_specific_conversion_params"></a>
|
||||
The following list provides the MXNet-specific parameters.
|
||||
|
||||
```
|
||||
MXNet-specific parameters:
|
||||
--input_symbol <SYMBOL_FILE_NAME>
|
||||
Symbol file (for example, "model-symbol.json") that contains a topology structure and layer attributes
|
||||
--nd_prefix_name <ND_PREFIX_NAME>
|
||||
Prefix name for args.nd and argx.nd files
|
||||
--pretrained_model_name <PRETRAINED_MODEL_NAME>
|
||||
Name of a pre-trained MXNet model without extension and epoch
|
||||
number. This model will be merged with args.nd and argx.nd
|
||||
files
|
||||
--save_params_from_nd
|
||||
Enable saving built parameters file from .nd files
|
||||
--legacy_mxnet_model
|
||||
Enable Apache MXNet loader to make a model compatible with the latest Apache MXNet version.
|
||||
Use only if your model was trained with Apache MXNet version lower than 1.0.0
|
||||
--enable_ssd_gluoncv
|
||||
Enable transformation for converting the gluoncv ssd topologies.
|
||||
Use only if your topology is one of ssd gluoncv topologies
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
> **NOTE**: By default, Model Optimizer does not use the Apache MXNet loader. It transforms the topology to another format which is compatible with the latest
|
||||
> version of Apache MXNet. However, the Apache MXNet loader is required for models trained with lower version of Apache MXNet. If your model was trained with an Apache MXNet version lower than 1.0.0, specify the
|
||||
> `--legacy_mxnet_model` key to enable the Apache MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually
|
||||
> recompile Apache MXNet with custom layers and install it in your environment.
|
||||
MXNet-specific parameters:
|
||||
--input_symbol <SYMBOL_FILE_NAME>
|
||||
Symbol file (for example, "model-symbol.json") that contains a topology structure and layer attributes
|
||||
--nd_prefix_name <ND_PREFIX_NAME>
|
||||
Prefix name for args.nd and argx.nd files
|
||||
--pretrained_model_name <PRETRAINED_MODEL_NAME>
|
||||
Name of a pre-trained MXNet model without extension and epoch
|
||||
number. This model will be merged with args.nd and argx.nd
|
||||
files
|
||||
--save_params_from_nd
|
||||
Enable saving built parameters file from .nd files
|
||||
--legacy_mxnet_model
|
||||
Enable Apache MXNet loader to make a model compatible with the latest Apache MXNet version.
|
||||
Use only if your model was trained with Apache MXNet version lower than 1.0.0
|
||||
--enable_ssd_gluoncv
|
||||
Enable transformation for converting the gluoncv ssd topologies.
|
||||
Use only if your topology is one of ssd gluoncv topologies
|
||||
|
||||
## Custom Layer Definition
|
||||
|
||||
.. note::
|
||||
|
||||
By default, Model Optimizer does not use the Apache MXNet loader. It transforms the topology to another format, which is compatible with the latest version of Apache MXNet. However, the Apache MXNet loader is required for models trained with a lower version of Apache MXNet. If your model was trained with an Apache MXNet version lower than 1.0.0, specify the ``--legacy_mxnet_model`` key to enable the Apache MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually recompile Apache MXNet with custom layers and install it in your environment.
|
||||
|
||||
Custom Layer Definition
|
||||
#######################
|
||||
|
||||
Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains such layers, Model Optimizer classifies them as custom.
|
||||
|
||||
## Supported MXNet Layers
|
||||
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
|
||||
Supported MXNet Layers
|
||||
#######################
|
||||
|
||||
## Frequently Asked Questions (FAQ)
|
||||
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
|
||||
|
||||
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to [Model Optimizer FAQ](@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ) which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
|
||||
Frequently Asked Questions (FAQ)
|
||||
################################
|
||||
|
||||
## Summary
|
||||
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>` which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
|
||||
|
||||
Summary
|
||||
########
|
||||
|
||||
In this document, you learned:
|
||||
|
||||
@ -53,7 +63,13 @@ In this document, you learned:
|
||||
* Which MXNet models are supported.
|
||||
* How to convert a trained MXNet model by using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options.
|
||||
|
||||
## Additional Resources
|
||||
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific MXNet models. Here are some examples:
|
||||
* [Convert MXNet GluonCV Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models)
|
||||
* [Convert MXNet Style Transfer Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet)
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific MXNet models. Here are some examples:
|
||||
|
||||
* :doc:`Convert MXNet GluonCV Model <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models>`
|
||||
* :doc:`Convert MXNet Style Transfer Model <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet>`
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
|
@ -1,28 +1,40 @@
|
||||
# Converting an ONNX Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX}
|
||||
|
||||
## Introduction to ONNX
|
||||
[ONNX](https://github.com/onnx/onnx) is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools, like PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.
|
||||
@sphinxdirective
|
||||
|
||||
## Converting an ONNX Model <a name="Convert_From_ONNX"></a>
|
||||
Introduction to ONNX
|
||||
####################
|
||||
|
||||
This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](@ref openvino_docs_install_guides_install_dev_tools).
|
||||
`ONNX <https://github.com/onnx/onnx>`__ is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools, like PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.
|
||||
|
||||
Converting an ONNX Model
|
||||
########################
|
||||
|
||||
This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.
|
||||
|
||||
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
|
||||
|
||||
To convert an ONNX model, run Model Optimizer with the path to the input model `.onnx` file:
|
||||
To convert an ONNX model, run Model Optimizer with the path to the input model ``.onnx`` file:
|
||||
|
||||
```sh
|
||||
mo --input_model <INPUT_MODEL>.onnx
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
There are no ONNX specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the [Converting a Model to Intermediate Representation (IR)](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model) guide.
|
||||
mo --input_model <INPUT_MODEL>.onnx
|
||||
|
||||
## Supported ONNX Layers
|
||||
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
|
||||
There are no ONNX specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the :doc:`Converting a Model to Intermediate Representation (IR) <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.
|
||||
|
||||
## Additional Resources
|
||||
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific ONNX models. Here are some examples:
|
||||
* [Convert ONNX Faster R-CNN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN)
|
||||
* [Convert ONNX GPT-2 Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2)
|
||||
* [Convert ONNX Mask R-CNN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN)
|
||||
Supported ONNX Layers
|
||||
#####################
|
||||
|
||||
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
|
||||
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific ONNX models. Here are some examples:
|
||||
|
||||
* :doc:`Convert ONNX Faster R-CNN Model <openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN>`
|
||||
* :doc:`Convert ONNX GPT-2 Model <openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2>`
|
||||
* :doc:`Convert ONNX Mask R-CNN Model <openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN>`
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
|
@ -1,23 +1,29 @@
|
||||
# Converting a PaddlePaddle Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle}
|
||||
|
||||
To convert a PaddlePaddle model, use the `mo` script and specify the path to the input `.pdmodel` model file:
|
||||
@sphinxdirective
|
||||
|
||||
To convert a PaddlePaddle model, use the ``mo`` script and specify the path to the input ``.pdmodel`` model file:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model <INPUT_MODEL>.pdmodel
|
||||
|
||||
```sh
|
||||
mo --input_model <INPUT_MODEL>.pdmodel
|
||||
```
|
||||
**For example,** this command converts a YOLOv3 PaddlePaddle network to an OpenVINO IR network:
|
||||
|
||||
```sh
|
||||
mo --input_model=yolov3.pdmodel --input=image,im_shape,scale_factor --input_shape=[1,3,608,608],[1,2],[1,2] --reverse_input_channels --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
## Supported PaddlePaddle Layers
|
||||
For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.
|
||||
mo --input_model=yolov3.pdmodel --input=image,im_shape,scale_factor --input_shape=[1,3,608,608],[1,2],[1,2] --reverse_input_channels --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1
|
||||
|
||||
Supported PaddlePaddle Layers
|
||||
#############################
|
||||
|
||||
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
|
||||
|
||||
Officially Supported PaddlePaddle Models
|
||||
########################################
|
||||
|
||||
## Officially Supported PaddlePaddle Models
|
||||
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
|
||||
|
||||
@sphinxdirective
|
||||
.. list-table::
|
||||
:widths: 20 25 55
|
||||
:header-rows: 1
|
||||
@ -67,10 +73,16 @@ The following PaddlePaddle models have been officially validated and confirmed t
|
||||
* - BERT
|
||||
- language representation
|
||||
- Models are exported from `PaddleNLP <https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#readme>`_.
|
||||
|
||||
Frequently Asked Questions (FAQ)
|
||||
################################
|
||||
|
||||
When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
|
||||
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific PaddlePaddle models.
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
## Frequently Asked Questions (FAQ)
|
||||
When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the [Model Optimizer FAQ](@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
|
||||
|
||||
## Additional Resources
|
||||
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific PaddlePaddle models.
|
||||
|
@ -1,39 +1,49 @@
|
||||
# Converting a PyTorch Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch}
|
||||
|
||||
@sphinxdirective
|
||||
|
||||
The PyTorch framework is supported through export to the ONNX format. In order to optimize and deploy a model that was trained with it:
|
||||
|
||||
1. [Export a PyTorch model to ONNX](#export-to-onnx).
|
||||
2. [Convert the ONNX model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation](@ref openvino_docs_MO_DG_IR_and_opsets) of the model based on the trained network topology, weights, and biases values.
|
||||
1. `Export a PyTorch model to ONNX <#Exporting-a-PyTorch-Model-to-ONNX-Format>`__.
|
||||
2. :doc:`Convert the ONNX model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>` to produce an optimized :doc:`Intermediate Representation <openvino_docs_MO_DG_IR_and_opsets>` of the model based on the trained network topology, weights, and biases values.
|
||||
|
||||
## Exporting a PyTorch Model to ONNX Format <a name="export-to-onnx"></a>
|
||||
PyTorch models are defined in Python. To export them, use the `torch.onnx.export()` method. The code to
|
||||
Exporting a PyTorch Model to ONNX Format
|
||||
########################################
|
||||
|
||||
PyTorch models are defined in Python. To export them, use the ``torch.onnx.export()`` method. The code to
|
||||
evaluate or test the model is usually provided with the model and can be used for its initialization and export.
|
||||
The export to ONNX is crucial for this process, but it is covered by the PyTorch framework and therefore will not be covered here in detail.
|
||||
For more information, refer to the [Exporting PyTorch models to ONNX format](https://pytorch.org/docs/stable/onnx.html) guide.
|
||||
For more information, refer to the `Exporting PyTorch models to ONNX format <https://pytorch.org/docs/stable/onnx.html>`__ guide.
|
||||
|
||||
To export a PyTorch model, you need to obtain the model as an instance of `torch.nn.Module` class and call the `export` function.
|
||||
To export a PyTorch model, you need to obtain the model as an instance of ``torch.nn.Module`` class and call the ``export`` function.
|
||||
|
||||
```python
|
||||
import torch
|
||||
.. code-block:: python
|
||||
|
||||
# Instantiate your model. This is just a regular PyTorch model that will be exported in the following steps.
|
||||
model = SomeModel()
|
||||
# Evaluate the model to switch some operations from training mode to inference.
|
||||
model.eval()
|
||||
# Create dummy input for the model. It will be used to run the model inside export function.
|
||||
dummy_input = torch.randn(1, 3, 224, 224)
|
||||
# Call the export function
|
||||
torch.onnx.export(model, (dummy_input, ), 'model.onnx')
|
||||
```
|
||||
import torch
|
||||
|
||||
## Known Issues
|
||||
# Instantiate your model. This is just a regular PyTorch model that will be exported in the following steps.
|
||||
model = SomeModel()
|
||||
# Evaluate the model to switch some operations from training mode to inference.
|
||||
model.eval()
|
||||
# Create dummy input for the model. It will be used to run the model inside export function.
|
||||
dummy_input = torch.randn(1, 3, 224, 224)
|
||||
# Call the export function
|
||||
torch.onnx.export(model, (dummy_input, ), 'model.onnx')
|
||||
|
||||
* As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9 which is used by default.
|
||||
It is recommended to export models to opset 11 or higher when export to default opset 9 is not working. In that case, use `opset_version`
|
||||
option of the `torch.onnx.export`. For more information about ONNX opset, refer to the [Operator Schemas](https://github.com/onnx/onnx/blob/master/docs/Operators.md) page.
|
||||
|
||||
## Additional Resources
|
||||
See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific PyTorch models. Here are some examples:
|
||||
* [Convert PyTorch BERT-NER Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner)
|
||||
* [Convert PyTorch RCAN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN)
|
||||
* [Convert PyTorch YOLACT Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT)
|
||||
Known Issues
|
||||
############
|
||||
|
||||
As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9 which is used by default.
|
||||
It is recommended to export models to opset 11 or higher when the export to the default opset 9 does not work. In that case, use the ``opset_version`` option of ``torch.onnx.export``. For more information about ONNX opsets, refer to the `Operator Schemas <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`__ page.
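
For example, here is a minimal sketch of requesting a newer opset during export; the model and input below are placeholders, as in the example above.

.. code-block:: python

   import torch

   model = SomeModel()                        # placeholder model definition
   model.eval()
   dummy_input = torch.randn(1, 3, 224, 224)

   # Explicitly request opset 11 when the default opset 9 export does not work.
   torch.onnx.export(model, (dummy_input, ), 'model.onnx', opset_version=11)
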
|
||||
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific PyTorch models. Here are some examples:
|
||||
|
||||
* :doc:`Convert PyTorch BERT-NER Model <openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner>`
|
||||
* :doc:`Convert PyTorch RCAN Model <openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN>`
|
||||
* :doc:`Convert PyTorch YOLACT Model <openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT>`
|
||||
|
||||
@endsphinxdirective
|
||||
|
@ -1,151 +1,186 @@
|
||||
# Converting a TensorFlow Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow}
|
||||
|
||||
@sphinxdirective
|
||||
|
||||
This page provides general instructions on how to convert a model from a TensorFlow format to the OpenVINO IR format using Model Optimizer. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
|
||||
|
||||
To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](@ref openvino_docs_install_guides_install_dev_tools).
|
||||
To use Model Optimizer, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.
|
||||
|
||||
## Converting TensorFlow 1 Models <a name="Convert_From_TF1X"></a>
|
||||
Converting TensorFlow 1 Models
|
||||
###############################
|
||||
|
||||
### Converting Frozen Model Format <a name="Convert_From_TF"></a>
|
||||
To convert a TensorFlow model, use the *`mo`* script to simply convert a model with a path to the input model *`.pb`* file:
|
||||
Converting Frozen Model Format
|
||||
+++++++++++++++++++++++++++++++
|
||||
|
||||
```sh
|
||||
mo --input_model <INPUT_MODEL>.pb
|
||||
```
|
||||
To convert a TensorFlow model, run the ``mo`` script with the path to the input model ``.pb`` file:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model <INPUT_MODEL>.pb
|
||||
|
||||
|
||||
Converting Non-Frozen Model Formats
|
||||
+++++++++++++++++++++++++++++++++++
|
||||
|
||||
### Converting Non-Frozen Model Formats <a name="loading-nonfrozen-models"></a>
|
||||
There are three ways to store non-frozen TensorFlow models and convert them by Model Optimizer:
|
||||
|
||||
1. **Checkpoint**. In this case, a model consists of two files: `inference_graph.pb` (or `inference_graph.pbtxt`) and `checkpoint_file.ckpt`.
|
||||
If you do not have an inference graph file, refer to the [Freezing Custom Models in Python](#freeze-the-tensorflow-model) section.
|
||||
To convert the model with the inference graph in `.pb` format, run the `mo` script with a path to the checkpoint file:
|
||||
```sh
|
||||
mo --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
|
||||
```
|
||||
To convert the model with the inference graph in `.pbtxt` format, run the `mo` script with a path to the checkpoint file:
|
||||
```sh
|
||||
mo --input_model <INFERENCE_GRAPH>.pbtxt --input_checkpoint <INPUT_CHECKPOINT> --input_model_is_text
|
||||
```
|
||||
1. **Checkpoint**. In this case, a model consists of two files: ``inference_graph.pb`` (or ``inference_graph.pbtxt``) and ``checkpoint_file.ckpt``.
|
||||
If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#Freezing-Custom-Models-in-Python>`__ section.
|
||||
To convert the model with the inference graph in ``.pb`` format, run the ``mo`` script with a path to the checkpoint file:
|
||||
|
||||
2. **MetaGraph**. In this case, a model consists of three or four files stored in the same directory: `model_name.meta`, `model_name.index`,
|
||||
`model_name.data-00000-of-00001` (the numbers may vary), and `checkpoint` (optional).
|
||||
To convert such TensorFlow model, run the `mo` script with a path to the MetaGraph `.meta` file:
|
||||
```sh
|
||||
mo --input_meta_graph <INPUT_META_GRAPH>.meta
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
|
||||
|
||||
To convert the model with the inference graph in ``.pbtxt`` format, run the ``mo`` script with a path to the checkpoint file:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_model <INFERENCE_GRAPH>.pbtxt --input_checkpoint <INPUT_CHECKPOINT> --input_model_is_text
|
||||
|
||||
|
||||
2. **MetaGraph**. In this case, a model consists of three or four files stored in the same directory: ``model_name.meta``, ``model_name.index``,
|
||||
``model_name.data-00000-of-00001`` (the numbers may vary), and ``checkpoint`` (optional).
|
||||
To convert such a TensorFlow model, run the ``mo`` script with a path to the MetaGraph ``.meta`` file:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --input_meta_graph <INPUT_META_GRAPH>.meta
|
||||
|
||||
|
||||
3. **SavedModel format**. In this case, a model consists of a special directory with a ``.pb`` file
|
||||
and several subfolders: ``variables``, ``assets``, and ``assets.extra``. For more information about the SavedModel directory, refer to the `README <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/saved_model#components>`__ file in the TensorFlow repository.
|
||||
To convert such a TensorFlow model, run the ``mo`` script with a path to the SavedModel directory:
|
||||
|
||||
.. code-block:: sh
|
||||
|
||||
mo --saved_model_dir <SAVED_MODEL_DIRECTORY>
|
||||
|
||||
3. **SavedModel format**. In this case, a model consists of a special directory with a `.pb` file
|
||||
and several subfolders: `variables`, `assets`, and `assets.extra`. For more information about the SavedModel directory, refer to the [README](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/saved_model#components) file in the TensorFlow repository.
|
||||
To convert such TensorFlow model, run the `mo` script with a path to the SavedModel directory:
|
||||
```sh
|
||||
mo --saved_model_dir <SAVED_MODEL_DIRECTORY>
|
||||
```
|
||||
|
||||
You can convert the TensorFlow 1.x SavedModel format in an environment that has a 1.x or 2.x version of TensorFlow. However, the TensorFlow 2.x SavedModel format strictly requires the 2.x version of TensorFlow.
|
||||
If a model contains operations currently unsupported by OpenVINO, prune these operations by explicit specification of input nodes using the `--input` option.
|
||||
To determine custom input nodes, display a graph of the model in TensorBoard. To generate TensorBoard logs of the graph, use the `--tensorboard_logs` option.
|
||||
TensorFlow 2.x SavedModel format has a specific graph due to eager execution. In case of pruning, find custom input nodes in the `StatefulPartitionedCall/*` subgraph of TensorFlow 2.x SavedModel format.
|
||||
If a model contains operations currently unsupported by OpenVINO, prune these operations by explicit specification of input nodes using the ``--input`` option.
|
||||
To determine custom input nodes, display a graph of the model in TensorBoard. To generate TensorBoard logs of the graph, use the ``--tensorboard_logs`` option.
|
||||
TensorFlow 2.x SavedModel format has a specific graph due to eager execution. In case of pruning, find custom input nodes in the ``StatefulPartitionedCall/*`` subgraph of TensorFlow 2.x SavedModel format.
|
||||
|
||||
Freezing Custom Models in Python
|
||||
++++++++++++++++++++++++++++++++
|
||||
|
||||
### Freezing Custom Models in Python <a name="freeze-the-tensorflow-model"></a>
|
||||
When a network is defined in Python code, you have to create an inference graph file. Graphs are usually built in a form
|
||||
that allows model training. That means all trainable parameters are represented as variables in the graph.
|
||||
To be able to use such a graph with Model Optimizer, it should be frozen and dumped to a file with the following code:
|
||||
|
||||
```python
|
||||
import tensorflow as tf
|
||||
from tensorflow.python.framework import graph_io
|
||||
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"])
|
||||
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
|
||||
```
|
||||
.. code-block:: python
|
||||
|
||||
import tensorflow as tf
|
||||
from tensorflow.python.framework import graph_io
|
||||
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"])
|
||||
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
|
||||
|
||||
Where:
|
||||
|
||||
* `sess` is the instance of the TensorFlow Session object where the network topology is defined.
|
||||
* `["name_of_the_output_node"]` is the list of output node names in the graph; `frozen` graph will
|
||||
include only those nodes from the original `sess.graph_def` that are directly or indirectly used
|
||||
to compute given output nodes. The `'name_of_the_output_node'` is an example of a possible output
|
||||
node name. You should derive the names based on your own graph.
|
||||
* `./` is the directory where the inference graph file should be generated.
|
||||
* `inference_graph.pb` is the name of the generated inference graph file.
|
||||
* `as_text` specifies whether the generated file should be in human readable text format or binary.
|
||||
* ``sess`` is the instance of the TensorFlow Session object where the network topology is defined.
|
||||
* ``["name_of_the_output_node"]`` is the list of output node names in the graph; ``frozen`` graph will include only those nodes from the original ``sess.graph_def`` that are directly or indirectly used to compute given output nodes. The ``'name_of_the_output_node'`` is an example of a possible output node name. You should derive the names based on your own graph.
|
||||
* ``./`` is the directory where the inference graph file should be generated.
|
||||
* ``inference_graph.pb`` is the name of the generated inference graph file.
|
||||
* ``as_text`` specifies whether the generated file should be in human readable text format or binary.
|
||||
|
||||
Converting TensorFlow 2 Models
|
||||
###############################
|
||||
|
||||
## Converting TensorFlow 2 Models <a name="Convert_From_TF2X"></a>
|
||||
To convert TensorFlow 2 models, ensure that `openvino-dev[tensorflow2]` is installed via `pip`.
|
||||
TensorFlow 2.X officially supports two model formats: SavedModel and Keras H5 (or HDF5).
|
||||
Below are the instructions on how to convert each of them.
|
||||
|
||||
### SavedModel Format
|
||||
A model in the SavedModel format consists of a directory with a `saved_model.pb` file and two subfolders: `variables` and `assets`.
|
||||
SavedModel Format
|
||||
+++++++++++++++++
|
||||
|
||||
A model in the SavedModel format consists of a directory with a ``saved_model.pb`` file and two subfolders: ``variables`` and ``assets``.
|
||||
To convert such a model, run the ``mo`` script with a path to the SavedModel directory:
|
||||
|
||||
```sh
|
||||
mo --saved_model_dir <SAVED_MODEL_DIRECTORY>
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
mo --saved_model_dir <SAVED_MODEL_DIRECTORY>
|
||||
|
||||
TensorFlow 2 SavedModel format strictly requires the 2.x version of TensorFlow installed in the
|
||||
environment for conversion to the Intermediate Representation (IR).
|
||||
|
||||
If a model contains operations currently unsupported by OpenVINO™,
|
||||
prune these operations by explicit specification of input nodes using the `--input` or `--output`
|
||||
prune these operations by explicit specification of input nodes using the ``--input`` or ``--output``
|
||||
options. To determine custom input nodes, visualize a model graph in the TensorBoard.
|
||||
|
||||
To generate TensorBoard logs of the graph, use the Model Optimizer `--tensorboard_logs` command-line
|
||||
To generate TensorBoard logs of the graph, use the Model Optimizer ``--tensorboard_logs`` command-line
|
||||
option.
|
||||
|
||||
TensorFlow 2 SavedModel format has a specific graph structure due to eager execution. In case of
|
||||
pruning, find custom input nodes in the `StatefulPartitionedCall/*` subgraph.
|
||||
pruning, find custom input nodes in the ``StatefulPartitionedCall/*`` subgraph.
|
||||
|
||||
Keras H5
|
||||
++++++++
|
||||
|
||||
### Keras H5
|
||||
If you have a model in the HDF5 format, load the model using TensorFlow 2 and serialize it in the
|
||||
SavedModel format. Here is an example of how to do it:
|
||||
|
||||
```python
|
||||
import tensorflow as tf
|
||||
model = tf.keras.models.load_model('model.h5')
|
||||
tf.saved_model.save(model,'model')
|
||||
```
|
||||
.. code-block:: python
|
||||
|
||||
import tensorflow as tf
|
||||
model = tf.keras.models.load_model('model.h5')
|
||||
tf.saved_model.save(model,'model')
|
||||
|
||||
|
||||
A Keras H5 model with a custom layer requires additional steps to be converted into the SavedModel format.
|
||||
For example, the model with a custom layer `CustomLayer` from `custom_layer.py` is converted as follows:
|
||||
For example, the model with a custom layer ``CustomLayer`` from ``custom_layer.py`` is converted as follows:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
import tensorflow as tf
|
||||
from custom_layer import CustomLayer
|
||||
model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})
|
||||
tf.saved_model.save(model,'model')
|
||||
|
||||
```python
|
||||
import tensorflow as tf
|
||||
from custom_layer import CustomLayer
|
||||
model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})
|
||||
tf.saved_model.save(model,'model')
|
||||
```
|
||||
|
||||
Then follow the above instructions for the SavedModel format.

.. note::

   Do not use other hacks to resave TensorFlow 2 models into TensorFlow 1 formats.


Command-Line Interface (CLI) Examples Using TensorFlow-Specific Parameters
###########################################################################

* Launching the Model Optimizer for the Inception V1 frozen model when the model file is a plain-text protobuf:

.. code-block:: sh

   mo --input_model inception_v1.pbtxt --input_model_is_text -b 1

* Launching the Model Optimizer for the Inception V1 frozen model and dumping information about the graph to the TensorBoard log directory ``/tmp/log_dir``:

.. code-block:: sh

   mo --input_model inception_v1.pb -b 1 --tensorboard_logdir /tmp/log_dir

* Launching the Model Optimizer for a BERT model in the SavedModel format, with three inputs. The input shapes are specified explicitly, with the batch size and the sequence length equal to 2 and 30, respectively:

.. code-block:: sh

   mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]

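After any of the conversions above, the resulting IR can be loaded with the OpenVINO Runtime Python API. The sketch below assumes the conversion produced ``inception_v1.xml`` and ``inception_v1.bin`` next to each other; adjust the file name to your own output:

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("inception_v1.xml")   # the .bin file is located automatically
   print(model.inputs)                           # verify that the inputs match what was passed to mo
   compiled_model = core.compile_model(model, "CPU")
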
Supported TensorFlow and TensorFlow 2 Keras Layers
##################################################

For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.


Frequently Asked Questions (FAQ)
################################

The Model Optimizer provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`. The FAQ provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.


Summary
#######

In this document, you learned:

* Basic information about how the Model Optimizer works with TensorFlow models.
* How to freeze a TensorFlow model.
* How to convert a trained TensorFlow model using the Model Optimizer with both framework-agnostic and TensorFlow-specific command-line options.


Additional Resources
####################

See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific TensorFlow models. Here are some examples:

* :doc:`Convert TensorFlow EfficientDet Models <openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_EfficientDet_Models>`
* :doc:`Convert TensorFlow FaceNet Models <openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_FaceNet_From_Tensorflow>`
* :doc:`Convert TensorFlow Object Detection API Models <openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models>`

@endsphinxdirective

openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
openvino_docs_MO_DG_prepare_model_convert_model_tutorials

@endsphinxdirective

**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.

**ONNX, PaddlePaddle, TensorFlow** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX, PaddlePaddle, or TensorFlow, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
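For example, reading an ONNX file with the OpenVINO Runtime Python API requires no conversion step at all. This is only a sketch and the file name ``model.onnx`` is a placeholder:

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.onnx")              # the ONNX file is read directly
   compiled_model = core.compile_model(model, "CPU")  # ready for inference
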

**MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to OpenVINO IR before running inference. The conversion is done with Model Optimizer and in some cases may involve intermediate steps.

Refer to the following articles for details on conversion for different formats and models:

* :doc:`How to convert ONNX <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
* :doc:`How to convert PaddlePaddle <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`
* :doc:`How to convert TensorFlow <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
* :doc:`How to convert MXNet <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`
* :doc:`How to convert Caffe <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`
* :doc:`How to convert Kaldi <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>`

* :doc:`Conversion examples for specific models <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>`

@endsphinxdirective