Converting a Model to Intermediate Representation (IR)

Use the mo.py script from the <INSTALL_DIR>/deployment_tools/model_optimizer directory to run the Model Optimizer and convert the model to the Intermediate Representation (IR). The simplest way to convert a model is to run mo.py with a path to the input model file and an output directory where you have write permissions:

python3 mo.py --input_model <INPUT_MODEL> --output_dir <OUTPUT_MODEL_DIR>

Note: Some models require additional command-line arguments to specify conversion parameters, such as --scale, --scale_values, --mean_values, and --mean_file. To learn when you need to use these parameters, refer to Converting a Model Using General Conversion Parameters.
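For instance, a conversion with input normalization embedded into the IR might look like the following sketch (the model path and the mean/scale values are placeholders for illustration, not recommendations):

```shell
# Hypothetical example: convert a Caffe model and bake input normalization
# into the IR. The path and the per-channel values below are placeholders.
python3 mo.py \
    --input_model /user/models/squeezenet.caffemodel \
    --mean_values [123.68,116.78,103.94] \
    --scale_values [58.8,58.8,58.8] \
    --output_dir /user/models/ir
```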

The mo.py script is the universal entry point: it can deduce the framework that produced the input model from the standard extension of the model file:

  • .caffemodel - Caffe* models
  • .pb - TensorFlow* models
  • .params - MXNet* models
  • .onnx - ONNX* models
  • .nnet - Kaldi* models
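Conceptually, this extension-based deduction amounts to a lookup table keyed on the file suffix. The following is an illustrative sketch only, not Model Optimizer's actual implementation:

```python
# Minimal sketch of deducing a framework from a model file's extension.
# The mapping mirrors the list above; the function name is hypothetical.
import os

EXTENSION_TO_FRAMEWORK = {
    ".caffemodel": "caffe",
    ".pb": "tf",
    ".params": "mxnet",
    ".onnx": "onnx",
    ".nnet": "kaldi",
}

def deduce_framework(model_path):
    """Return the framework name for a model file, or None if unknown."""
    _, ext = os.path.splitext(model_path)
    return EXTENSION_TO_FRAMEWORK.get(ext.lower())

print(deduce_framework("/user/models/model.pb"))       # tf
print(deduce_framework("/user/models/model.onnx"))     # onnx
print(deduce_framework("/user/models/model.weights"))  # None
```

When the lookup yields nothing (the `None` case), the framework must be given explicitly, which is what the --framework option described below is for.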

If the model files do not have standard extensions, you can use the --framework {tf,caffe,kaldi,onnx,mxnet,paddle} option to specify the framework type explicitly.

For example, the following commands are equivalent:

python3 mo.py --input_model /user/models/model.pb
python3 mo.py --framework tf --input_model /user/models/model.pb

To adjust the conversion process, you can use the general parameters described in Converting a Model Using General Conversion Parameters as well as the framework-specific parameters for each supported framework.

See Also