Converting a Model to Intermediate Representation (IR)
Use the `mo.py` script from the `<INSTALL_DIR>/deployment_tools/model_optimizer` directory to run the Model Optimizer and convert the model to the Intermediate Representation (IR).
The simplest way to convert a model is to run mo.py with a path to the input model file:
```sh
python3 mo.py --input_model INPUT_MODEL
```
Note: Some models require additional arguments to specify conversion parameters, such as `--scale`, `--scale_values`, `--mean_values`, and `--mean_file`. To learn when you need these parameters, refer to Converting a Model Using General Conversion Parameters.
The `mo.py` script is the universal entry point: it can deduce the framework that produced the input model from the standard extension of the model file:
- `.caffemodel` for Caffe* models
- `.pb` for TensorFlow* models
- `.params` for MXNet* models
- `.onnx` for ONNX* models
- `.nnet` for Kaldi* models
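The extension-based deduction described above can be sketched as a simple lookup. This is a minimal illustration mirroring the list of extensions, not the actual Model Optimizer code; `deduce_framework` is a hypothetical helper name:

```python
from pathlib import Path

# Hypothetical mapping mirroring the extension list above;
# not the actual Model Optimizer implementation.
EXTENSION_TO_FRAMEWORK = {
    ".caffemodel": "caffe",
    ".pb": "tf",
    ".params": "mxnet",
    ".onnx": "onnx",
    ".nnet": "kaldi",
}

def deduce_framework(model_path: str) -> str:
    """Return the framework name deduced from the model file extension."""
    ext = Path(model_path).suffix
    try:
        return EXTENSION_TO_FRAMEWORK[ext]
    except KeyError:
        # Non-standard extension: the user must pass --framework explicitly.
        raise ValueError(
            f"Cannot deduce framework from extension '{ext}'; "
            "specify --framework {tf,caffe,kaldi,onnx,mxnet} explicitly."
        )

print(deduce_framework("/user/models/model.pb"))  # tf
```

When the lookup fails, the sketch raises an error, which corresponds to the case below where `--framework` must be given explicitly.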
If the model files do not have standard extensions, you can use the `--framework {tf,caffe,kaldi,onnx,mxnet}` option to specify the framework type explicitly.
For example, the following commands are equivalent:
```sh
python3 mo.py --input_model /user/models/model.pb
python3 mo.py --framework tf --input_model /user/models/model.pb
```
To adjust the conversion process, you can use the general parameters described in Converting a Model Using General Conversion Parameters and the framework-specific parameters for:
- Caffe
- TensorFlow
- MXNet
- ONNX
- Kaldi
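As an illustration of how general and framework-specific parameters combine into a single invocation, a conversion command can be assembled programmatically. This is a sketch only: `build_mo_command` is a hypothetical helper, and the paths and parameter values are placeholders, not recommended settings:

```python
import shlex

def build_mo_command(input_model, framework=None,
                     mean_values=None, scale_values=None):
    """Assemble an mo.py invocation as an argument list (illustrative only)."""
    args = ["python3", "mo.py", "--input_model", input_model]
    if framework:
        args += ["--framework", framework]
    if mean_values:
        args += ["--mean_values", mean_values]
    if scale_values:
        args += ["--scale_values", scale_values]
    return args

# Placeholder model path and preprocessing values, for illustration.
cmd = build_mo_command(
    "/user/models/model.pb",
    framework="tf",
    mean_values="[127.5,127.5,127.5]",
    scale_values="[127.5,127.5,127.5]",
)
print(shlex.join(cmd))
```

Building the argument list separately from running it makes the final command easy to log or inspect before launching the (potentially long) conversion.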