# Model Optimizer Developer Guide

## Introduction
Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.

The Model Optimizer workflow assumes you have a network model trained with one of the supported deep learning frameworks — Caffe\*, TensorFlow\*, Kaldi\*, MXNet\* — or converted to the ONNX\* format. Model Optimizer produces an Intermediate Representation (IR) of the network, which can be inferred with the Inference Engine.
> **NOTE**: Model Optimizer does not infer models. It is an offline tool that runs before inference takes place.
The scheme below illustrates the typical workflow for deploying a trained deep learning model:
The IR is a pair of files describing the model:

- `.xml` - Describes the network topology
- `.bin` - Contains the weights and biases binary data
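As a brief illustration of how the IR pair is consumed downstream (a sketch only — the file names `model.xml`/`model.bin` and the `CPU` device are placeholders, and the snippet assumes the OpenVINO™ toolkit with its Inference Engine Python API is installed):

```python
# Sketch: load an IR pair with the Inference Engine Python API.
# "model.xml" / "model.bin" are hypothetical placeholder file names.
from openvino.inference_engine import IECore

ie = IECore()
# Parse the topology (.xml) together with its weights (.bin)
net = ie.read_network(model="model.xml", weights="model.bin")
# Compile the network for a target device, e.g. CPU
exec_net = ie.load_network(network=net, device_name="CPU")
# exec_net.infer(...) can then be called with input data
```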
Below is a simple command running Model Optimizer to generate an IR for the input model:

```sh
python3 mo.py --input_model INPUT_MODEL
```
To learn about all Model Optimizer parameters and conversion techniques, see the Converting a Model to IR page.
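As a sketch of a fuller invocation (the model path, shape, and output directory below are hypothetical placeholders, not values from this guide):

```sh
# Hypothetical example; substitute your own model file and paths.
# --input_shape fixes the input shape when it is undefined in the model,
# --data_type FP16 stores weights in half precision in the resulting IR,
# --output_dir chooses where the .xml/.bin pair is written.
python3 mo.py --input_model model.pb \
              --input_shape [1,224,224,3] \
              --data_type FP16 \
              --output_dir ./ir
```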
> **TIP**: You can get a quick start with Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref openvino_docs_get_started_get_started_dl_workbench) (DL Workbench). [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is the OpenVINO™ toolkit UI that enables you to import a model, analyze its performance and accuracy, visualize the outputs, and optimize and prepare the model for deployment on various Intel® platforms.
## Videos

- Model Optimizer Concept. Duration: 3:56
- Model Optimizer Basic Operation. Duration: 2:57
- Choosing the Right Precision. Duration: 4:18
