Model Optimizer Developer Guide
Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
The Model Optimizer process assumes you have a network model trained using a supported deep learning framework. The scheme below illustrates the typical workflow for deploying a trained deep learning model.
Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine API offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model:
- .xml: Describes the network topology
- .bin: Contains the weights and biases binary data
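For illustration, here is a minimal sketch of reading and loading an IR pair with the Inference Engine Python API. The file names model.xml and model.bin, the input name "data", the input shape, and the "CPU" target are placeholders, not part of this guide:

```python
import numpy as np
from openvino.inference_engine import IECore

# Read the IR pair: the .xml topology and the .bin weights.
# "model.xml" and "model.bin" are placeholder file names.
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Compile the network for a target device ("CPU" here) and run inference.
# The input name "data" and the 1x3x224x224 shape are hypothetical.
exec_net = ie.load_network(network=net, device_name="CPU")
result = exec_net.infer({"data": np.zeros((1, 3, 224, 224), dtype=np.float32)})
```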
What's New in the Model Optimizer in this Release?
- Common changes:
  - Implemented several optimization transformations to replace sub-graphs of operations with HSwish, Mish, Swish, and SoftPlus operations.
  - Model Optimizer generates IR keeping shape-calculating sub-graphs by default. Previously, this behavior was triggered only when the "--keep_shape_ops" command line parameter was provided. The parameter is now ignored and will be removed in the next release. To trigger the legacy behavior and generate an IR for a fixed input shape (folding ShapeOf operations and shape-calculating sub-graphs to Constant), use the "--static_shape" command line parameter. Note that changing the model input shape using the Inference Engine API at runtime may fail for such an IR (see the sketch after this list).
  - Fixed Model Optimizer conversion issues that resulted in non-reshapeable IRs when using the Inference Engine reshape API.
  - Enabled transformations to fix non-reshapeable patterns in the original networks:
    - Hardcoded Reshape
      - In the Reshape(2D)->MatMul pattern
      - In the Reshape->Transpose->Reshape pattern when it can be fused to the ShuffleChannels or DepthToSpace operation
    - Hardcoded Interpolate
      - In the Interpolate->Concat pattern
  - Added a dedicated requirements file for TensorFlow 2.X as well as dedicated install prerequisites scripts.
  - Replaced the SparseToDense operation with ScatterNDUpdate-4.
- ONNX*:
  - Added the ability to specify the model output tensor name using the "--output" command line parameter.
  - Added support for the following operations:
    - Acosh
    - Asinh
    - Atanh
    - DepthToSpace-11, 13
    - DequantizeLinear-10 (zero_point must be constant)
    - HardSigmoid-1, 6
    - QuantizeLinear-10 (zero_point must be constant)
    - ReduceL1-11, 13
    - ReduceL2-11, 13
    - Resize-11, 13 (except mode="nearest" with 5D+ input, mode="tf_crop_and_resize", and attributes exclude_outside and extrapolation_value with non-zero values)
    - ScatterND-11, 13
    - SpaceToDepth-11, 13
- TensorFlow*:
  - Added support for the following operations:
    - Acosh
    - Asinh
    - Atanh
    - CTCLoss
    - EuclideanNorm
    - ExtractImagePatches
    - FloorDiv
- MXNet*:
  - Added support for the following operations:
    - Acosh
    - Asinh
    - Atanh
- Kaldi*:
  - Fixed a bug with ParallelComponent support. It is now fully supported with no restrictions.
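To illustrate the shape-handling change described under the common changes above: an IR generated with the default (shape-preserving) behavior can have its input shape changed at runtime, while an IR generated with "--static_shape" may fail on the same call because its shape-calculating sub-graphs were folded to constants. A minimal sketch, assuming a hypothetical input named "data" and placeholder file names:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Works for an IR generated with the default shape-preserving behavior;
# may raise an error for an IR generated with --static_shape, because the
# shape-calculating sub-graphs were folded to constants.
net.reshape({"data": (1, 3, 448, 448)})  # hypothetical input name and shape
exec_net = ie.load_network(network=net, device_name="CPU")
```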
NOTE: Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.
Table of Contents
- Preparing and Optimizing your Trained Model with Model Optimizer
  - Converting a Model to Intermediate Representation (IR)
    - Converting a Model Using General Conversion Parameters
    - Converting Your Caffe* Model
    - Converting Your TensorFlow* Model
    - Converting Your MXNet* Model
    - Converting Your Kaldi* Model
    - Converting Your ONNX* Model
  - Model Optimization Techniques
  - Cutting parts of the model
  - Sub-graph Replacement in Model Optimizer
  - Supported Framework Layers
  - Intermediate Representation and Operation Sets
    - Operations Specification
    - Intermediate Representation suitable for INT8 inference
Typical Next Step: Preparing and Optimizing your Trained Model with Model Optimizer
