
Model Optimizer Developer Guide

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.

The Model Optimizer process assumes you have a network model trained using a supported deep learning framework. The scheme below illustrates the typical workflow for deploying a trained deep learning model:

Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model:

  • .xml - Describes the network topology

  • .bin - Contains the binary data of the weights and biases
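Because the .xml file is plain XML describing the topology, it can be inspected with standard tooling. Below is a minimal sketch using Python's xml.etree; note that the embedded fragment is hand-written and schematic for illustration only, not a real Model Optimizer output:

```python
import xml.etree.ElementTree as ET

# Hand-written, schematic IR-like fragment for illustration only;
# a real .xml produced by Model Optimizer is far more detailed.
IR_XML = """
<net name="toy" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="output" type="Result"/>
  </layers>
  <edges>
    <edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
    <edge from-layer="1" from-port="1" to-layer="2" to-port="0"/>
  </edges>
</net>
"""

root = ET.fromstring(IR_XML)
# Collect (name, type) for every layer in the topology.
layers = [(l.get("name"), l.get("type")) for l in root.find("layers")]
print(layers)  # [('input', 'Parameter'), ('conv1', 'Convolution'), ('output', 'Result')]
```

The .bin file, in contrast, is a raw binary blob; the offsets recorded in the .xml tell the Inference Engine where each layer's weights live inside it.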

What's New in the Model Optimizer in this Release?

  • Common changes:

    • Implemented generation of a compressed OpenVINO IR suitable for INT8 inference, which takes up to 4 times less disk space than an expanded one. Use the --disable_weights_compression Model Optimizer command-line parameter to get an expanded version.
    • Implemented an optimization transformation that replaces a sub-graph containing the Erf operation with a single GeLU operation.
    • Implemented an optimization transformation that replaces an upsampling pattern represented as a sequence of Split and Concat operations with a single Interpolate operation.
    • Fixed a number of Model Optimizer bugs to generate reshape-able IRs of many models with the command-line parameter --keep_shape_ops.
    • Fixed a number of Model Optimizer transformations to set operation names in the IR equal to the original framework model operation names.
    • The following operations are no longer generated with version="opset1": MVN, ROIPooling, ReorgYolo. They became part of the new opset2 operation set and are generated with version="opset2". Before this fix, the operations were generated with version="opset1" by mistake; they were not part of the opset1 nGraph namespace. The opset1 specification was fixed accordingly.
  • ONNX*:

    • Added support for the MeanVarianceNormalization operation when normalization is performed over spatial dimensions.
  • TensorFlow*:

    • Added support for the TensorFlow Object Detection models version 1.15.X.
    • Added support for the following operations: BatchToSpaceND, SpaceToBatchND, Floor.
  • MXNet*:

    • Added support for the following operations:
      • Reshape with input shape values equal to -2, -3, and -4.
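For reference, MXNet's Reshape gives these negative values special meanings (per the MXNet documentation): -2 copies all remaining input dimensions, -3 merges the next two input dimensions into one, and -4 splits the next input dimension into the two values that follow it. The sketch below is a simplified illustration of how such values resolve to a concrete output shape; it ignores MXNet's 0 and -1 conventions and is not Model Optimizer code:

```python
def resolve_mxnet_reshape(in_shape, target):
    """Resolve MXNet Reshape special values -2, -3, -4 against in_shape.

    Simplified sketch of the documented MXNet semantics:
      -2  copy all remaining input dimensions
      -3  merge the next two input dimensions into one
      -4  split the next input dimension into the two values that follow
    """
    out = []
    i = 0  # walks over in_shape
    t = 0  # walks over target
    while t < len(target):
        v = target[t]
        if v == -2:
            out.extend(in_shape[i:])
            i = len(in_shape)
        elif v == -3:
            out.append(in_shape[i] * in_shape[i + 1])
            i += 2
        elif v == -4:
            a, b = target[t + 1], target[t + 2]
            out.extend([a, b])
            t += 2  # skip the two factor values
            i += 1
        else:
            out.append(v)
            i += 1
        t += 1
    return tuple(out)

print(resolve_mxnet_reshape((2, 3, 4), (-3, 4)))        # (6, 4)
print(resolve_mxnet_reshape((2, 3, 4), (-2,)))          # (2, 3, 4)
print(resolve_mxnet_reshape((2, 3, 4), (2, 3, -4, 2, 2)))  # (2, 3, 2, 2)
```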
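The Erf-to-GeLU fusion listed under common changes rests on the identity GELU(x) = 0.5 · x · (1 + erf(x / √2)); a sub-graph spelling out this formula with elementary operations can be collapsed into one GeLU node. A minimal numerical sketch of the identity itself (plain Python, not the transformation):

```python
import math

def gelu_via_erf(x):
    # GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# GELU behaves like the identity for large positive inputs
# and vanishes for large negative ones.
print(gelu_via_erf(0.0))  # 0.0
```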

NOTE: Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.

Table of Contents

Typical Next Step: Introduction to Intel® Deep Learning Deployment Toolkit