Feature/azaytsev/mo devguide changes (#6405)

* MO devguide edits

* MO devguide edits

* MO devguide edits

* MO devguide edits

* MO devguide edits

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Additional edits

* Additional edits

* Updated the workflow diagram

* Minor fix

* Experimenting with videos

* Updated the workflow diagram

* Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer

* Rolled back

* Revert "Rolled back"

This reverts commit 6a4a3e1765.

* Revert "Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer"

This reverts commit 0810bd534f.

* Fixed ie_docs.xml, Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer

* Fixed ie_docs.xml

* Minor fix

* <details> tag issue

* <details> tag issue

* Fix <details> tag issue

* Fix <details> tag issue

* Fix <details> tag issue
This commit is contained in:
Andrey Zaytsev
2021-06-29 03:59:24 +03:00
committed by GitHub
parent 4833c8db72
commit c2e8c3bd92
8 changed files with 56 additions and 218 deletions


@@ -1,135 +1,50 @@
# Model Optimizer Developer Guide {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
## Introduction
Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
The Model Optimizer process assumes you have a network model trained using a supported deep learning framework. The scheme below illustrates the typical workflow for deploying a trained deep learning model:
The Model Optimizer process assumes you have a network model trained using one of the supported deep learning frameworks: Caffe*, TensorFlow*, Kaldi*, MXNet*, or a model converted to the ONNX* format. Model Optimizer produces an Intermediate Representation (IR) of the network, which can be inferred with the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
> **NOTE**: Model Optimizer does not infer models. Model Optimizer is an offline tool that runs before the inference takes place.
The scheme below illustrates the typical workflow for deploying a trained deep learning model:
![](img/workflow_steps.png)
Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine API offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model:
The IR is a pair of files describing the model:
* <code>.xml</code> - Describes the network topology
* <code>.bin</code> - Contains the weights and biases binary data.
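Because the topology half of the pair is plain XML, it can be inspected with standard XML tooling. The snippet below is a simplified, hand-written sketch in the spirit of an IR `.xml` file, not a complete IR; the layer names and attributes are illustrative only:

```python
import xml.etree.ElementTree as ET

# A simplified, hand-written fragment in the spirit of an IR .xml file;
# layer names and attributes here are illustrative, not a complete IR.
IR_XML = """<?xml version="1.0"?>
<net name="example" version="10">
    <layers>
        <layer id="0" name="input" type="Parameter"/>
        <layer id="1" name="conv1" type="Convolution"/>
        <layer id="2" name="output" type="Result"/>
    </layers>
</net>
"""

root = ET.fromstring(IR_XML)
# List every layer's name and operation type from the topology description.
layers = [(l.get("name"), l.get("type")) for l in root.find("layers")]
print(layers)  # → [('input', 'Parameter'), ('conv1', 'Convolution'), ('output', 'Result')]
```

The `.bin` file holds the corresponding weight and bias tensors referenced by the topology.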
Below is a simple command running Model Optimizer to generate an IR for the input model:
```sh
python3 mo.py --input_model INPUT_MODEL
```
To learn about all Model Optimizer parameters and conversion techniques, see the [Converting a Model to IR](prepare_model/convert_model/Converting_Model.md) page.
> **TIP**: You can get a quick start with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref
> openvino_docs_get_started_get_started_dl_workbench) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is the OpenVINO™ toolkit UI that enables you to
> import a model, analyze its performance and accuracy, visualize the outputs, and optimize and prepare the model for
> deployment on various Intel® platforms.
## What's New in the Model Optimizer in this Release?
## Videos
* Common changes:
* Implemented several optimization transformations to replace sub-graphs of operations with HSwish, Mish, Swish and SoftPlus operations.
* Model Optimizer generates IR keeping shape-calculating sub-graphs **by default**. Previously, this behavior was triggered by the "--keep_shape_ops" command line parameter. The key is ignored in this release and will be deleted in the next release. To trigger the legacy behavior of generating an IR for a fixed input shape (folding ShapeOf operations and shape-calculating sub-graphs to Constant), use the "--static_shape" command line parameter. Changing the model input shape using the Inference Engine API at runtime may fail for such an IR.
* Fixed Model Optimizer conversion issues that resulted in non-reshapeable IRs when using the Inference Engine reshape API.
* Enabled transformations to fix non-reshapeable patterns in the original networks:
* Hardcoded Reshape
* In Reshape(2D)->MatMul pattern
* Reshape->Transpose->Reshape when the pattern can be fused to the ShuffleChannels or DepthToSpace operation
* Hardcoded Interpolate
* In Interpolate->Concat pattern
* Added a dedicated requirements file for TensorFlow 2.X as well as the dedicated install prerequisites scripts.
* Replaced the SparseToDense operation with ScatterNDUpdate-4.
* ONNX*:
* Added the ability to specify the model output **tensor** name using the "--output" command line parameter.
* Added support for the following operations:
* Acosh
* Asinh
* Atanh
* DepthToSpace-11, 13
* DequantizeLinear-10 (zero_point must be constant)
* HardSigmoid-1,6
* QuantizeLinear-10 (zero_point must be constant)
* ReduceL1-11, 13
* ReduceL2-11, 13
* Resize-11, 13 (except mode="nearest" with 5D+ input, mode="tf_crop_and_resize", and attributes exclude_outside and extrapolation_value with non-zero values)
* ScatterND-11, 13
* SpaceToDepth-11, 13
* TensorFlow*:
* Added support for the following operations:
* Acosh
* Asinh
* Atanh
* CTCLoss
* EuclideanNorm
* ExtractImagePatches
* FloorDiv
* MXNet*:
* Added support for the following operations:
* Acosh
* Asinh
* Atanh
* Kaldi*:
* Fixed a bug with ParallelComponent support. It is now fully supported with no restrictions.
> **NOTE:**
> [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
## Table of Contents
* [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md)
* [Configuring Model Optimizer](prepare_model/Config_Model_Optimizer.md)
* [Converting a Model to Intermediate Representation (IR)](prepare_model/convert_model/Converting_Model.md)
* [Converting a Model Using General Conversion Parameters](prepare_model/convert_model/Converting_Model_General.md)
* [Converting Your Caffe* Model](prepare_model/convert_model/Convert_Model_From_Caffe.md)
* [Converting Your TensorFlow* Model](prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
* [Converting BERT from TensorFlow](prepare_model/convert_model/tf_specific/Convert_BERT_From_Tensorflow.md)
* [Converting GNMT from TensorFlow](prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md)
* [Converting YOLO from DarkNet to TensorFlow and then to IR](prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md)
* [Converting Wide and Deep Models from TensorFlow](prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md)
* [Converting FaceNet from TensorFlow](prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md)
* [Converting DeepSpeech from TensorFlow](prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md)
* [Converting Language Model on One Billion Word Benchmark from TensorFlow](prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md)
* [Converting Neural Collaborative Filtering Model from TensorFlow*](prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md)
* [Converting TensorFlow* Object Detection API Models](prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md)
* [Converting TensorFlow*-Slim Image Classification Model Library Models](prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md)
* [Converting CRNN Model from TensorFlow*](prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md)
* [Converting Your MXNet* Model](prepare_model/convert_model/Convert_Model_From_MxNet.md)
* [Converting a Style Transfer Model from MXNet](prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md)
* [Converting Your Kaldi* Model](prepare_model/convert_model/Convert_Model_From_Kaldi.md)
* [Converting Your ONNX* Model](prepare_model/convert_model/Convert_Model_From_ONNX.md)
* [Converting Faster-RCNN ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_Faster_RCNN.md)
* [Converting Mask-RCNN ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_Mask_RCNN.md)
* [Converting GPT2 ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_GPT2.md)
* [Converting Your PyTorch* Model](prepare_model/convert_model/Convert_Model_From_PyTorch.md)
* [Converting F3Net PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_F3Net.md)
* [Converting QuartzNet PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md)
* [Converting YOLACT PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md)
* [Model Optimizations Techniques](prepare_model/Model_Optimization_Techniques.md)
* [Cutting parts of the model](prepare_model/convert_model/Cutting_Model.md)
* [Sub-graph Replacement in Model Optimizer](prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)
* [Supported Framework Layers](prepare_model/Supported_Frameworks_Layers.md)
* [Intermediate Representation and Operation Sets](IR_and_opsets.md)
* [Operations Specification](../ops/opset.md)
* [Intermediate Representation suitable for INT8 inference](prepare_model/convert_model/IR_suitable_for_INT8_inference.md)
* [Model Optimizer Extensibility](prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
* [Extending Model Optimizer with New Primitives](prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md)
* [Extending Model Optimizer with Caffe Python Layers](prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md)
* [Extending Model Optimizer with Custom MXNet* Operations](prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md)
* [Legacy Mode for Caffe* Custom Layers](prepare_model/customize_model_optimizer/Legacy_Mode_for_Caffe_Custom_Layers.md)
* [Model Optimizer Frequently Asked Questions](prepare_model/Model_Optimizer_FAQ.md)
* [Known Issues](Known_Issues_Limitations.md)
**Typical Next Step:** [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md)
## Video: Model Optimizer Concept
[![](https://img.youtube.com/vi/Kl1ptVb7aI8/0.jpg)](https://www.youtube.com/watch?v=Kl1ptVb7aI8)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/Kl1ptVb7aI8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<table>
<tr>
<td><iframe width="220" src="https://www.youtube.com/embed/Kl1ptVb7aI8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></td>
<td><iframe width="220" src="https://www.youtube.com/embed/BBt1rseDcy0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></td>
<td><iframe width="220" src="https://www.youtube.com/embed/RF8ypHyiKrY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></td>
</tr>
<tr>
<td><strong>Model Optimizer Concept</strong>. <br>Duration: 3:56.</td>
<td><strong>Model Optimizer Basic<br> Operation</strong>. <br>Duration: 2:57.</td>
<td><strong>Choosing the Right Precision</strong>. <br>Duration: 4:18.</td>
</tr>
</table>
\endhtmlonly
## Video: Model Optimizer Basic Operation
[![](https://img.youtube.com/vi/BBt1rseDcy0/0.jpg)](https://www.youtube.com/watch?v=BBt1rseDcy0)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/BBt1rseDcy0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
## Video: Choosing the Right Precision
[![](https://img.youtube.com/vi/RF8ypHyiKrY/0.jpg)](https://www.youtube.com/watch?v=RF8ypHyiKrY)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/RF8ypHyiKrY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e22bc22d614c7335ae461a8ce449ea8695973d755faca718cf74b95972c94e2
size 19773
oid sha256:5281f26cbaa468dc4cafa4ce2fde35d338fe0f658bbb796abaaf793e951939f6
size 13943


@@ -1,8 +1,6 @@
# Configuring the Model Optimizer {#openvino_docs_MO_DG_prepare_model_Config_Model_Optimizer}
# Installing Model Optimizer Pre-Requisites {#openvino_docs_MO_DG_prepare_model_Config_Model_Optimizer}
You must configure the Model Optimizer for the framework that was used to train
the model. This section tells you how to configure the Model Optimizer either
through scripts or by using a manual process.
Before running the Model Optimizer, you must install the Model Optimizer pre-requisites for the framework that was used to train the model. This section tells you how to install the pre-requisites either through scripts or by using a manual process.
## Using Configuration Scripts
@@ -154,6 +152,10 @@ pip3 install -r requirements_onnx.txt
```
## Using the protobuf Library in the Model Optimizer for Caffe\*
\htmlonly<details>\endhtmlonly
<summary>Click to expand</summary>
These procedures require:
@@ -166,7 +168,7 @@ By default, the library executes pure Python\* language implementation,
which is slow. These steps show how to use the faster C++ implementation
of the protobuf library on Windows OS or Linux OS.
### Using the protobuf Library on Linux\* OS
#### Using the protobuf Library on Linux\* OS
To use the C++ implementation of the protobuf library on Linux, it is enough to
set the environment variable:
@@ -174,7 +176,7 @@ set up the environment variable:
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
```
### <a name="protobuf-install-windows"></a>Using the protobuf Library on Windows\* OS
#### <a name="protobuf-install-windows"></a>Using the protobuf Library on Windows\* OS
On Windows, pre-built protobuf packages for Python versions 3.4, 3.5, 3.6,
and 3.7 are provided with the installation package and can be found in
@@ -262,6 +264,10 @@ python3 -m easy_install dist/protobuf-3.6.1-py3.6-win-amd64.egg
set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
```
\htmlonly
</details>
\endhtmlonly
## See Also
* [Converting a Model to Intermediate Representation (IR)](convert_model/Converting_Model.md)


@@ -1,63 +0,0 @@
# Preparing and Optimizing Your Trained Model {#openvino_docs_MO_DG_prepare_model_Prepare_Trained_Model}
Inference Engine enables _deploying_ your network model trained with any of the supported deep learning frameworks: Caffe\*, TensorFlow\*, Kaldi\*, MXNet\*, or converted to the ONNX\* format. To perform inference, the Inference Engine does not operate on the original model, but on its Intermediate Representation (IR), which is optimized for execution on end-point target devices. To generate an IR for your trained model, use the Model Optimizer tool.
## How the Model Optimizer Works
Model Optimizer loads a model into memory, reads it, builds the internal representation of the model, optimizes it, and produces the Intermediate Representation. The Intermediate Representation is the only format the Inference Engine accepts.
> **NOTE**: Model Optimizer does not infer models. Model Optimizer is an offline tool that runs before the inference takes place.
Model Optimizer has two main purposes:
* **Produce a valid Intermediate Representation**. If this main conversion artifact is not valid, the Inference Engine cannot run. The primary responsibility of the Model Optimizer is to produce the two files (`.xml` and `.bin`) that form the Intermediate Representation.
* **Produce an optimized Intermediate Representation**. Pre-trained models contain layers that are important for training, such as the `Dropout` layer. These layers are useless during inference and might increase the inference time. In many cases, these operations can be automatically removed from the resulting Intermediate Representation. However, if a group of operations can be represented as a single mathematical operation, and thus as a single operation node in the model graph, the Model Optimizer recognizes such patterns and replaces the group with a single operation node. The result is an Intermediate Representation that has fewer operation nodes than the original model, which decreases the inference time.
To produce a valid Intermediate Representation, the Model Optimizer must be able to read the original model operations, handle their properties and represent them in Intermediate Representation format, while maintaining validity of the resulting Intermediate Representation. The resulting model consists of operations described in the [Operations Specification](../../ops/opset.md).
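The fusion idea behind the second purpose can be sketched by folding a BatchNorm node into the preceding convolution's per-channel weights. This is a simplified scalar illustration of the standard folding formula, not the Model Optimizer's actual code:

```python
import math

def fold_batchnorm(weights, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold a per-channel BatchNorm into the preceding convolution's
    weights and bias, so the BatchNorm node disappears from the graph."""
    new_w, new_b = [], []
    for w, b, g, bt, m, v in zip(weights, bias, gamma, beta, mean, var):
        scale = g / math.sqrt(v + eps)          # BN's per-channel multiplier
        new_w.append(w * scale)                 # scale the channel's weights
        new_b.append((b - m) * scale + bt)      # shift the channel's bias
    return new_w, new_b

# One output channel: conv(w=2, b=1) followed by BN(gamma=1, beta=0, mean=1, var=1).
w, b = fold_batchnorm([2.0], [1.0], [1.0], [0.0], [1.0], [1.0])
```

For any input `x`, `x * w[0] + b[0]` now matches the original conv-then-BatchNorm output, computed in one node instead of two.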
## What You Need to Know about Your Model
Many common layers exist across known frameworks and neural network topologies. Examples of these layers are `Convolution`, `Pooling`, and `Activation`. To read the original model and produce the Intermediate Representation of a model, the Model Optimizer must be able to work with these layers.
The full list of supported layers depends on the framework and can be found in the [Supported Framework Layers](Supported_Frameworks_Layers.md) section. If your topology contains only layers from this list, as is the case for most topologies, the Model Optimizer easily creates the Intermediate Representation. After that, you can proceed to work with the Inference Engine.
However, if you use a topology with layers that are not recognized by the Model Optimizer out of the box, see [Custom Layers in the Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md) to learn how to work with custom layers.
## Model Optimizer Directory Structure
After installation with OpenVINO&trade; toolkit or Intel&reg; Deep Learning Deployment Toolkit, the Model Optimizer folder has the following structure (some directories omitted for clarity):
```
|-- model_optimizer
|-- extensions
|-- front - Front-End framework agnostic transformations (operations output shapes are not defined yet).
|-- caffe - Front-End Caffe-specific transformations and Caffe layers extractors
|-- CustomLayersMapping.xml.example - example of file for registering custom Caffe layers (compatible with the 2017R3 release)
|-- kaldi - Front-End Kaldi-specific transformations and Kaldi operations extractors
|-- mxnet - Front-End MxNet-specific transformations and MxNet symbols extractors
|-- onnx - Front-End ONNX-specific transformations and ONNX operators extractors
|-- tf - Front-End TensorFlow-specific transformations, TensorFlow operations extractors, sub-graph replacements configuration files.
|-- middle - Middle-End framework agnostic transformations (layers output shapes are defined).
|-- back - Back-End framework agnostic transformations (preparation for IR generation).
|-- mo
|-- back - Back-End logic: contains IR emitting logic
|-- front - Front-End logic: contains matching between Framework-specific layers and IR specific, calculation of output shapes for each registered layer
|-- graph - Graph utilities to work with internal IR representation
|-- middle - Graph transformations - optimizations of the model
|-- pipeline - Sequence of steps required to create IR for each framework
|-- utils - Utility functions
|-- tf_call_ie_layer - Source code that enables TensorFlow fallback in Inference Engine during model inference
|-- mo.py - Centralized entry point that can be used for any supported framework
|-- mo_caffe.py - Entry point particularly for Caffe
|-- mo_kaldi.py - Entry point particularly for Kaldi
|-- mo_mxnet.py - Entry point particularly for MXNet
|-- mo_onnx.py - Entry point particularly for ONNX
|-- mo_tf.py - Entry point particularly for TensorFlow
```
The following sections provide the information about how to use the Model Optimizer, from configuring the tool and generating an IR for a given model to customizing the tool for your needs:
* [Configuring Model Optimizer](Config_Model_Optimizer.md)
* [Converting a Model to Intermediate Representation](convert_model/Converting_Model.md)
* [Custom Layers in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md)
* [Model Optimization Techniques](Model_Optimization_Techniques.md)
* [Model Optimizer Frequently Asked Questions](Model_Optimizer_FAQ.md)


@@ -1,38 +1,20 @@
# Converting a Model to Intermediate Representation (IR) {#openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model}
Use the <code>mo.py</code> script from the `<INSTALL_DIR>/deployment_tools/model_optimizer` directory to run the Model Optimizer and convert the model to the Intermediate Representation (IR).
The simplest way to convert a model is to run <code>mo.py</code> with a path to the input model file and an output directory where you have write permissions:
Use the <code>mo.py</code> script from the `<INSTALL_DIR>/deployment_tools/model_optimizer` directory to run the Model Optimizer and convert the model to the Intermediate Representation (IR):
```sh
python3 mo.py --input_model INPUT_MODEL --output_dir <OUTPUT_MODEL_DIR>
```
You must have write permissions for the output directory.
> **NOTE**: Some models require using additional arguments to specify conversion parameters, such as `--scale`, `--scale_values`, `--mean_values`, `--mean_file`. To learn about when you need to use these parameters, refer to [Converting a Model Using General Conversion Parameters](Converting_Model_General.md).
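The arithmetic that per-channel mean and scale parameters embed into the converted model amounts to `(x - mean) / scale`. The sketch below is illustrative code with hypothetical sample values, not the Model Optimizer's implementation:

```python
def normalize(pixel, mean_values, scale_values):
    """Per-channel (x - mean) / scale: the preprocessing that
    mean/scale conversion parameters bake into the converted model."""
    return [(x - m) / s for x, m, s in zip(pixel, mean_values, scale_values)]

# Hypothetical per-channel values for a 3-channel input.
result = normalize([0.0, 127.5, 255.0], [127.5] * 3, [127.5] * 3)
print(result)  # → [-1.0, 0.0, 1.0]
```

Embedding this step in the IR means the application does not have to normalize inputs at inference time.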
The <code>mo.py</code> script is the universal entry point that can deduce the framework that produced the input model from the standard extension of the model file:
* `.caffemodel` - Caffe\* models
* `.pb` - TensorFlow\* models
* `.params` - MXNet\* models
* `.onnx` - ONNX\* models
* `.nnet` - Kaldi\* models
If the model files do not have standard extensions, you can use the ``--framework {tf,caffe,kaldi,onnx,mxnet}`` option to specify the framework type explicitly.
For example, the following commands are equivalent:
```sh
python3 mo.py --input_model /user/models/model.pb
```
```sh
python3 mo.py --framework tf --input_model /user/models/model.pb
```
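The deduction described above can be sketched as a simple extension-to-framework lookup. The mapping mirrors the documented extensions; the real `mo.py` logic is richer, so treat this as an illustration only:

```python
import os

# Extension-to-framework mapping, mirroring the documented standard extensions.
EXT_TO_FRAMEWORK = {
    ".caffemodel": "caffe",
    ".pb": "tf",
    ".params": "mxnet",
    ".onnx": "onnx",
    ".nnet": "kaldi",
}

def deduce_framework(model_path):
    ext = os.path.splitext(model_path)[1]
    # Returns None for unknown extensions: that is when --framework is needed.
    return EXT_TO_FRAMEWORK.get(ext)

print(deduce_framework("/user/models/model.pb"))  # → tf
```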
> **NOTE**: Some models require using additional arguments to specify conversion parameters, such as `--input_shape`, `--scale`, `--scale_values`, `--mean_values`, `--mean_file`. To learn about when you need to use these parameters, refer to [Converting a Model Using General Conversion Parameters](Converting_Model_General.md).
To adjust the conversion process, you may use the general parameters defined in [Converting a Model Using General Conversion Parameters](Converting_Model_General.md) and
framework-specific parameters for:
* [Caffe](Convert_Model_From_Caffe.md),
* [TensorFlow](Convert_Model_From_TensorFlow.md),
* [MXNet](Convert_Model_From_MxNet.md),
* [ONNX](Convert_Model_From_ONNX.md),
* [Kaldi](Convert_Model_From_Kaldi.md).
* [Caffe](Convert_Model_From_Caffe.md)
* [TensorFlow](Convert_Model_From_TensorFlow.md)
* [MXNet](Convert_Model_From_MxNet.md)
* [ONNX](Convert_Model_From_ONNX.md)
* [Kaldi](Convert_Model_From_Kaldi.md)
## See Also


@@ -19,11 +19,10 @@ limitations under the License.
<doxygenlayout xmlns:xi="http://www.w3.org/2001/XInclude" version="1.0">
<!-- Navigation index tabs for HTML output -->
<navindex>
<tab id="converting_and_preparing_models" type="usergroup" title="Converting and Preparing Models" url="">
<tab id="converting_and_preparing_models" type="usergroup" title="Converting and Preparing Models" url="@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide">
<!-- Model Optimizer Developer Guide-->
<tab type="usergroup" title="Model Optimizer Developer Guide" url="@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide">
<tab type="usergroup" title="Preparing and Optimizing Your Trained Model" url="@ref openvino_docs_MO_DG_prepare_model_Prepare_Trained_Model">
<tab type="user" title="Configuring the Model Optimizer" url="@ref openvino_docs_MO_DG_prepare_model_Config_Model_Optimizer"/>
<tab type="user" title="Installing Model Optimizer Pre-Requisites" url="@ref openvino_docs_MO_DG_prepare_model_Config_Model_Optimizer"/>
<tab type="usergroup" title="Converting a Model to Intermediate Representation (IR)" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model">
<tab type="user" title="Converting a Model Using General Conversion Parameters" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General"/>
<tab type="user" title="Converting a Caffe* Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe"/>
@@ -78,7 +77,6 @@ limitations under the License.
<tab type="user" title="Legacy Mode for Caffe* Custom Layers" url="@ref openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Legacy_Mode_for_Caffe_Custom_Layers"/>
<tab type="user" title="[DEPRECATED] Offloading Sub-Graph Inference" url="https://docs.openvinotoolkit.org/2020.1/_docs_MO_DG_prepare_model_customize_model_optimizer_Offloading_Sub_Graph_Inference.html"/>
</tab>
</tab>
<tab type="user" title="Model Optimizer Frequently Asked Questions" url="@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ"/>
<tab type="user" title="Known Issues" url="@ref openvino_docs_MO_DG_Known_Issues_Limitations"/>
</tab>
@@ -366,4 +364,4 @@ limitations under the License.
<tab type="user" title="Inference Engine Plugin Development Guide" url="ie_plugin_api/index.html"/>
</tab>
</navindex>
</doxygenlayout>
</doxygenlayout>


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b630a7deb8bbcf1d5384c351baff7505dc96a1a5d59b5f6786845d549d93d9ab
size 36881
oid sha256:5281f26cbaa468dc4cafa4ce2fde35d338fe0f658bbb796abaaf793e951939f6
size 13943


@@ -87,7 +87,7 @@ Networks training is typically done on high-end data centers, using popular trai
![](../img/workflow_steps.png)
As described in the [Model Optimizer Guide](../MO_DG/prepare_model/Prepare_Trained_Model.md), there are a number of device-agnostic optimizations the tool performs. For example, certain primitives like linear operations (BatchNorm and ScaleShift), are automatically fused into convolutions. Generally, these layers should not be manifested in the resulting IR:
As described in the [Model Optimizer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md), there are a number of device-agnostic optimizations the tool performs. For example, certain primitives like linear operations (BatchNorm and ScaleShift), are automatically fused into convolutions. Generally, these layers should not be manifested in the resulting IR:
![](../img/resnet_269.png)