MO dev guide refactoring (#3266) (#3595)

* Release mo dev guide refactoring (#3266)

* Updated MO extension guide

* Minor change and adding svg images

* Added additional information about operation extractors. Fixed links and markdown issues

* Added missing file with information about Caffe Python layers and image for MO transformations dependencies graph

* Added section with common graph transformations attributes and diagram with anchor transformations. Added list of available front phase transformations

* Added description of front-phase transformations except the scope-defined and points-defined ones. Removed the legacy document and examples for such transformations.

* Added sections about node name pattern defined front phase transformations. Copy-pasted the old one for the points defined front transformation

* Added description of the rest of front transformations and all middle and back phase transformations

* Refactored Legacy_Mode_for_Caffe_Custom_Layers and updated the Customize_Model_Optimizer with information about extractors order

* Added TOC for the MO Dev guide document and updated SVG images with PNG ones

* Fixed broken link. Removed redundant image

* Fixed broken links

* Added information about attributes 'run_not_recursively', 'force_clean_up' and 'force_shape_inference' of the transformation

* Code review comments

* Added a section about `Port`s

* Extended Ports description with examples

* Added information about Connections

* Updated MO README.md and removed a lot of redundant and misleading information

* Updates to the Customize_Model_Optimizer.md

* More updates to the Customize_Model_Optimizer.md

* Final updates for the Customize_Model_Optimizer.md

* Fixed some broken links

* More fixed links

* Refactored Custom Layers Guide: removed legacy and incorrect text, added up-to-date content.

* Draft implementation of the Custom layer guide example for the MO part

* Fixed broken links using #. Changed layer->operation in extensibility documents

* Updated Custom operation guide with IE part

* Fixed broken links and minor updates to the Custom Operations Guide

* Updating links

* Layer->Operation

* Moved FFTOp implementation to the template extension

* Update the CMake for template_extension to build the FFT op conditionally

* Fixed template extension compilation

* Fixed CMake for template extension

* Fixed broken snippet

* Added mri_demo script and updated documentation

* One more compilation error fix

* Added missing header for a demo file

* Added reference to OpenCV

* Fixed unit test for the template extension

* Fixed typos in the template extension

* Fixed compilation of template extension for case when ONNX importer is disabled

Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
Author: Evgeny Lazarev (committed by GitHub)
Date: 2021-01-14 16:28:53 +03:00
Commit: dbad8809bf (parent: 08afa4fd97)
46 changed files with 2391 additions and 2081 deletions


@@ -1,48 +1,21 @@
## Project structure
<pre>
|-- root
    |-- extensions
        |-- front/caffe
            |-- CustomLayersMapping.xml.example - example of a file for registering custom Caffe layers in the 2017R3 public manner
    |-- mo
        |-- back - Back-End logic: contains IR emitting logic
        |-- front - Front-End logic: contains matching between framework-specific layers and IR-specific ones, and calculation of output shapes for each registered layer
        |-- graph - Graph utilities to work with the internal IR representation
        |-- middle - Graph transformations: optimizations of the model
        |-- pipeline - Sequence of steps required to create IR for each framework
        |-- utils - Utility functions
    |-- tf_call_ie_layer - Sources for the TensorFlow fallback in Inference Engine during model inference
    |-- mo.py - Centralized entry point that can be used for any supported framework
    |-- mo_caffe.py - Entry point particularly for Caffe
    |-- mo_mxnet.py - Entry point particularly for MXNet
    |-- mo_tf.py - Entry point particularly for TensorFlow
    |-- ModelOptimizer - Entry point particularly for Caffe that contains the same CLI as the 2017R3 publicly released Model Optimizer
</pre>
## Prerequisites
Model Optimizer requires:
1. Python 3 or newer
2. [Optional] Please read about use cases that require Caffe\* to be available on the machine in the documentation.
## Installation instructions
1. Go to the Model Optimizer folder:
<pre>
cd PATH_TO_INSTALL_DIR/deployment_tools/model_optimizer
</pre>
2. Create a virtual environment and activate it. This option is strongly recommended as it creates a Python sandbox, and
dependencies for the Model Optimizer do not influence the global Python configuration, installed libraries, etc. At the
same time, a special flag ensures that system-wide Python libraries are also available in this sandbox. Skip this
step only if you do want to install all Model Optimizer dependencies globally:
* Create environment:
@@ -54,110 +27,28 @@ Model Optimizer requires:
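A minimal sketch of the creation step, assuming the <code>virtualenv</code> package is installed and a Python 3 interpreter lives at <code>/usr/bin/python3</code> (adjust the path for your system); <code>--system-site-packages</code> is the special flag mentioned above:
<pre>
virtualenv -p /usr/bin/python3 .env3 --system-site-packages
</pre>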
   * Activate it:
<pre>
. .env3/bin/activate
</pre>
3. Install dependencies. If you want to convert models only from a particular framework, you should use one of the
available <code>requirements_\*.txt</code> files corresponding to the framework of choice. For example, for Caffe
use <code>requirements_caffe.txt</code> and so on. When you decide to switch later to other frameworks, please
install dependencies for them using the same mechanism:
<pre>
pip3 install -r requirements.txt
</pre>
Alternatively, you can use the installation scripts from the <code>install_prerequisites</code> directory, as sketched below.
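A hypothetical invocation, assuming the directory follows the usual layout with an all-in-one script next to per-framework variants:
<pre>
cd install_prerequisites
./install_prerequisites.sh        # install dependencies for all frameworks
./install_prerequisites_caffe.sh  # or a per-framework variant
</pre>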
4. [OPTIONAL] If you use the Windows OS, you most likely have the pure Python version of the `protobuf` library,
which is known to be rather slow. You can use a boosted version of the library by building the .egg file
(Python package format) yourself, using the instructions below (section 'How to boost Caffe model loading')
for the target OS and Python, or install it with the pre-built .egg (it is built for Python 3.4, 3.5, 3.6, 3.7):
<pre>
python3 -m easy_install protobuf-3.6.1-py3.6-win-amd64.egg
</pre>
It overrides the protobuf Python package installed by the previous command.
Set an environment variable to enable the boosted protobuf implementation:
<pre>
set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
</pre>
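To check which implementation is actually active, one diagnostic sketch is to query protobuf's internal helper (an internal module, so not a stable API):
<pre>
python3 -c "from google.protobuf.internal import api_implementation; print(api_implementation.Type())"
</pre>
It prints <code>cpp</code> when the boosted implementation is in use and <code>python</code> otherwise.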
## Command-Line Interface (CLI)
The following short examples are framework-dependent. Please read the complete help
with the <code>--help</code> option for details across all frameworks:
<pre>
python3 mo.py --help
</pre>
There are several scripts that convert a model:
1. <code>mo.py</code> -- universal entry point that can convert a model from any supported framework
2. <code>mo_caffe.py</code> -- dedicated script for Caffe models conversion
3. <code>mo_mxnet.py</code> -- dedicated script for MXNet models conversion
4. <code>mo_tf.py</code> -- dedicated script for TensorFlow models conversion
5. <code>mo_onnx.py</code> -- dedicated script for ONNX models conversion
6. <code>mo_kaldi.py</code> -- dedicated script for Kaldi models conversion
<code>mo.py</code> can deduce the original framework in which the input model was trained from the extension of
the model file. Alternatively, the <code>--framework</code> option can be used for this purpose if model files
don't have standard extensions (<code>.pb</code> - for TensorFlow models, <code>.params</code> - for MXNet models,
<code>.caffemodel</code> - for Caffe models). So, the following commands are equivalent:
<pre>
python3 mo.py --input_model /user/models/model.pb
python3 mo.py --framework tf --input_model /user/models/model.pb
</pre>
The following examples illustrate the shortest command lines to convert a model per
framework.
### Convert TensorFlow model
To convert a frozen TensorFlow model contained in the binary file <code>model-file.pb</code>, run the
dedicated entry point <code>mo_tf.py</code>:
<pre>
python3 mo_tf.py --input_model model-file.pb
</pre>
### Convert Caffe model
To convert a Caffe model contained in <code>model-file.prototxt</code> and <code>model-file.caffemodel</code>, run the
dedicated entry point <code>mo_caffe.py</code>:
<pre>
python3 mo_caffe.py --input_model model-file.caffemodel
</pre>
### Convert MXNet model
To convert an MXNet model contained in <code>model-file-symbol.json</code> and <code>model-file-0000.params</code>, run the
dedicated entry point <code>mo_mxnet.py</code>:
<pre>
python3 mo_mxnet.py --input_model model-file
</pre>
> **NOTE**: For TensorFlow\*, all Placeholder ops are represented as Input layers in the final IR.
### Convert ONNX* model
The Model Optimizer assumes that you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
Use the <code>mo_onnx.py</code> script to convert a model, passing the path to the input <code>.onnx</code> model file:
<pre>
python3 mo_onnx.py --input_model model-file.onnx
</pre>
Input channel re-ordering, scaling, subtraction of mean values, and other preprocessing features
are not applied by default. To pass the necessary values to Model Optimizer, please run <code>mo.py</code>
(or <code>mo_tf.py</code>, <code>mo_caffe.py</code>, <code>mo_mxnet.py</code>) with <code>--help</code> and
examine all available options.
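As an illustration only (the flag names come from the tool's <code>--help</code> output; the mean and scale values here are arbitrary), a command that reverses input channels and normalizes inputs at conversion time might look like this:
<pre>
python3 mo.py --input_model model.pb --reverse_input_channels --mean_values [123.68,116.78,103.94] --scale 255
</pre>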
## Working with Inference Engine
At the moment, Inference Engine is the only consumer of the IR models that Model Optimizer produces.
The whole workflow and more documentation on the structure of the IR are available in the Developer Guide
of Inference Engine. Note that sections about running Model Optimizer refer to the old version
of the tool and cannot be applied to the current version of Model Optimizer.
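As a rough sketch of that consumption path (file names are placeholders, and the snippet assumes the <code>IECore</code> Python API shipped with Inference Engine at the time of writing), loading an IR produced by Model Optimizer looks roughly like this:
<pre>
from openvino.inference_engine import IECore

ie = IECore()
# Read the IR pair emitted by Model Optimizer
net = ie.read_network(model='model.xml', weights='model.bin')
# Compile the network for a target device
exec_net = ie.load_network(network=net, device_name='CPU')
</pre>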
## Setup development environment
@@ -185,14 +76,6 @@ of the tool and can not be applied to the current version of Model Optimizer.
1. Run the following command:
<pre>
pylint mo/ extensions/ mo.py
</pre>
### How to check requirements dependencies
1. Run the following command:
<pre>
cat requirements_file | docker run -i --rm pyupio/safety safety check --stdin
</pre>
> **NOTE**: here <code>requirements_file</code> is one of the following: <code>requirements.txt</code>, <code>requirements_caffe.txt</code>, <code>requirements_tf.txt</code>, <code>requirements_tf2.txt</code>, <code>requirements_mxnet.txt</code>, <code>requirements_dev.txt</code>.
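For instance, substituting the Caffe requirements file gives:
<pre>
cat requirements_caffe.txt | docker run -i --rm pyupio/safety safety check --stdin
</pre>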