Project structure

    |-- root
        |-- extensions
            |-- front/caffe
                |-- CustomLayersMapping.xml.example - example file for registering custom Caffe layers (2017R3 public manner)
        |-- mo
            |-- back - Back-End logic: contains IR emitting logic
            |-- front - Front-End logic: contains matching between framework-specific layers and IR-specific layers, calculation of output shapes for each registered layer
            |-- graph - Graph utilities to work with internal IR representation
            |-- middle - Graph transformations - optimizations of the model
            |-- pipeline - Sequence of steps required to create IR for each framework
            |-- utils - Utility functions
        |-- tf_call_ie_layer - Sources for TensorFlow fallback in Inference Engine during model inference
        |-- mo.py - Centralized entry point that can be used for any supported framework
        |-- mo_caffe.py - Entry point particularly for Caffe
        |-- mo_mxnet.py - Entry point particularly for MXNet
        |-- mo_tf.py - Entry point particularly for TensorFlow
        |-- ModelOptimizer - Entry point particularly for Caffe that contains the same CLI as the 2017R3 publicly released Model Optimizer

Prerequisites

Model Optimizer requires:

  1. Python 3 or newer

  2. [Optional] Read about the use cases that require Caffe to be available on the machine (:doc:caffe_dependency) and follow the build steps described there (:doc:caffe_build).
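
A quick sanity check of the first prerequisite can be done with a minimal sketch using only the standard library:

```python
import sys

# Prints True when the interpreter satisfies the "Python 3 or newer"
# prerequisite listed above.
print(sys.version_info[0] >= 3)  # -> True on any Python 3 interpreter
```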

Installation instructions

  1. Go to the Model Optimizer folder:
    cd PATH_TO_INSTALL_DIR/deployment_tools/model_optimizer/model_optimizer_tensorflow
  2. Create a virtual environment and activate it. This option is strongly recommended: it creates a Python sandbox, so Model Optimizer dependencies do not influence the global Python configuration, installed libraries, etc. At the same time, a special flag ensures that system-wide Python libraries are also available in this sandbox. Skip this step only if you do want to install all Model Optimizer dependencies globally:

    • Create environment:
          virtualenv -p /usr/bin/python3.6 .env3 --system-site-packages
        
    • Activate it:
        . .env3/bin/activate
      
  3. Install dependencies. If you want to convert models from only one particular framework, use one of the available requirements_*.txt files corresponding to the framework of choice; for example, for Caffe use requirements_caffe.txt. If you later decide to switch to other frameworks, install their dependencies using the same mechanism:

     pip3 install -r requirements.txt
     
  4. [Optional] If you use Windows, you most probably get the pure-Python version of the protobuf library. It is known to be rather slow, and you can use a boosted version of the library by building the .egg file (Python package format) yourself using the instructions below (section 'How to boost Caffe model loading') for the target OS and Python version, or install the pre-built .egg (built for Python 3.4, 3.5, 3.6, and 3.7):

         python3 -m easy_install protobuf-3.6.1-py3.6-win-amd64.egg
     

    It overrides the protobuf Python package installed by the previous command.

    Set the environment variable to enable the boosted protobuf implementation:

         set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
     
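
To see which value will be picked up, here is a minimal sketch; it only inspects the environment variable set above (protobuf's actual backend selection also depends on how the package was built):

```python
import os

def protobuf_backend():
    # Reads the variable described above; 'cpp' requests the boosted
    # C++ implementation, anything else leaves the pure-Python one.
    return os.environ.get('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION', 'python')

os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'cpp'
print(protobuf_backend())  # -> cpp
```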

Command-Line Interface (CLI)

The following short examples are framework-dependent. For details across all frameworks, read the complete help with the --help option:

    python3 mo.py --help

There are several scripts that convert a model:

  1. mo.py -- universal entry point that can convert a model from any supported framework

  2. mo_caffe.py -- dedicated script for converting Caffe models

  3. mo_mxnet.py -- dedicated script for converting MXNet models

  4. mo_tf.py -- dedicated script for converting TensorFlow models

  5. mo_onnx.py -- dedicated script for converting ONNX models

  6. mo_kaldi.py -- dedicated script for converting Kaldi models

mo.py can deduce the original framework in which the input model was trained from the extension of the model file. Alternatively, the --framework option can be used for this purpose when model files do not have standard extensions (.pb for TensorFlow models, .params for MXNet models, .caffemodel for Caffe models). So, the following commands are equivalent:

    python3 mo.py --input_model /user/models/model.pb
    python3 mo.py --framework tf --input_model /user/models/model.pb
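
The deduction rule above can be sketched as follows (a simplified illustration, not Model Optimizer's actual code):

```python
import os

# Simplified illustration of the extension-based framework deduction
# described above; not Model Optimizer's actual implementation.
EXTENSION_TO_FRAMEWORK = {
    '.pb': 'tf',             # TensorFlow
    '.params': 'mxnet',      # MXNet
    '.caffemodel': 'caffe',  # Caffe
    '.onnx': 'onnx',         # ONNX
}

def deduce_framework(model_path):
    """Return the framework id for a model file, or None when the
    extension is not standard (use --framework in that case)."""
    _, ext = os.path.splitext(model_path)
    return EXTENSION_TO_FRAMEWORK.get(ext)

print(deduce_framework('/user/models/model.pb'))  # -> tf
```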

The following examples illustrate the shortest command lines to convert a model per framework.

Convert TensorFlow model

To convert a frozen TensorFlow model contained in the binary file model-file.pb, run the dedicated entry point mo_tf.py:

    python3 mo_tf.py --input_model model-file.pb

Convert Caffe model

To convert a Caffe model contained in model-file.prototxt and model-file.caffemodel, run the dedicated entry point mo_caffe.py:

    python3 mo_caffe.py --input_model model-file.caffemodel

Convert MXNet model

To convert an MXNet model stored in model-file-symbol.json and model-file-0000.params, run the dedicated entry point mo_mxnet.py:

    python3 mo_mxnet.py --input_model model-file

Note: for TensorFlow*, all Placeholder ops are represented as Input layers in the final IR.

Convert ONNX* model

The Model Optimizer assumes that you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.

Use the mo_onnx.py script to convert a model, passing the path to the input .onnx model file:

    python3 mo_onnx.py --input_model model-file.onnx

Input channel re-ordering, scaling, subtraction of mean values, and other preprocessing features are not applied by default. To pass the necessary values to Model Optimizer, run mo.py (or mo_tf.py, mo_caffe.py, mo_mxnet.py) with --help and examine all available options.
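
As a hedged illustration of what the mean-value and scale preprocessing options compute per input value (the numbers below are hypothetical, and the real tool embeds these operations into the generated IR rather than running Python):

```python
def normalize(pixel, mean, scale):
    """The common (pixel - mean) / scale normalization that the
    preprocessing options described above can apply to inputs."""
    return (pixel - mean) / scale

# Hypothetical 8-bit pixel value normalized to the [-1, 1] range.
print(normalize(255.0, 127.5, 127.5))  # -> 1.0
```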

Working with Inference Engine

At the moment, Inference Engine is the only consumer of the IR models that Model Optimizer produces. The whole workflow and more documentation on the structure of IR are provided in the Inference Engine Developer Guide. Note that its sections about running Model Optimizer refer to an old version of the tool and do not apply to the current version of Model Optimizer.

Setup development environment

How to run unit tests

  1. Run tests with:
    python -m unittest discover -p "*_test.py" [-s PATH_TO_DIR]

How to capture unit test coverage

  1. Run tests with:
    coverage run -m unittest discover -p "*_test.py" [-s PATH_TO_DIR]
  2. Build the HTML report:
    coverage html

How to run code linting

  1. Run the following command:
    pylint mo/ mo.py

How to check requirements dependencies

  1. Run the following command:
    cat requirements_file | docker run -i --rm pyupio/safety safety check --stdin

Note: here requirements_file is one of the following: requirements.txt, requirements_caffe.txt, requirements_tf.txt, requirements_tf2.txt, requirements_mxnet.txt, requirements_dev.txt.