
Prerequisites

Model Optimizer requires:

  1. Python 3 or newer

  2. [Optional] See the documentation for use cases that require Caffe* to be available on the machine.

Installation instructions

  1. Go to the Model Optimizer folder:
    cd PATH_TO_INSTALL_DIR/deployment_tools/model_optimizer
  2. Create a virtual environment and activate it. This step is strongly recommended: it creates a Python sandbox, so the Model Optimizer dependencies do not affect the global Python configuration or installed libraries. The --system-site-packages flag makes system-wide Python libraries available inside the sandbox as well. Skip this step only if you want to install all Model Optimizer dependencies globally:

    • Create environment:
          virtualenv -p /usr/bin/python3.6 .env3 --system-site-packages
        
    • Activate it:
        . .env3/bin/activate
      
  3. Install dependencies. To convert models from only one particular framework, use the corresponding requirements_*.txt file; for example, use requirements_caffe.txt for Caffe. If you later decide to switch to other frameworks, install their dependencies using the same mechanism:

    pip3 install -r requirements.txt
    

    Or you can use the installation scripts from the "install_prerequisites" directory.
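A framework-specific install can be sketched as below. FRAMEWORK is a placeholder chosen for this example; the file names follow the requirements_*.txt pattern described above:

```shell
# Illustrative sketch: pick the requirements file for a single framework.
# Valid suffixes correspond to the shipped requirements_*.txt files.
FRAMEWORK=caffe
REQ_FILE="requirements_${FRAMEWORK}.txt"
echo "Installing from ${REQ_FILE}"
# pip3 install -r "${REQ_FILE}"   # uncomment to actually install
```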

  4. [OPTIONAL] On Windows, you most likely have the pure-Python implementation of the protobuf library, which is known to be rather slow. You can switch to a boosted version of the library either by building the .egg file (Python package format) yourself for your target OS and Python, using the instructions below (section 'How to boost Caffe model loading'), or by installing a pre-built .egg (built for Python 3.4, 3.5, 3.6, and 3.7):

         python3 -m easy_install protobuf-3.6.1-py3.6-win-amd64.egg
    

    This overrides the protobuf Python package installed by the previous command.

    Set the environment variable to enable the boosted protobuf implementation:

         set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
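To verify which protobuf backend is actually in use, you can query protobuf's internal api_implementation module from Python. This is a diagnostic sketch only: api_implementation is an internal module, not a supported API.

```python
# Diagnostic sketch: report which protobuf backend Python loaded.
# api_implementation is an internal protobuf module, so treat this as
# a best-effort check rather than a supported API.
from google.protobuf.internal import api_implementation

impl = api_implementation.Type()  # 'cpp' when a boosted build is active
print("protobuf implementation:", impl)
```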
    

Setup development environment

How to run unit-tests

  1. Run tests with:
    python -m unittest discover -p "*_test.py" [-s PATH_TO_DIR]
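The discovery pattern above matches any file named *_test.py. A minimal test file of that shape looks like the following; the helper function and names are hypothetical, for illustration only:

```python
# example_test.py -- a minimal file that the discovery pattern above picks up.
import unittest


def flatten_shape(shape):
    # Hypothetical helper: total element count of a tensor shape.
    result = 1
    for dim in shape:
        result *= dim
    return result


class FlattenShapeTest(unittest.TestCase):
    def test_flatten_shape(self):
        self.assertEqual(flatten_shape([1, 3, 224, 224]), 150528)


if __name__ == "__main__":
    unittest.main()
```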

How to capture unit-tests coverage

  1. Run tests with:
    coverage run -m unittest discover -p "*_test.py" [-s PATH_TO_DIR]
  2. Build html report:
    coverage html

How to run code linting

  1. Run the following command:
    pylint mo/ extensions/ mo.py