Prerequisites

Model Optimizer requires:

  1. Python 3 or newer

  2. [Optional] Caffe*. See the documentation for the use cases that require Caffe* to be available on the machine.

Installation instructions

  1. Go to the Model Optimizer folder:
    cd PATH_TO_INSTALL_DIR/deployment_tools/model_optimizer
  2. Create a virtual environment and activate it. This step is strongly recommended: it creates a Python sandbox, so the Model Optimizer dependencies do not affect the global Python configuration, installed libraries, etc. At the same time, the --system-site-packages flag makes system-wide Python libraries available inside this sandbox. Skip this step only if you really want to install all Model Optimizer dependencies globally:

    • Create environment:
          virtualenv -p /usr/bin/python3.6 .env3 --system-site-packages
        
    • Activate it:
        . .env3/bin/activate
      
  3. Install dependencies. To convert models from only one particular framework, use the corresponding requirements_*.txt file; for example, use requirements_caffe.txt for Caffe, and so on. If you later decide to switch to another framework, install its dependencies the same way:

    pip3 install -r requirements.txt
    

    Alternatively, you can use the installation scripts from the "install_prerequisites" directory.

  4. [OPTIONAL] On Windows, you most likely have the pure-Python implementation of the protobuf library, which is known to be rather slow. You can switch to a faster version either by building the .egg file (Python package format) yourself, following the instructions below (section 'How to boost Caffe model loading') for your target OS and Python version, or by installing the pre-built .egg (available for Python 3.4, 3.5, 3.6, 3.7):

         python3 -m easy_install protobuf-3.6.1-py3.6-win-amd64.egg
    

    This overrides the protobuf Python package installed by the previous command.

    Set the environment variable to enable the faster C++ protobuf implementation:

         set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
    
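
    You can confirm from Python which backend the environment variable selects. A minimal sketch; the helper name protobuf_backend is made up for illustration, but PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION is the variable protobuf actually reads, and "python" mirrors its historical default when the variable is unset:

    ```python
    import os

    def protobuf_backend() -> str:
        """Return the protobuf backend selected through the environment."""
        # "python" is protobuf's historical fallback when the variable is unset
        return os.environ.get("PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION", "python")

    os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "cpp"
    print(protobuf_backend())  # -> cpp
    ```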

Setup development environment

How to run unit-tests

  1. Run tests with:
    python -m unittest discover -p "*_test.py" [-s PATH_TO_DIR]
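
  The same discovery can be driven programmatically through unittest's own API, which is handy in CI scripts. A minimal sketch; the throwaway shape_test.py file and its contents are invented for illustration:

    ```python
    import os
    import tempfile
    import textwrap
    import unittest

    # Programmatic equivalent of `python -m unittest discover -p "*_test.py"`,
    # demonstrated against a throwaway test file written into a temp directory.
    with tempfile.TemporaryDirectory() as tmp:
        sample = textwrap.dedent("""
            import unittest

            class ShapeTest(unittest.TestCase):
                def test_identity(self):
                    self.assertEqual([1, 3, 224, 224], [1, 3, 224, 224])
        """)
        with open(os.path.join(tmp, "shape_test.py"), "w") as f:
            f.write(sample)

        # discover() walks start_dir for files matching the pattern
        suite = unittest.TestLoader().discover(start_dir=tmp, pattern="*_test.py")
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        print(suite.countTestCases(), result.wasSuccessful())
    ```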

How to capture unit-tests coverage

  1. Run tests with:
    coverage run -m unittest discover -p "*_test.py" [-s PATH_TO_DIR]
  2. Build the HTML report:
    coverage html

How to run code linting

  1. Run the following command:
    pylint mo/ extensions/ mo.py