openvino/model-optimizer/extensions/middle/InterpolateSequenceToInterpolate.py
Evgeny Lazarev 3775dad345 MO dynamic shapes support (#5918)
2021-09-01 14:35:06 +03:00


# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import logging as log
from typing import List

import numpy as np

from extensions.ops.interpolate import Interpolate
from mo.front.common.partial_infer.utils import int64_array, shape_array
from mo.front.tf.graph_utils import create_op_with_const_inputs
from mo.graph.graph import Graph, Node, rename_nodes
from mo.middle.replacement import MiddleReplacementPattern
from mo.utils.error import Error
from mo.utils.utils import group_by_with_binary_predicate


def node_has_one_consumer(node: Node) -> bool:
    return len(node.out_port(0).get_destinations()) == 1


def is_next(first: Node, second: Node) -> bool:
    """
    This function checks whether 'first' is a predecessor of 'second'. The node 'first' is said to be
    a predecessor of the node 'second' if an output of 'first' is an input of 'second' and the number
    of destinations of 'first' is equal to 1 (or 'first' is marked with 'maybe_part_of_sequence' and
    has exactly two destinations, one of which is 'second').
    :param first: an Interpolate layer
    :param second: another Interpolate layer
    :return: True if 'first' is a predecessor of 'second', False otherwise.
    """
    dests = first.out_port(0).get_destinations()
    if node_has_one_consumer(first):
        return second.id == dests[0].node.id
    elif first.soft_get('maybe_part_of_sequence', False):
        return len(dests) == 2 and second.id in [d.node.id for d in dests]
    return False


class CanBeFused:
    def __init__(self):
        # We need to accumulate the set of axes of the compared nodes, because a sequence can
        # interpolate over axis sets such as {i}{j}{i}
        self.accumulated_axes = set()
        self.default_values_for_opset4 = {
            'mode': None,
            'shape_calculation_mode': None,
            'coordinate_transformation_mode': 'half_pixel',
            'nearest_mode': 'round_prefer_floor',
            'antialias': 0,
            'cube_coeff': -0.75
        }
        self.default_pads = int64_array([0])

    def _compare_attributes_of_interpolate1(self, first: Node, second: Node) -> bool:
        """
        This function checks whether attributes of Interpolate-1 nodes 'first' and 'second' are identical
        (except the attribute 'axes').
        :param first: the first of the compared nodes
        :param second: the second of the compared nodes
        :return: True if attributes of the nodes are identical, False otherwise
        """
        # If any of the attributes 'mode', 'align_corners', 'antialias', 'pads_begin', 'pads_end' differ,
        # then the attributes of the nodes are not identical.
        op = Interpolate(graph=first.graph, attrs={})
        for attr in ['mode', 'align_corners', 'antialias', 'pads_begin', 'pads_end']:
            if first.soft_get(attr, default=op.attrs[attr]) != second.soft_get(attr, default=op.attrs[attr]):
                return False
        return True

    def _compare_attributes_of_interpolate4(self, first: Node, second: Node) -> bool:
        """
        This function checks whether attributes of Interpolate-4 nodes 'first' and 'second' are identical.
        :param first: the first of the compared nodes
        :param second: the second of the compared nodes
        :return: True if attributes of the nodes are identical, False otherwise
        """
        # If any of the attributes 'mode', 'coordinate_transformation_mode', 'nearest_mode', 'antialias',
        # 'cube_coeff' differ, then the attributes of 'first' and 'second' are not identical.
        for attr in self.default_values_for_opset4.keys():
            default_value = self.default_values_for_opset4[attr]
            if first.soft_get(attr, default=default_value) != second.soft_get(attr, default=default_value):
                return False

        # If the attributes 'pads_begin' or 'pads_end' of the nodes differ, then the attributes
        # of 'first' and 'second' are not identical.
        for attr in ['pads_begin', 'pads_end']:
            if not np.array_equal(first.soft_get(attr, default=self.default_pads),
                                  second.soft_get(attr, default=self.default_pads)):
                return False
        return True

    def _compare_attributes(self, first: Node, second: Node) -> bool:
        """
        This function checks whether attributes of nodes 'first' and 'second' are identical
        (except the attribute 'axes').
        :param first: the first of the compared nodes
        :param second: the second of the compared nodes
        :return: True if attributes of the nodes are identical, False otherwise
        """
        # If the opsets of the nodes differ, then the nodes have different attributes.
        fst_opset = first.get_opset()
        snd_opset = second.get_opset()
        if fst_opset != snd_opset:
            return False

        if fst_opset not in ['opset1', 'opset4']:
            fst_name = first.soft_get('name', first.id)
            snd_name = second.soft_get('name', second.id)
            raise Error('Unsupported opset {} for nodes with names {} and {}'.format(fst_opset, fst_name, snd_name))

        if fst_opset == 'opset1':
            return self._compare_attributes_of_interpolate1(first, second)
        else:
            return self._compare_attributes_of_interpolate4(first, second)

    def __call__(self, first: Node, second: Node) -> bool:
        """
        This function checks whether the Interpolate nodes 'first' and 'second' can be fused.
        :param first: the first of the fused nodes
        :param second: the second of the fused nodes
        :return: True if the nodes can be fused, False otherwise
        """
        if not (is_next(first, second) and self._compare_attributes(first, second)):
            self.accumulated_axes = set()
            return False

        fst_axes = set([a for a in Interpolate.get_axes(first)])
        snd_axes = set([a for a in Interpolate.get_axes(second)])

        self.accumulated_axes = self.accumulated_axes | fst_axes

        # If the set of accumulated axes and the set of axes of 'second' do not intersect, then the nodes
        # can be fused, because interpolations with respect to different axes do not affect each other.
        if not (self.accumulated_axes & snd_axes):
            return True

        # Otherwise, the nodes cannot be fused.
        self.accumulated_axes = set()
        return False
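The axis-accumulation rule above can be illustrated without any MO graph machinery: a node can join the current fusion group only while the axes seen so far stay disjoint from its own axes. The sketch below is a simplified stand-in (the helper name `can_accumulate` is hypothetical and not part of the MO code base), mirroring only the set logic of `CanBeFused.__call__`:

```python
def can_accumulate(accumulated_axes, next_axes):
    """Return (fusable, new_accumulated_axes). Fusion is allowed only when the
    accumulated axes and the next node's axes are disjoint; on a clash the
    accumulator is reset, just as CanBeFused.__call__ resets accumulated_axes."""
    if not (set(accumulated_axes) & set(next_axes)):
        return True, set(accumulated_axes) | set(next_axes)
    return False, set()

# A sequence interpolating over axes {2}, {3}, {2}: the third node re-uses
# axis 2, so it cannot be fused with the first two.
acc = set()
ok1, acc = can_accumulate(acc, {2})   # True, acc == {2}
ok2, acc = can_accumulate(acc, {3})   # True, acc == {2, 3}
ok3, acc = can_accumulate(acc, {2})   # False, accumulator is reset
```

This disjointness check is why the transformation can safely merge consecutive interpolations: resizes along different axes commute, while a repeated axis would make the merged result depend on the order of application.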


def get_interpolate_attributes(node: Node) -> dict:
    opset_to_default_values = {
        'opset1': {
            'mode': None,
            'align_corners': 0,
            'antialias': 0,
            'pads_begin': 0,
            'pads_end': 0,
            'version': 'opset1'
        },
        'opset4': {
            'mode': None,
            'shape_calculation_mode': None,
            'antialias': 0,
            'pads_begin': int64_array([0]),
            'pads_end': int64_array([0]),
            'coordinate_transformation_mode': 'half_pixel',
            'nearest_mode': 'round_prefer_floor',
            'cube_coeff': -0.75,
            'version': 'opset4'
        },
    }
    opset = node.get_opset()
    result = {}
    if opset in opset_to_default_values:
        default_values = opset_to_default_values[opset]
        for attr in default_values.keys():
            value = node.soft_get(attr, default=default_values[attr])
            result[attr] = value
        return result
    else:
        raise Error('Unsupported opset {} for node with name {}.'.format(opset, node.soft_get('name', node.id)))


def replace_sequence(seq: List[Node], graph: Graph):
    """
    This function replaces a sequence of consecutive Interpolate layers with one Interpolate layer,
    if the modes of all nodes of the sequence are the same.
    :param seq: sequence of Interpolate layers
    :param graph: graph to which the nodes of seq belong
    :return: Nothing
    """
    if not seq:
        return
    if len(seq) == 1:
        return

    modes = set([n.mode for n in seq])
    if len(modes) != 1:
        return

    dims_and_scales_ = []
    # Each element of the list dims_and_scales_ is a pair
    #     (axis, output size for this axis)                              (opset1)
    # or
    #     (axis, output size for this axis, output scale for this axis)  (opset4)
    if seq[0].get_opset() == 'opset1':
        for interp in seq:
            dims_and_scales_.extend(zip(Interpolate.get_axes(interp),
                                        interp.in_port(1).get_connection().get_source().data.get_value()))

        axis_to_size = sorted(list(dict(dims_and_scales_).items()), key=lambda x: x[0])
        axes_of_node = int64_array([z[0] for z in axis_to_size])
        sizes = shape_array([z[1] for z in axis_to_size])
        scales = np.ones(len(axis_to_size))
    else:
        for interp in seq:
            dims_and_scales_.extend(zip(Interpolate.get_axes(interp),
                                        interp.in_port(1).get_connection().get_source().data.get_value(),
                                        interp.in_port(2).get_connection().get_source().data.get_value()))

        axis_to_size = sorted(dims_and_scales_, key=lambda x: x[0])
        axes_of_node = int64_array([z[0] for z in axis_to_size])
        sizes = shape_array([z[1] for z in axis_to_size])
        scales = np.array([z[2] for z in axis_to_size])

    fst_interp_node = seq[0]
    last_interp_node = seq[-1]
    last_interp_node_name = last_interp_node.soft_get('name', last_interp_node.id)
    attributes = get_interpolate_attributes(fst_interp_node)

    opset = fst_interp_node.get_opset()
    if opset == 'opset1':
        attributes['axes'] = axes_of_node
        interp_node = create_op_with_const_inputs(graph, Interpolate, {1: sizes}, attributes)

        fst_interp_connection = fst_interp_node.in_port(0).get_connection()
        fst_interp_connection.set_destination(interp_node.in_port(0))

        last_interp_node.out_port(0).get_connection().set_source(interp_node.out_port(0))
    else:
        attributes['in_ports_count'] = 4
        interp_node = create_op_with_const_inputs(graph, Interpolate,
                                                  {1: sizes, 2: scales, 3: axes_of_node},
                                                  attributes)

        fst_interp_connection = fst_interp_node.in_port(0).get_connection()
        fst_interp_connection.set_destination(interp_node.in_port(0))

        last_interp_node.out_port(0).get_connection().set_source(interp_node.out_port(0))

    rename_nodes([(last_interp_node, last_interp_node_name + '/delete'), (interp_node, last_interp_node_name)])


class InterpolateSequenceToInterpolate(MiddleReplacementPattern):
    """
    This transformation replaces a sequence of Interpolate layers with one Interpolate layer.
    """
    enabled = True

    def run_before(self):
        from extensions.middle.UpsampleToResample import UpsampleToResample
        return [UpsampleToResample]

    def find_and_replace_pattern(self, graph: Graph):
        log.debug('Enabled replacement of a sequence of Interpolate layers with one Interpolate layer.')
        interps = [n for n in graph.pseudo_topological_sort() if n.kind == 'op' and n.op == 'Interpolate']
        fuser = CanBeFused()

        sequences = group_by_with_binary_predicate(interps, fuser)
        for seq in sequences:
            replace_sequence(seq, graph)
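The heart of `replace_sequence` (opset1 branch) is a merge of per-node `(axis, output size)` pairs into one sorted axis-to-size mapping, where a later node's size for a given axis wins because `dict()` keeps the last value per key. That merge can be sketched in isolation on plain lists, without MO graph objects (the function name `merge_axis_sizes` is hypothetical, introduced here only for illustration):

```python
def merge_axis_sizes(per_node_pairs):
    """Merge lists of (axis, size) pairs coming from consecutive Interpolate
    nodes, mirroring the dict + sorted step of replace_sequence (opset1)."""
    dims_and_sizes = []
    for pairs in per_node_pairs:
        dims_and_sizes.extend(pairs)
    # dict() deduplicates by axis (last pair wins); sorting orders by axis.
    axis_to_size = sorted(dict(dims_and_sizes).items(), key=lambda x: x[0])
    axes = [a for a, _ in axis_to_size]
    sizes = [s for _, s in axis_to_size]
    return axes, sizes

# Two nodes: one resizes axis 3 to 100, the next resizes axis 2 to 50.
axes, sizes = merge_axis_sizes([[(3, 100)], [(2, 50)]])
# axes == [2, 3], sizes == [50, 100]
```

In the real transformation these two lists become the `axes` attribute and the constant sizes input of the single fused Interpolate node, so one operation performs all the per-axis resizes that the original sequence applied one at a time.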