openvino/model-optimizer/extensions/ops/interpolate.py
Evgeny Lazarev 3775dad345 MO dynamic shapes support (#5918)
* Allow MO to generate IR with -1 in dimensions

* Some fixes to support -1 for StridedSlice operation

* Updated TensorArrayGatherV3 shape infer to support dynamic output shape

* Several fixes to support undefined dimensions in the Broadcast, Reshape, Slice and Tile

* Fixed bug in the normalization transformation of TF NMS to opset NMS

* Updated shape infer functions related to StridedSlice and NMS

* Updated Select shape inference function to use common shape broadcasting function supporting dynamism

* Fixed operation TFResize shape infer function to work correctly for case when model is converted with --disable_nhwc_to_nchw

* Dynamic Range and update asserts in NMS

* Changed the way dynamic dimensions are specified. Refactored shape inference functions and common places to use the new approach

* More fixes to support dynamic shapes

* More fixes for support of dynamic shapes

* Fixed generation of IR with dynamic dimensions

* Allow reading IRs with undefined dimensions

* More changes in the IE to support dynamic dimensions

* Fixes for Switch, Merge, Concat shape and value infer related to dynamism

* Fixed TensorArray related ops to properly handle dynamic dimensions. Fixed StridedSlice infer for case with new_axis

* Fixed shape_for_layout function to generate masked array

* Fixed shape inference for Convolution and Poolings to support dynamic spatial dimensions

* Updated shape infer functions for CTCGreedyDecoder, CTCLoss and Enter

* Fixed shape inference with dynamic dimensions for MatMul, Split, Upsample, SpaceToBatch, some fixes for the TI

* Fixes for undefined dimensions support for Proposal and DetectionOutput

* Fixed ExtractImagePatches, DepthToSpace and RegionYolo shape infer functions to work with partially dynamic dimensions

* Changes in tf_window_op_pad_infer to better work with dynamic dimensions

* Fixed output shape calculation for StridedSlice operation

* More StridedSlice fixes

* Fixed resolve_convolution_with_group

* Fixed unit tests

* Fixed unit tests

* Fixed Switch op unit tests

* Fixed shape inference for Upsample operation

* Updated unit tests for the Concat operation

* Fixed eltwise shape infer unit tests

* Fixed shape infer tests for Convolution and DetectionOutput ops

* Fixed Crop shape infer function tests

* Fixed Slice op unit test and minor fix in the shape inference. Fixed emitter

* Updated unit test for telemetry and match_shape function for dynamism

* Fixed unit test for the DetectionOutput

* Added support for the TF ClipByValue operation

* Fixed GatherND shape inference for dynamic shapes support

* Dynamic shapes support for the MO IR Reader

* Fixed BlockLSTM operation to not work as an extractor

* Allow to serialize IRs with partially defined shapes

* Updated SelectBroadcast transformation to not check shape values

* Fixed MO IR comparator

* Fixed SS value propagation when slices are dynamic

* Do not re-run graph clean-up for ProposalMutation

* Fixed InterpolateSequenceToInterpolate transformation to support dynamic dimensions

* Fixed Loop iteration count calculation and reading IteratorGetNext shapes

* Fixed unit test for serialization

* Fixed serialization test

* Fixed RandomUniform shape infer

* Fixed several transformations related to RNN to respect dynamic output shapes

* Fixed Deconvolution shape calculation for dynamic batch. Eltwise shape infer improvements

* Fixed shape infer functions for ExperimentalDetectron ops, reverted changes for NonZero and removed debug prints

* Fixed check for dynamism of a list, fixed value propagation for Concat op and remove redundant shape infer for reshape

* Update Eltwise value propagation to use np.ma

* Fixed ExpandDims shape infer function

* Shape infer functions fixes and improvements

* Remove Accum op from the MO

* Updated activation functions shape infer

* Removed unsupported operation Correlation

* Fixed shape infers for several functions

* Removed unsupported DataAugmentation operation

* Fixed shape infer functions for several ops in extensions directory

* Removed unsupported operation PowerFile

* Removed unsupported SpatialTransformer, SimplerNMS and PredictionHeatmap operations

* More shape infer functions updates

* Merge shape infer fix

* Fixed typo

* Fixed TensorArraySize shape infer function

* Fixed VariadicSplit and Squeeze shape infer

* Fixed ONNX models Parameter extractor

* Updated Select value propagation for the dynamic case

* Fixed ReorgYolo shape infer and test

* Removed unnecessary tests

* Fixed Tile shape infer

* Fixed SparseFillEmptyRows unit tests

* Fixed package bom

* Added extractor for the TF operation Mod

* Fixed value propagation for MatMul operation

* Updated Parameter extender to generate shape_array when shape is partially defined only

* Fixed BOM file

* Fixed issue with the TF OD API models and DetectionOutput op. Now the shape infer function for the DO does not re-infer the "num_classes" attribute value if it is already known

* Fixed unit test for the DO infer

* Fixed num classes calculation for the DO generation for Faster/Mask-RCNN models

* Changed NMS op to produce static output shape

* Restore dynamic output shape calculation for the NMS for NMS-5

* Fixed CellNormalizer transformation. It should work for static shapes only

* RNNCell Op class fixes

* Revert some changes

* Updated documentation with a list of supported operations

* Revert changes

* Fixes for the ConstantFill op

* Removed redundant SequenceLengthToMask transformation

* TensorArray* ops shape infer code style and refactoring

* Reverted some unnecessary changes in the ConvolutionNormalizer

* Fixes and unit tests for shape_array, compare_shapes, is_fully_defined functions

* Implemented shape_insert, shape_delete functions and tests for them

* Modified code to use shape_delete function

* Added usage of shape_insert function where necessary

* Use shape_insert function in many places

* Some fixes in shape inference for various ops

* Updated shape_delete function to support negative indices

* Changes and unit tests for the MatMul infer function

* Removed strange code from the TF Merge infer function

* Merge op shape infer fixes

* Fixed value propagation in the transformation EltwiseInputReshape.py for the dynamic dimension case

* Code cleanup

* Updated GatherND to support dynamic dimensions

* Minor fixes

* Fixed shape_insert and shape_delete to support np.int64 and np.int32 types

* Updated Upsample operation unit tests with dynamic input shapes

* Minor change in the extensions/back/ConvolutionNormalizer.py to make sure that input dimensions are static

* Fixed ConvertGroupedStridedSlice transformation and added unit tests

* Revert debug changes

* Fixed value propagation for Unsqueeze to work with partially defined input values

* Typo fix

* Added unit tests for the Unsqueeze op shape infer

* broadcasting functions changes and unit tests

* Fixed Tile value inference for partially defined input tensor

* Unit tests for Split and VariadicSplit ops

* Fixes for the Concat infer + unit tests

* Removed redundant tf_pack shape infer

* Fixed Concat value infer and added unit tests

* Fixed StridedSlice shape inference for case with dynamic slices

* Fixes related to StridedSlice shape infer, changes in tests

* Unit tests for the eltwise shape and value infer

* Fixed Pad op value propagation to allow dynamic input values to be propagated

* Unit test for Pooling dynamic input shape infer

* Squeeze op unit tests for dynamic input shape

* Added assert to the Squeeze op shape infer for case when squeeze dimension is dynamic value

* Added message to the MO when input shapes are dynamic

* Convolution dynamic unit test

* Removed redundant transformation GroupedConvWeightsNormalize

* Removed non-ascii character from the message

* Fixed typo in the BOM file

* Code style and comment fixes

* Fixed copy-paste issue in the DO shape infer function

* Fixed setting dynamic shape in the MO command line

* Added function to compare tensor with dynamic values. Fixes in the unit tests and shape infer functions

* Improved Reshape shape infer + added unit tests

* Fixed value propagation for Select op

* Renamed several internal functions, minor code fixes.

* Code style fixes

* Modified condition in the _set_shape method of the Port class to not check shape if the "override_output_shape" attribute is specified

* Fixed constant value propagation for ReduceOps when inputs have dynamic values. Added unit test

* Fixed shape infer for the Loop for dynamic dimensions case

* Fix in the NMS shape infer to avoid ragged numpy array generation. Fixed Scatter shape infer validation

* Improved shapes infer for eltwise ops with respect to dynamic dimensions

* Changed code comments

* Renamed tensor names in the ClipByValueTFTransformation

* Changed np.ma.allequal to strict_compare_tensors in the Merge op infer

* Changed np.ma.allequal to strict_compare_tensors

* Fixed Merge op value infer

* Fixed debug code

* Removed commented line

* Updated condition to check for dynamic shapes in the Partial infer to not fail for MxNet models

* Improvements to the get_shape_from_slice and is_dynamic_slice functions

* Reverted change in the `normalize_slices_attr` for ellipsis mask case

* Updated shape conditions in the ScatterNDBase op to support dynamic dimensions

* Crop op file refactoring

* Set "type" attribute to None for SparseFillEmptyRows op which is not from any opset

* Removed unnecessary extractor test

* Restored Crop operation type

* Removed "type" attribute from the Crop operation and updated the MO code to find Crop by "op" attribute

* Fixed If shape infer function to produce dynamic dimensions

* Updated If shape and value infer to properly work when condition is static

* Fixed fusing transformation check to work with dynamic dimensions. Change comparison in the shape_inference function to not use strict shapes comparison

* Optimize imports in the LayerNorm

* ConvertGroupedStridedSlice minor fixes related to dynamism support

* Fixed ConvertGroupedStridedSlice to properly check if the dimension is sliced
2021-09-01 14:35:06 +03:00
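Several of the commits above describe the core mechanism of this PR: representing dynamic (unknown) dimensions inside shape arrays via numpy masked arrays, with helpers such as `shape_array` and `is_fully_defined`. The sketch below is a minimal, self-contained illustration of that idea; the sentinel value and the exact helper signatures are assumptions for illustration, not the Model Optimizer's actual definitions.

```python
import numpy as np

# Hypothetical sentinel for an unknown dimension, standing in for the
# dynamic_dimension_value the commits refer to.
DYNAMIC_VALUE = np.iinfo(np.int64).max


def shape_array(values):
    # Build an int64 masked array where sentinel entries are masked, so
    # arithmetic on shapes propagates "unknown" instead of a bogus number.
    return np.ma.masked_equal(np.array(values, dtype=np.int64), DYNAMIC_VALUE)


def is_fully_defined(shape):
    # A shape is fully defined when no dimension is masked (dynamic).
    return not np.ma.is_masked(shape)


dynamic_shape = shape_array([1, DYNAMIC_VALUE, 224, 224])
static_shape = shape_array([1, 3, 224, 224])
print(is_fully_defined(dynamic_shape))  # False
print(is_fully_defined(static_shape))   # True
```

The masked-array approach lets existing elementwise shape arithmetic (addition of pads, broadcasting comparisons) keep working, because numpy automatically masks any result that depends on a masked operand.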


# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import math

import numpy as np

from mo.front.common.partial_infer.utils import int64_array, dynamic_dimension, dynamic_dimension_value
from mo.front.extractor import bool_to_str
from mo.graph.graph import Node, Graph
from mo.graph.perm_inputs import PermuteInputs
from mo.ops.op import Op, PermuteAttrs


def infer_for_opset4(node: Node):
    assert len([p for p in node.in_ports().values() if not p.disconnected()]) in [3, 4], \
        "Interpolate-4 node {} must have 3 or 4 inputs".format(node.soft_get('name', node.id))
    assert node.has_valid('mode')
    assert node.has_valid('shape_calculation_mode')
    src_shape = node.in_port(0).data.get_shape()
    assert src_shape is not None
    input_rank = len(src_shape)

    pads_begin = correct_pad(node.soft_get('pads_begin', [0]), input_rank)
    pads_end = correct_pad(node.soft_get('pads_end', [0]), input_rank)
    node['pads_begin'] = pads_begin
    node['pads_end'] = pads_end

    if len(node.in_ports()) == 3:
        axes = list(range(0, input_rank))
    else:
        axes = node.in_port(3).get_source().data.get_value()
        assert axes is not None, \
            "Interpolate-4 node with name {} has None as 'axes' input".format(node.soft_get('name', node.id))

    axes = int64_array(axes)
    output_shape = src_shape + pads_begin + pads_end
    if node.shape_calculation_mode == 'sizes':
        dst_shape = node.in_port(1).data.get_value()
        assert dst_shape is not None
        correct_scales_using_dst_shape(node, dst_shape, src_shape, axes)
        for i, axis in enumerate(axes):
            output_shape[axis] = dst_shape[i]
    else:
        scales = node.in_port(2).data.get_value()
        assert scales is not None
        for i, axis in enumerate(axes):
            if output_shape[axis] is not dynamic_dimension and scales[i] is not dynamic_dimension:
                output_shape[axis] = math.floor(scales[i] * output_shape[axis] + 1.0e-5)
            else:
                output_shape[axis] = dynamic_dimension_value

    if node.is_in_port_connected(3):
        PermuteInputs().set_input_permutation(node.in_node(3), node, 'input:0', 'axis')

    node.out_port(0).data.set_shape(output_shape)


def infer_for_opset1(node: Node):
    assert len([p for p in node.in_ports().values() if not p.disconnected()]) == 2
    assert node.has_valid('mode')
    assert node.has_valid('axes')

    src_shape = node.in_port(0).data.get_shape()
    assert src_shape is not None
    dst_shape = node.in_port(1).data.get_value()
    assert dst_shape is not None

    output_shape = src_shape.copy()
    for ind, axis in enumerate(node.axes):
        output_shape[axis] = dst_shape[ind]

    node.out_port(0).data.set_shape(output_shape)

    PermuteAttrs.create_permute_attrs(node, attrs=[('axes', 'input:0')])


def pad_attribute_to_str(node: Node, attr: str):
    return ','.join(map(str, node[attr])) if node.has_valid(attr) else None


def correct_pad(pad, rank):
    pad_len = len(pad)
    if pad_len < rank:
        return np.pad(pad, (0, rank - pad_len), 'constant').astype(np.int64)
    elif pad_len > rank:
        return np.array(pad[: rank]).astype(np.int64)
    else:
        return np.array(pad, dtype=np.int64)


def correct_scales_using_dst_shape(node, dst_shape, src_shape, axes):
    scales_value = node.in_port(2).data.get_value()
    if scales_value is None or len(scales_value) != len(dst_shape):
        corrected_scales = np.zeros(len(dst_shape))
        for i, axis in enumerate(list(axes)):
            corrected_scales[i] = dst_shape[i] / src_shape[axis]
        node.in_port(2).data.set_value(corrected_scales)


class Interpolate(Op):
    op = 'Interpolate'
    enabled = False
    infers = {
        'opset1': infer_for_opset1,
        'opset4': infer_for_opset4
    }

    def __init__(self, graph: Graph, attrs: dict):
        self.attributes_for_opsets = {
            'opset1': [
                ('axes', lambda node: ','.join(map(str, node.axes))),
                ('antialias', lambda node: bool_to_str(node, 'antialias')),
                ('align_corners', lambda node: bool_to_str(node, 'align_corners')),
                'mode', 'pads_begin', 'pads_end',
            ],
            'opset4': [
                'mode', 'nearest_mode', 'cube_coeff', 'coordinate_transformation_mode',
                'shape_calculation_mode',
                ('antialias', lambda node: bool_to_str(node, 'antialias')),
                ('pads_begin', lambda node: pad_attribute_to_str(node, 'pads_begin')),
                ('pads_end', lambda node: pad_attribute_to_str(node, 'pads_end')),
            ]
        }

        mandatory_props = {
            'op': self.op,
            'type': self.op,
            'version': 'opset1',

            'axes': None,
            'mode': None,
            'align_corners': 0,
            'antialias': 0,
            'pads_begin': 0,
            'pads_end': 0,

            'infer': self.infer,
            'force_precision_in_ports': {1: 'int64'},

            'in_ports_count': 2,
            'out_ports_count': 1,
        }
        super().__init__(graph, mandatory_props, attrs)

    def supported_attrs(self):
        opset = self.get_opset()
        key = opset if opset in self.attributes_for_opsets else 'opset1'
        return self.attributes_for_opsets[key]

    def infer(self, node: Node):
        opset = self.get_opset()
        key = opset if opset in self.infers else 'opset1'
        self.infers[key](node)

    @staticmethod
    def get_axes(node: Node) -> np.ndarray:
        opset = node.get_opset()
        if opset == 'opset1':
            interp_axes = node.soft_get('axes', None)
            return interp_axes if interp_axes is None else int64_array(interp_axes)

        src_shape = node.in_port(0).data.get_shape()
        assert src_shape is not None
        input_rank = len(src_shape)

        if len(node.in_ports()) == 3:
            axes = list(range(0, input_rank))
        else:
            axes = node.in_port(3).get_source().data.get_value()
        return int64_array(axes)
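To make the opset4 shape math above concrete without pulling in the `mo` package, here is a self-contained sketch that restates `correct_pad` locally (copied from the function above) and walks the `scales` branch of `infer_for_opset4` by hand. The input shape, pads and scales are made-up example values.

```python
import math

import numpy as np


def correct_pad(pad, rank):
    # Local restatement of correct_pad above: pad short specs with zeros,
    # truncate long ones, and normalize the dtype to int64.
    pad_len = len(pad)
    if pad_len < rank:
        return np.pad(pad, (0, rank - pad_len), 'constant').astype(np.int64)
    elif pad_len > rank:
        return np.array(pad[: rank]).astype(np.int64)
    return np.array(pad, dtype=np.int64)


# Example NCHW input shape with assumed pads and scales.
src_shape = np.array([1, 3, 10, 10], dtype=np.int64)
pads_begin = correct_pad([0], 4)        # [0] is broadcast to full rank
pads_end = correct_pad([0, 0, 1, 1], 4)
padded = src_shape + pads_begin + pads_end  # [1, 3, 11, 11]

# 'scales' mode: out = floor(scale * padded_size + eps), as in infer_for_opset4.
scales = [1.0, 1.0, 2.0, 2.0]
output_shape = [math.floor(s * d + 1.0e-5) for s, d in zip(scales, padded)]
print(output_shape)  # [1, 3, 22, 22]
```

The `1.0e-5` epsilon guards against floating-point results like `21.999999…` being floored to the wrong size; the real infer function additionally short-circuits any axis where the padded size or scale is a dynamic dimension.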