Files
openvino/model-optimizer/extensions/ops/dft.py
Evgeny Lazarev 3775dad345 MO dynamic shapes support (#5918)
* Allow MO to generate IR with -1 in dimensions

* Some fixes to support -1 for StridedSlice operation

* Updated TensorArrayGatherV3 shape infer to support dynamic output shape

* Several fixes to support undefined dimensions in the Broadcast, Reshape, Slice and Tile

* Fixed bug in the normalization transformation of TF NMS to opset NMS

* Updated shape infer functions related to StridedSlice and NMS

* Updated Select shape inference function to use common shape broadcasting function supporting dynamism

* Fixed the TFResize operation shape infer function to work correctly when the model is converted with --disable_nhwc_to_nchw

* Dynamic Range support and updated asserts in NMS

* Changed the way dynamic dimensions are specified. Refactored shape inference functions and common code to use the new approach

* More fixes to support dynamic shapes

* More fixes for support of dynamic shapes

* Fixed generation of IR with dynamic dimensions

* Allow reading IRs with undefined dimensions

* More changes in the IE to support dynamic dimensions

* Fixes for Switch, Merge, Concat shape and value infer related to dynamism

* Fixed TensorArray related ops to properly handle dynamic dimensions. Fixed StridedSlice infer for case with new_axis

* Fixed shape_for_layout function to generate masked array

* Fixed shape inference for Convolution and Poolings to support dynamic spatial dimensions

* Updated shape infer functions for CTCGreedyDecoder, CTCLoss and Enter

* Fixed shape inference with dynamic dimensions for MatMul, Split, Upsample, SpaceToBatch, some fixes for the TI

* Fixes for undefined dimensions support for Proposal and DetectionOutput

* Fixed ExtractImagePatches, DepthToSpace and RegionYolo shape infer functions to work with partially dynamic dimensions

* Changes in tf_window_op_pad_infer to work better with dynamic dimensions

* Fixed output shape calculation for StridedSlice operation

* More StridedSlice fixes

* Fixed resolve_convolution_with_group

* Fixed unit tests

* Fixed unit tests

* Fixed Switch op unit tests

* Fixed shape inference for Upsample operation

* Updated unit tests for the Concat operation

* Fixed eltwise shape infer unit tests

* Fixed shape infer tests for Convolution and DetectionOutput ops

* Fixed Crop shape infer function tests

* Fixed Slice op unit test and minor fix in the shape inference. Fixed emitter

* Updated unit test for telemetry and match_shape function for dynamism

* Fixed unit test for the DetectionOutput

* Added support for the TF ClipByValue operation

* Fixed GatherND shape inference for dynamic shapes support

* Dynamic shapes support for the MO IR Reader

* Fixed BlockLSTM operation to not work as an extractor

* Allow to serialize IRs with partially defined shapes

* Updated SelectBroadcast transformation to not check shape values

* Fixed MO IR comparator

* Fixed StridedSlice value propagation when slices are dynamic

* Do not re-run graph clean-up for ProposalMutation

* Fixed InterpolateSequenceToInterpolate transformation to support dynamic dimensions

* Fixed Loop iteration count calculation and reading IteratorGetNext shapes

* Fixed unit test for serialization

* Fixed serialization test

* Fixed RandomUniform shape infer

* Fixed several transformations related to RNN to respect dynamic output shapes

* Fixed Deconvolution shape calculation for dynamic batch. Eltwise shape infer improvements

* Fixed shape infer functions for ExperimentalDetectron ops, reverted changes for NonZero and removed debug prints

* Fixed check for dynamism of a list, fixed value propagation for Concat op and remove redundant shape infer for reshape

* Update Eltwise value propagation to use np.ma

* Fixed ExpandDims shape infer function

* Shape infer functions fixes and improvements

* Remove Accum op from the MO

* Updated activation functions shape infer

* Removed unsupported operation Correlation

* Fixed shape infers for several functions

* Removed unsupported DataAugmentation operation

* Fixed shape infer functions for several ops in extensions directory

* Removed unsupported operation PowerFile

* Removed unsupported SpatialTransformer, SimplerNMS and PredictionHeatmap operations

* More shape infer functions updates

* Merge shape infer fix

* Fixed typo

* Fixed TensorArraySize shape infer function

* Fixed VariadicSplit and Squeeze shape infer

* Fixed ONNX models Parameter extractor

* Updated Select value propagation for the dynamic case

* Fixed ReorgYolo shape infer and test

* Removed unnecessary tests

* Fixed Tile shape infer

* Fixed SparseFillEmptyRows unit tests

* Fixed package bom

* Added extractor for the TF operation Mod

* Fixed value propagation for MatMul operation

* Updated Parameter extender to generate shape_array when shape is partially defined only

* Fixed BOM file

* Fixed issue with the TF OD API models and DetectionOutput op. Now the shape infer function for the DO does not re-infer the "num_classes" attribute value if it is already known

* Fixed unit test for the DO infer

* Fixed num classes calculation for the DO generation for Faster/Mask-RCNN models

* Changed NMS op to produce static output shape

* Restore dynamic output shape calculation for the NMS for NMS-5

* Fixed CellNormalizer transformation. It should work for static shapes only

* RNNCell Op class fixes

* Revert some changes

* Updated documentation with a list of supported operations

* Revert changes

* Fixes for the ConstantFill op

* Removed redundant SequenceLengthToMask transformation

* TensorArray* ops shape infer code style and refactoring

* Reverted some unnecessary changes in the ConvolutionNormalizer

* Fixes and unit tests for shape_array, compare_shapes, is_fully_defined functions

* Implemented shape_insert, shape_delete functions and tests for them

* Modified code to use shape_delete function

* Added usage of shape_insert function where necessary

* Use shape_insert function in many places

* Some fixes in shape inference for various ops

* Updated shape_delete function to support negative indices

* Changes and unit tests for the MatMul infer function

* Removed strange code from the TF Merge infer function

* Merge op shape infer fixes

* Fixed value propagation in the transformation EltwiseInputReshape.py for the dynamic dimension case

* Code cleanup

* Updated GatherND to support dynamic dimensions

* Minor fixes

* Fixed shape_insert and shape_delete to support np.int64 and np.int32 types

* Updated Upsample operation unit tests with dynamic input shapes

* Minor change in the extensions/back/ConvolutionNormalizer.py to make sure that input dimensions are static

* Fixed ConvertGroupedStridedSlice transformation and added unit tests

* Revert debug changes

* Fixed value propagation for Unsqueeze to work with partially defined input values

* Typo fix

* Added unit tests for the Unsqueeze op shape infer

* broadcasting functions changes and unit tests

* Fixed Tile value inference for partially defined input tensor

* Unit tests for Split and VariadicSplit ops

* Fixes for the Concat infer + unit tests

* Removed redundant tf_pack shape infer

* Fixed Concat value infer and added unit tests

* Fixed StridedSlice shape inference for case with dynamic slices

* Fixes related to StridedSlice shape infer, changes in tests

* Unit tests for the eltwise shape and value infer

* Fixed Pad op value propagation to allow dynamic input values to be propagated

* Unit test for Pooling dynamic input shape infer

* Squeeze op unit tests for dynamic input shape

* Added assert to the Squeeze op shape infer for case when squeeze dimension is dynamic value

* Added message to the MO when input shapes are dynamic

* Convolution dynamic unit test

* Removed redundant transformation GroupedConvWeightsNormalize

* Removed non-ascii character from the message

* Fixed typo in the BOM file

* Code style and comment fixes

* Fixed copy-paste issue in the DO shape infer function

* Fixed setting dynamic shape in the MO command line

* Added function to compare tensor with dynamic values. Fixes in the unit tests and shape infer functions

* Improved Reshape shape infer + added unit tests

* Fixed value propagation for Select op

* Renamed several internal functions, minor code fixes.

* Code style fixes

* Modified condition in the _set_shape method of the Port class to not check shape if the "override_output_shape" attribute is specified

* Fixed constant value propagation for ReduceOps when inputs have dynamic values. Added unit test

* Fixed shape infer for the Loop for dynamic dimensions case

* Fix in the NMS shape infer to avoid ragged numpy array generation. Fixed Scatter shape infer validation

* Improved shapes infer for eltwise ops with respect to dynamic dimensions

* Changed code comments

* Renamed tensor names in the ClipByValueTFTransformation

* Changed np.ma.allequal to strict_compare_tensors in the Merge op infer

* Replaced np.ma.allequal with strict_compare_tensors.

* Fixed Merge op value infer

* Fixed debug code

* Removed commented line

* Updated condition to check for dynamic shapes in the Partial infer to not fail for MxNet models

* Improvements to the get_shape_from_slice and is_dynamic_slice functions

* Reverted change in the `normalize_slices_attr` for ellipsis mask case

* Updated shape conditions in the ScatterNDBase op to support dynamic dimensions

* Crop op file refactoring

* Set "type" attribute to None for SparseFillEmptyRows op which is not from any opset

* Removed unnecessary extractor test

* Restored Crop operation type

* Removed "type" attribute from the Crop operation and updated the MO code to find Crop by "op" attribute

* Fixed If shape infer function to produce dynamic dimensions

* Updated If shape and value infer to properly work when condition is static

* Fixed fusing transformation check to work with dynamic dimensions. Change comparison in the shape_inference function to not use strict shapes comparison

* Optimize imports in the LayerNorm

* ConvertGroupedStridedSlice minor fixes related to dynamism support

* Fixed ConvertGroupedStridedSlice to properly check if the dimension is sliced
2021-09-01 14:35:06 +03:00

128 lines
4.7 KiB
Python

# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from mo.front.common.partial_infer.utils import int64_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
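

# Shared base class for the opset7 DFT and IDFT operations; the subclasses
# below differ only in the 'type'/'op' attributes they set.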
class FFTBase(Op):
    enabled = False
    op = None
    version = 'opset7'

    def __init__(self, graph: Graph, attrs: dict):
        mandatory_props = {
            'out_ports_count': 1,
            'in_ports_count': 3,
            'version': self.version,
            'infer': self.infer
        }
        super().__init__(graph, mandatory_props, attrs)

    def infer(self, node: Node):
        node_name = node.soft_get('name', node.id)
        assert len([p for p in node.in_ports().values() if not p.disconnected()]) in [2, 3], \
            '(I)DFT node {} must have 2 or 3 inputs'.format(node_name)

        src_shape = node.in_port(0).data.get_shape()
        assert src_shape is not None, 'The input data shape of (I)DFT node {} must not be None'.format(node_name)
        assert src_shape[-1] == 2, \
            'The last dimension of the input shape of (I)DFT node {} must be equal to 2'.format(node_name)

        input_rank = len(src_shape)
        assert input_rank >= 2, 'The input rank of (I)DFT node {} must be greater than or equal to 2'.format(node_name)

        axes = FFTBase.get_axes(node)
        assert input_rank >= len(axes) + 1, \
            'The input rank must be greater than the number of axes of (I)DFT node {}'.format(node_name)
        axes = FFTBase.canonicalize_axes(axes, input_rank)
        assert (input_rank - 1) not in axes, '(I)DFT node {} axes cannot contain the last axis'.format(node_name)
        assert len(set(axes)) == len(axes), '(I)DFT node {} axes must be unique.'.format(node_name)
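
        # The output shape matches the input shape except along the transformed
        # axes, where an explicitly provided signal_size overrides the dimension.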
        output_shape = src_shape.copy()
        if node.is_in_port_connected(2):
            signal_size = FFTBase.get_signal_size(node)
            signal_size = FFTBase.canonicalize_signal_size(signal_size, axes, src_shape)
            output_shape[axes] = signal_size
        node.out_port(0).data.set_shape(output_shape)

    @staticmethod
    def canonicalize_axes(axes, input_rank):
        """
        The FFT operation supports negative axes to transform. More precisely, according to the FFT operation
        specification, axes should be integers from -(r - 1) to (r - 2) inclusive, where r = rank(data).
        A negative axis 'a' is interpreted as the axis 'r - 1 + a'. The reason is the following: a real input
        tensor of the shape [n_0, ..., n_{r - 2}, 2] is interpreted as a complex tensor with the shape
        [n_0, ..., n_{r - 2}]. Hence, we need to 'canonicalize' axes using the formula 'r - 1 + a'.

        :param axes: axes to canonicalize
        :param input_rank: input tensor rank
        :return: canonicalized axes
        """
        result = axes.copy()
        for i, axis in enumerate(axes):
            if axis < 0:
                result[i] = axis + input_rank - 1
        return result

    @staticmethod
    def canonicalize_signal_size(signal_size, axes, input_shape):
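        # A signal_size entry of -1 means "keep the input dimension along the
        # corresponding axis"; all other entries are taken as-is.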
        result = signal_size.copy()
        for i, axis in enumerate(axes):
            size = signal_size[i]
            if size == -1:
                result[i] = input_shape[axis]
        return result

    @staticmethod
    def get_axes(node: Node):
        axes = node.in_port(1).get_source().data.get_value()
        node_name = node.soft_get('name', node.id)
        assert axes is not None, 'The input with axes is not constant for node {}'.format(node_name)
        return int64_array(axes)

    @staticmethod
    def get_signal_size(node: Node):
        src_shape = node.in_port(0).data.get_shape()
        assert src_shape is not None
        input_rank = len(src_shape)
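        # When the signal_size input (port 2) is absent, the default signal
        # sizes are the input dimensions along the transformed axes (negative
        # axes index the complex shape, i.e. the shape without the last dim).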
        if node.is_in_port_connected(2):
            signal_size = node.in_port(2).get_source().data.get_value()
        else:
            axes = FFTBase.get_axes(node)
            signal_size = [src_shape[: input_rank - 1][a] for a in axes]
        node_name = node.soft_get('name', node.id)
        assert signal_size is not None, 'The input with signal_size is not constant for node {}'.format(node_name)
        return int64_array(signal_size)


class DFT(FFTBase):
    op = 'DFT'
    enabled = False

    def __init__(self, graph: Graph, attrs: dict):
        mandatory_props = {
            'type': self.op,
            'op': self.op,
        }
        mandatory_props.update(attrs)
        super().__init__(graph, mandatory_props)


class IDFT(FFTBase):
    op = 'IDFT'
    enabled = False

    def __init__(self, graph: Graph, attrs: dict):
        mandatory_props = {
            'type': self.op,
            'op': self.op,
        }
        mandatory_props.update(attrs)
        super().__init__(graph, mandatory_props)
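
For reference, a minimal standalone sketch of the shape rule that FFTBase.infer implements above; the helper name and the example values are illustrative, not from the source:

import numpy as np

def dft_output_shape(src_shape, axes, signal_size=None):
    # src_shape includes the trailing complex dimension of size 2.
    rank = len(src_shape)
    # Canonicalize negative axes against the complex rank (rank - 1).
    axes = [a if a >= 0 else a + rank - 1 for a in axes]
    out = np.array(src_shape, dtype=np.int64)
    if signal_size is not None:
        # A signal_size of -1 keeps the input dimension along that axis.
        out[axes] = [src_shape[a] if s == -1 else s
                     for a, s in zip(axes, signal_size)]
    return out

# [8, 64, 64, 2] with axes [1, 2] and signal_size [32, -1] -> [8, 32, 64, 2]
print(dft_output_shape([8, 64, 64, 2], [1, 2], [32, -1]))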