MO dynamic shapes support (#5918)

* Allow MO to generate IR with -1 in dimensions

* Some fixes to support -1 for StridedSlice operation

* Updated TensorArrayGatherV3 shape infer to support dynamic output shape

* Several fixes to support undefined dimensions in the Broadcast, Reshape, Slice and Tile operations

* Fixed bug in the normalization transformation of TF NMS to opset NMS

* Updated shape infer functions related to StridedSlice and NMS

* Updated Select shape inference function to use common shape broadcasting function supporting dynamism

* Fixed the TFResize shape infer function to work correctly when the model is converted with --disable_nhwc_to_nchw

* Dynamic Range support and updated asserts in NMS

* Changed the way dynamic dimensions are specified; refactored shape inference functions and common code to use the new approach (see the masked-array sketch after this list)

* More fixes to support dynamic shapes

* More fixes for support of dynamic shapes

* Fixed generation of IR with dynamic dimensions

* Allow reading IRs with undefined dimensions

* More changes in the IE to support dynamic dimensions

* Fixes for Switch, Merge, Concat shape and value infer related to dynamism

* Fixed TensorArray related ops to properly handle dynamic dimensions. Fixed StridedSlice infer for case with new_axis

* Fixed shape_for_layout function to generate masked array

* Fixed shape inference for Convolution and Poolings to support dynamic spatial dimensions

* Updated shape infer functions for CTCGreedyDecoder, CTCLoss and Enter

* Fixed shape inference with dynamic dimensions for MatMul, Split, Upsample, SpaceToBatch, some fixes for the TI

* Fixes for undefined dimensions support for Proposal and DetectionOutput

* Fixed ExtractImagePatches, DepthToSpace and RegionYolo shape infer functions to work with partially dynamic dimensions

* Changes in tf_window_op_pad_infer to work better with dynamic dimensions

* Fixed output shape calculation for StridedSlice operation

* More StridedSlice fixes

* Fixed resolve_convolution_with_group

* Fixed unit tests

* Fixed unit tests

* Fixed Switch op unit tests

* Fixed shape inference for Upsample operation

* Updated unit tests for the Concat operation

* Fixed eltwise shape infer unit tests

* Fixed shape infer tests for Convolution and DetectionOutput ops

* Fixed Crop shape infer function tests

* Fixed Slice op unit test and minor fix in the shape inference. Fixed emitter

* Updated unit test for telemetry and match_shape function for dynamism

* Fixed unit test for the DetectionOutput

* Added support for the TF ClipByValue operation

* Fixed GatherND shape inference for dynamic shapes support

* Dynamic shapes support for the MO IR Reader

* Fixed the BlockLSTM operation so that it does not act as an extractor

* Allow to serialize IRs with partially defined shapes

* Updated SelectBroadcast transformation to not check shape values

* Fixed MO IR comparator

* Fixed StridedSlice value propagation when slices are dynamic

* Do not re-run graph clean-up for ProposalMutation

* Fixed InterpolateSequenceToInterpolate transformation to support dynamic dimensions

* Fixed Loop iteration count calculation and reading IteratorGetNext shapes

* Fixed unit test for serialization

* Fixed serialization test

* Fixed RandomUniform shape infer

* Fixed several transformations related to RNN to respect dynamic output shapes

* Fixed Deconvolution shape calculation for dynamic batch. Eltwise shape infer improvements

* Fixed shape infer functions for ExperimentalDetectron ops, reverted changes for NonZero and removed debug prints

* Fixed the check for dynamism of a list, fixed value propagation for the Concat op and removed redundant shape infer for Reshape

* Updated Eltwise value propagation to use np.ma (see the masked-array arithmetic sketch after this list)

* Fixed ExpandDims shape infer function

* Shape infer functions fixes and improvements

* Removed the Accum op from the MO

* Updated activation functions shape infer

* Removed unsupported operation Correlation

* Fixed shape infers for several functions

* Removed unsupported DataAugmentation operation

* Fixed shape infer functions for several ops in extensions directory

* Removed unsupported PowerFile operation

* Removed unsupported SpatialTransformer, SimplerNMS and PredictionHeatmap operations

* More shape infer functions updates

* Merge shape infer fix

* Fixed typo

* Fixed TensorArraySize shape infer function

* Fixed VariadicSplit and Squeeze shape infer

* Fixed ONNX models Parameter extractor

* Updated Select value propagation for the dynamic case

* Fixed ReorgYolo shape infer and test

* Removed unnecessary tests

* Fixed Tile shape infer

* Fixed SparseFillEmptyRows unit tests

* Fixed package BOM

* Added extractor for the TF operation Mod

* Fixed value propagation for MatMul operation

* Updated Parameter extender to generate shape_array only when the shape is partially defined

* Fixed BOM file

* Fixed issue with the TF OD API models and DetectionOutput op. Now the shape infer function for the DO does not re-infer the "num_classes" attribute value if it is already known

* Fixed unit test for the DO infer

* Fixed num classes calculation for the DO generation for Faster/Mask-RCNN models

* Changed NMS op to produce static output shape

* Restored dynamic output shape calculation for NMS-5

* Fixed CellNormalizer transformation. It should work for static shapes only

* RNNCell Op class fixes

* Revert some changes

* Updated documentation with a list of supported operations

* Revert changes

* Fixes for the ConstantFill op

* Removed redundant SequenceLengthToMask transformation

* TensorArray* ops shape infer code style and refactoring

* Reverted some unnecessary changes in the ConvolutionNormalizer

* Fixes and unit tests for shape_array, compare_shapes, is_fully_defined functions

* Implemented shape_insert and shape_delete functions and tests for them (see the shape_insert/shape_delete sketch after this list)

* Modified code to use shape_delete function

* Added usage of shape_insert function where necessary

* Use shape_insert function in many places

* Some fixes in shape inference for various ops

* Updated shape_delete function to support negative indices

* Changes and unit tests for the MatMul infer function

* Removed strange code from the TF Merge infer function

* Merge op shape infer fixes

* Fixed value propagation in the transformation EltwiseInputReshape.py for the dynamic dimension case

* Code cleanup

* Updated GatherND to support dynamic dimensions

* Minor fixes

* Fixed shape_insert and shape_delete to support np.int64 and np.int32 types

* Updated Upsample operation unit tests with dynamic input shapes

* Minor change in the extensions/back/ConvolutionNormalizer.py to make sure that input dimensions are static

* Fixed ConvertGroupedStridedSlice transformation and added unit tests

* Revert debug changes

* Fixed value propagation for Unsqueeze to work with partially defined input values

* Typo fix

* Added unit tests for the Unsqueeze op shape infer

* Broadcasting function changes and unit tests

* Fixed Tile value inference for partially defined input tensor

* Unit tests for Split and VariadicSplit ops

* Fixes for the Concat infer + unit tests

* Removed redundant tf_pack shape infer

* Fixed Concat value infer and added unit tests

* Fixed StridedSlice shape inference for case with dynamic slices

* Fixes related to StridedSlice shape infer, changes in tests

* Unit tests for the eltwise shape and value infer

* Fixed Pad op value propagation to allow dynamic input values to be propagated

* Unit test for Pooling dynamic input shape infer

* Squeeze op unit tests for dynamic input shape

* Added an assert to the Squeeze op shape infer for the case when the squeeze dimension is a dynamic value

* Added message to the MO when input shapes are dynamic

* Convolution dynamic unit test

* Removed redundant transformation GroupedConvWeightsNormalize

* Removed non-ascii character from the message

* Fixed typo in the BOM file

* Code style and comment fixes

* Fixed copy-paste issue in the DO shape infer function

* Fixed setting dynamic shape in the MO command line

* Added a function to compare tensors with dynamic values (see the comparison sketch after this list). Fixes in the unit tests and shape infer functions

* Improved Reshape shape infer + added unit tests

* Fixed value propagation for Select op

* Renamed several internal functions, minor code fixes.

* Code style fixes

* Modified condition in the _set_shape method of the Port class to not check shape if the "override_output_shape" attribute is specified

* Fixed constant value propagation for ReduceOps when inputs have dynamic values. Added unit test

* Fixed shape infer for the Loop for dynamic dimensions case

* Fix in the NMS shape infer to avoid ragged numpy array generation. Fixed Scatter shape infer validation

* Improved shapes infer for eltwise ops with respect to dynamic dimensions

* Changed code comments

* Renamed tensor names in the ClipByValueTFTransformation

* Changed np.ma.allequal to strict_compare_tensors in the Merge op infer

* Changed np.ma.allequal to strict_compare_tensors.

* Fixed Merge op value infer

* Fixed debug code

* Removed commented line

* Updated condition to check for dynamic shapes in the Partial infer to not fail for MxNet models

* Improvements to the get_shape_from_slice and is_dynamic_slice functions

* Reverted change in the `normalize_slices_attr` for ellipsis mask case

* Updated shape conditions in the ScatterNDBase op to support dynamic dimensions

* Crop op file refactoring

* Set "type" attribute to None for SparseFillEmptyRows op which is not from any opset

* Removed unnecessary extractor test

* Restored Crop operation type

* Removed "type" attribute from the Crop operation and updated the MO code to find Crop by "op" attribute

* Fixed If shape infer function to produce dynamic dimensions

* Updated If shape and value infer to properly work when condition is static

* Fixed fusing transformation check to work with dynamic dimensions. Changed the comparison in the shape_inference function to not use strict shape comparison

* Optimize imports in the LayerNorm

* ConvertGroupedStridedSlice minor fixes related to dynamism support

* Fixed ConvertGroupedStridedSlice to properly check if the dimension is sliced
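
The sketches below are illustrative additions, not part of the commit: they outline, under simplified assumptions, the helpers the commit messages refer to. This first one shows how a partially dynamic shape can be modelled with a NumPy masked array, where a masked entry stands for an unknown dimension; the names mirror the MO helpers `shape_array` and `is_fully_defined`, but the bodies are hypothetical.

```python
import numpy as np


def shape_array(dims):
    # Hypothetical, simplified helper: -1 marks a dynamic dimension and is stored
    # as a masked entry, so regular NumPy arithmetic still works on the static part.
    data = np.array([0 if d == -1 else d for d in dims], dtype=np.int64)
    mask = np.array([d == -1 for d in dims])
    return np.ma.masked_array(data, mask=mask)


def is_fully_defined(shape):
    # A shape is static only when none of its entries are masked.
    return not np.ma.is_masked(shape)


s = shape_array([1, -1, 224, 224])
print(s)                    # [1 -- 224 224]
print(is_fully_defined(s))  # False
```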
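
The next sketch shows why np.ma is convenient for element-wise value propagation: arithmetic on a masked array keeps unknown elements unknown instead of propagating a bogus placeholder value.

```python
import numpy as np

# A constant with one unknown element and a fully known constant.
a = np.ma.masked_array([2, 0, 4], mask=[False, True, False])
b = np.array([10, 20, 30])

# The unknown element stays unknown after the element-wise operation.
print(a + b)  # [12 -- 34]
```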
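
The shape_insert/shape_delete sketch below shows how rank-changing edits can be made without losing the dynamic-dimension mask (plain np.insert/np.delete would return an ordinary array and drop it). The signatures are assumptions for illustration, not the exact MO API.

```python
import numpy as np


def shape_insert(shape, pos, value):
    # np.ma.concatenate preserves the mask, unlike np.insert.
    return np.ma.concatenate((shape[:pos], np.ma.atleast_1d(value), shape[pos:]))


def shape_delete(shape, indices):
    # Negative indices are normalized against the current rank.
    indices = [i + len(shape) if i < 0 else i for i in np.atleast_1d(indices)]
    keep = [i for i in range(len(shape)) if i not in indices]
    return shape[keep]


s = np.ma.masked_array([0, 3, 224, 224], mask=[True, False, False, False])  # dynamic batch
print(shape_insert(s, 1, 1))  # [-- 1 3 224 224]
print(shape_delete(s, -1))    # [-- 3 224]
```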
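
Finally, the comparison sketch: shape infer code needs both a permissive check, where a dynamic dimension matches anything, and a strict one, where dynamic entries must coincide. The helpers below approximate compatible_dims and strict_compare_tensors; the real implementations may differ.

```python
import numpy as np


def compatible_dims(d1, d2):
    # A dynamic (masked) dimension is considered compatible with any value.
    return d1 is np.ma.masked or d2 is np.ma.masked or d1 == d2


def strict_compare_tensors(t1, t2):
    # Strict equality: the masks must match exactly, and so must the defined values.
    t1, t2 = np.ma.masked_array(t1), np.ma.masked_array(t2)
    if t1.shape != t2.shape:
        return False
    return bool(np.array_equal(np.ma.getmaskarray(t1), np.ma.getmaskarray(t2)) and
                np.array_equal(t1.filled(0), t2.filled(0)))


dyn = np.ma.masked_array([1, 0], mask=[False, True])
print(compatible_dims(dyn[1], 7))           # True: unknown matches any static value
print(strict_compare_tensors(dyn, [1, 7]))  # False: dynamic vs. static second entry
```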
Authored by Evgeny Lazarev on 2021-09-01 14:35:06 +03:00, committed by GitHub
parent 3081fac758, commit 3775dad345
240 changed files with 2669 additions and 3093 deletions

View File

@@ -179,6 +179,7 @@ Standard TensorFlow\* operations:
| BroadcastTo | No |
| Cast | No |
| Ceil | No |
| ClipByValue | No |
| Concat | No |
| ConcatV2 | No |
| Const | No |
@@ -253,6 +254,7 @@ Standard TensorFlow\* operations:
| Min | No |
| Minimum | No |
| MirrorPad | No |
| Mod | No |
| Mul | No |
| Neg | No |
| NextIteration | Supported only when it is fused to the TensorIterator layer |

View File

@@ -56,4 +56,4 @@ TEST_F(SerializationCleanupTest, SerializationShouldWorkWithDynamicFunction) {
// .xml & .bin files should be present
ASSERT_TRUE(std::ifstream(m_out_xml_path, std::ios::in).good());
ASSERT_TRUE(std::ifstream(m_out_bin_path, std::ios::in).good());
}
}

View File

@@ -18,14 +18,12 @@ extensions/back/ConvolutionNormalizer.py
extensions/back/CorrectName.py
extensions/back/CropToStridedSlice.py
extensions/back/CutMemory.py
extensions/back/disable_unsupported_ND_operations.py
extensions/back/EnableConstantStridedSlice.py
extensions/back/FakeOutputResolver.py
extensions/back/ForceStrictPrecision.py
extensions/back/fuse_sub_div_min.py
extensions/back/FuseTransposesSequence.py
extensions/back/GatherNormalizer.py
extensions/back/GroupedConvWeightsNormalize.py
extensions/back/insert_compatibility_l2normalization.py
extensions/back/InterpolateReshape.py
extensions/back/kaldi_remove_memory_output.py
@@ -75,7 +73,6 @@ extensions/front/AttributedRollToRoll.py
extensions/front/binary_quantize_normalization.py
extensions/front/broadcast_with_range.py
extensions/front/caffe/__init__.py
extensions/front/caffe/accum_ext.py
extensions/front/caffe/argmax_ext.py
extensions/front/caffe/ArgMaxFlatten.py
extensions/front/caffe/axpy.py
@@ -86,11 +83,9 @@ extensions/front/caffe/bn.py
extensions/front/caffe/bn_ext.py
extensions/front/caffe/concat_ext.py
extensions/front/caffe/conv_ext.py
extensions/front/caffe/correlation_ext.py
extensions/front/caffe/crop_ext.py
extensions/front/caffe/ctcgreedydecoder_ext.py
extensions/front/caffe/CustomLayersMapping.xml.example
extensions/front/caffe/data_augmentation_ext.py
extensions/front/caffe/detection_output.py
extensions/front/caffe/dropout_ext.py
extensions/front/caffe/elementwise_ext.py
@@ -107,7 +102,6 @@ extensions/front/caffe/MVNCaffeToMVN.py
extensions/front/caffe/normalize_ext.py
extensions/front/caffe/permute_ext.py
extensions/front/caffe/pooling_ext.py
extensions/front/caffe/power_file_ext.py
extensions/front/caffe/prelu_ext.py
extensions/front/caffe/priorbox_clustered_ext.py
extensions/front/caffe/priorbox_ext.py
@@ -124,11 +118,9 @@ extensions/front/caffe/roipooling_ext.py
extensions/front/caffe/scale_ext.py
extensions/front/caffe/shufflechannel_ext.py
extensions/front/caffe/sigmoid.py
extensions/front/caffe/simplernms_ext.py
extensions/front/caffe/slice_ext.py
extensions/front/caffe/slice_to_split.py
extensions/front/caffe/softmax_ext.py
extensions/front/caffe/spatial_transformer_ext.py
extensions/front/caffe/split_to_identity.py
extensions/front/caffe/tanh.py
extensions/front/ChangePlaceholderTypes.py
@@ -388,6 +380,8 @@ extensions/front/tf/broadcast_ext.py
extensions/front/tf/bucketize.py
extensions/front/tf/bucketize_ext.py
extensions/front/tf/Cast_ext.py
extensions/front/tf/ClipByValue_ext.py
extensions/front/tf/ClipByValueTFTransformation.py
extensions/front/tf/ComplexAbs.py
extensions/front/tf/ComplexAbsAfterComplex.py
extensions/front/tf/concat.py
@@ -632,7 +626,6 @@ extensions/middle/ReverseTransposeNormalization.py
extensions/middle/ReverseV2ToReverseSequence.py
extensions/middle/RNNSequenceNormalizeToIE.py
extensions/middle/ScaleInput.py
extensions/middle/SequenceLengthToMask.py
extensions/middle/SharedWeightsDuplication.py
extensions/middle/SliceConverter.py
extensions/middle/SliceLikeToStridedSlice.py
@@ -656,7 +649,6 @@ extensions/middle/UpsampleToResample.py
extensions/middle/UselessMerge.py
extensions/middle/UselessSplitEraser.py
extensions/ops/__init__.py
extensions/ops/accum.py
extensions/ops/activation_ops.py
extensions/ops/adaptive_avg_pooling.py
extensions/ops/argmax.py
@@ -671,15 +663,14 @@ extensions/ops/BN.py
extensions/ops/box_nms.py
extensions/ops/bucketize.py
extensions/ops/Cast.py
extensions/ops/ClipByValueTF.py
extensions/ops/constant_fill.py
extensions/ops/ConvertLike.py
extensions/ops/copyop.py
extensions/ops/correlation.py
extensions/ops/ctc_greedy_decoder.py
extensions/ops/ctc_greedy_decoder_seq_len.py
extensions/ops/ctc_loss.py
extensions/ops/cumsum.py
extensions/ops/data_augmentation.py
extensions/ops/depth_to_space.py
extensions/ops/dequantize_linear.py
extensions/ops/DetectionOutput.py
@@ -729,8 +720,6 @@ extensions/ops/ONNXResize11.py
extensions/ops/pack.py
extensions/ops/parameter.py
extensions/ops/pnorm.py
extensions/ops/power_file.py
extensions/ops/prediction_heatmap.py
extensions/ops/prelu.py
extensions/ops/priorbox.py
extensions/ops/priorbox_clustered.py
@@ -758,7 +747,6 @@ extensions/ops/scatter.py
extensions/ops/scatternd.py
extensions/ops/select.py
extensions/ops/shufflechannel.py
extensions/ops/simplernms.py
extensions/ops/size.py
extensions/ops/slice_like.py
extensions/ops/space_to_depth.py
@@ -767,7 +755,6 @@ extensions/ops/sparse_reshape.py
extensions/ops/sparse_segment_mean.py
extensions/ops/sparse_segment_sqrtn.py
extensions/ops/sparse_segment_sum.py
extensions/ops/spatial_transformer.py
extensions/ops/splice.py
extensions/ops/split.py
extensions/ops/stop_gradient.py
@@ -845,7 +832,6 @@ mo/front/common/partial_infer/eltwise.py
mo/front/common/partial_infer/multi_box_detection.py
mo/front/common/partial_infer/multi_box_prior.py
mo/front/common/partial_infer/random_uniform.py
mo/front/common/partial_infer/reshape.py
mo/front/common/partial_infer/roipooling.py
mo/front/common/partial_infer/utils.py
mo/front/common/register_custom_ops.py

View File

@@ -5,10 +5,9 @@ import numpy as np
from extensions.ops.split import VariadicSplit
from mo.back.replacement import BackReplacementPattern
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, is_fully_defined
from mo.front.tf.graph_utils import create_op_node_with_second_input, create_op_with_const_inputs
from mo.graph.graph import Graph
from mo.ops.const import Const
from mo.ops.reshape import Reshape
@@ -41,6 +40,7 @@ class CellNormalizer(BackReplacementPattern):
WR_shape = node.in_port(WR_input_id).data.get_shape()
assert WR_shape is not None, "Undefined 'WR' input shape for Cell node '{}'".format(cell_name)
assert is_fully_defined(WR_shape), 'Not fully defined shape for WR for Cell node "{}"'.format(cell_name)
num_elements_in_WR = np.prod(WR_shape)
input_size = (num_elements_in_WR / (hidden_size_coef * hidden_size)) - hidden_size

View File

@@ -6,7 +6,7 @@ import numpy as np
from extensions.back.ReshapeMutation import ReshapeMutation
from extensions.back.ReverseInputChannels import ApplyReverseChannels
from mo.back.replacement import BackReplacementPattern
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import shape_array, is_fully_defined, int64_array
from mo.front.tf.graph_utils import create_op_node_with_second_input, create_op_with_const_inputs
from mo.graph.graph import Graph, Node
from mo.ops.const import Const
@@ -24,18 +24,18 @@ def resolve_convolution_with_group(node: Node, group: int, ir_version: str):
assert len(weights_shape) in [3, 4, 5]
assert weights_shape[0] % group == 0
assert int64_array(node.output).ndim == 0
if ir_version == 'V7':
if weights_shape[0] == node.output:
# weights are already is in [G*O I X Y] format
return
new_shape = int64_array([node.output, -1, *weights_shape[2:]])
new_shape = shape_array([node.output, -1, *weights_shape[2:]])
elif ir_version == 'V10':
# TODO rewrite this transformation to generate a shape-computing sub-graph. Ticket 62076
I = input_shape[1]
new_shape = int64_array([group, node.output / group, I / group, *weights_shape[2:]])
assert np.prod(weights_shape) == np.prod(new_shape), \
'Initial weights shape {}, grouped weights shape {}'.format(weights_shape, new_shape)
new_shape = shape_array([group, node.output // group, I // group, *weights_shape[2:]])
assert is_fully_defined(weights_shape[2:]) and is_fully_defined(I) and \
np.prod(weights_shape) == np.prod(new_shape), 'Initial weights shape {}, grouped weights shape {}' \
''.format(weights_shape, new_shape)
del node['group']
node['type'] = 'GroupConvolution'
else:
@@ -244,12 +244,12 @@ class DeconvolutionNormalizer(BackReplacementPattern):
assert I % group == 0
assert node.output % group == 0
new_shape = int64_array([group, I / group, node.output / group, *weights_shape[2:]])
new_shape = shape_array([group, I // group, node.output // group, *weights_shape[2:]])
assert np.prod(weights_shape) == np.prod(new_shape), \
'Initial weights shape {}, grouped weights shape {}'.format(weights_shape, new_shape)
reshape = create_op_node_with_second_input(graph, Reshape, int64_array(new_shape),
{'override_output_shape': True},
assert not is_fully_defined(new_shape) or not is_fully_defined(weights_shape) or \
np.prod(weights_shape) == np.prod(new_shape), 'Initial weights shape {}, grouped weights shape {}' \
''.format(weights_shape, new_shape)
reshape = create_op_node_with_second_input(graph, Reshape, new_shape, {'override_output_shape': True},
node.in_port(1).get_source().node)
node.in_port(1).get_connection().set_source(reshape.out_port(0))

View File

@@ -24,7 +24,7 @@ class CropToStridedSlice(BackReplacementPattern):
def pattern():
return dict(
nodes=[
('crop', dict(type='Crop'))
('crop', dict(op='Crop'))
],
edges=[]
)

View File

@@ -5,7 +5,7 @@ from extensions.ops.elementwise import Add
from mo.back.replacement import BackReplacementPattern
from mo.front.common.partial_infer.utils import int64_array
from mo.front.tf.graph_utils import create_op_with_const_inputs
from mo.graph.graph import Graph, rename_nodes, rename_node
from mo.graph.graph import Graph, rename_nodes
class FakeOutputResolver(BackReplacementPattern):

View File

@@ -1,38 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from mo.back.replacement import BackReplacementPattern
from mo.front.common.partial_infer.utils import int64_array
from mo.graph.graph import Graph
from mo.ops.const import Const
class GroupedConvWeightsNormalize(BackReplacementPattern):
"""
This pass is a workaround for nGraph GroupedConvolution operation
It requires that weights layout will be next: G*O*I,1,H,W
"""
enabled = True
force_clean_up = True
def pattern(self):
return dict(
nodes=[
('conv', {'type': 'Convolution', 'group': lambda x: x != 1}),
('weights', {'type': 'Const', 'kind': 'op'}),
('weights_data', {'kind': 'data'}),
],
edges=[('weights', 'weights_data'), ('weights_data', 'conv')]
)
def replace_pattern(self, graph: Graph, match: dict):
conv = match['conv']
weights = match['weights']
input_shape = conv.in_port(0).data.get_shape()
new_weights_shape = int64_array([(weights.value.shape[0] * weights.value.shape[1]) / (input_shape[1] / conv.group), input_shape[1] / conv.group, *weights.value.shape[2:]])
new_weights = Const(graph, {'value': np.reshape(weights.value, new_weights_shape),
'name': weights.soft_get('name', weights.id) + '_new'}).create_node()
weights.out_port(0).get_connection().set_source(new_weights.out_port(0))
new_weights.infer(new_weights)

View File

@@ -17,7 +17,7 @@ from mo.ops.strided_slice import StridedSlice
class ProposalMutation(BackReplacementPattern):
enabled = True
force_clean_up = True
force_shape_inference = True
def run_before(self):
return [ReshapeMutation, StridedSliceMasksNormalizer]

View File

@@ -460,10 +460,6 @@ class ApplyReverseChannels(BackReplacementPattern):
run_not_recursively = True
force_clean_up = True
def run_before(self):
from extensions.back.GroupedConvWeightsNormalize import GroupedConvWeightsNormalize
return [GroupedConvWeightsNormalize]
def find_and_replace_pattern(self, graph: Graph):
"""
Following transformations should run in strict order, that is why we disabled them all and run here

View File

@@ -1,8 +1,6 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from extensions.back.ReshapeMutation import ReshapeMutation
from mo.back.replacement import BackReplacementPattern
from mo.front.common.partial_infer.utils import int64_array
@@ -40,9 +38,6 @@ class SelectBroadcast(BackReplacementPattern):
if select.has_valid('format') and select['format'] == 'tf':
condition = select.in_node(0)
input_1 = select.in_node(1)
input_2 = select.in_node(2)
assert np.array_equal(input_1.shape, input_2.shape)
if len(condition.shape) == 1 and len(input_1.shape) > 1:
unsqueeze_op = create_op_node_with_second_input(

View File

@@ -1,35 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from mo.back.replacement import BackReplacementPattern
from mo.graph.graph import Node, Graph
from mo.utils.error import Error
class DisableUnsupportedNDOperations(BackReplacementPattern):
"""
This pass disables ND Convolutions/Deconvolutions/Poolings
"""
enabled = False
unsupported_operations = ['Convolution', 'Deconvolution', 'Pooling']
def find_and_replace_pattern(self, graph: Graph):
unsupported_nodes = []
for node in graph.nodes():
node = Node(graph, node)
if node.kind == 'op' and node.soft_get('type') in self.unsupported_operations:
input_shape = node.in_node(0).shape
if len(input_shape) > 4:
unsupported_nodes.append((node.id, node.type))
if len(unsupported_nodes) == 0:
return
error_message = "\nOperations below were marked as unsupported due to they expect more than two spatial dims" \
" (input shape length more than 4)\n"
error_message += "List of unsupported operations ({})\n".format(len(unsupported_nodes))
for node, type in unsupported_nodes:
error_message += " {} {}\n".format(type, node)
raise Error(error_message)

View File

@@ -124,8 +124,6 @@ class OpVersioning(BackReplacementPattern):
]))
opset_1_experimental_ops = set(map(lambda s: s.lower(), [
"SimplerNMS",
"SpatialTransformer",
"ExperimentalDetectronGenerateProposalsSingleImage",
"ExperimentalDetectronTopKROIs",
"ExperimentalDetectronROIFeatureExtractor",

View File

@@ -3,10 +3,10 @@
import logging as log
from extensions.ops.mvn import MVN
from mo.front.common.replacement import FrontReplacementPattern
from mo.front.tf.graph_utils import create_op_with_const_inputs
from mo.graph.graph import Graph, rename_nodes
from extensions.ops.mvn import MVN
from mo.middle.pattern_match import apply_pattern

View File

@@ -1,21 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from extensions.ops.accum import AccumOp
from mo.front.caffe.collect_attributes import collect_attributes
from mo.front.extractor import FrontExtractorOp
class AccumFrontExtractor(FrontExtractorOp):
op = 'Accum'
enabled = True
@classmethod
def extract(cls, node):
proto_layer = node.pb
param = proto_layer.accum_param
attrs = collect_attributes(param)
# update the attributes of the node
AccumOp.update_node_stat(node, attrs)
return cls.enabled

View File

@@ -1,40 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from extensions.ops.correlation import CorrelationOp
from mo.front.caffe.collect_attributes import merge_attrs
from mo.front.common.extractors.utils import layout_attrs
from mo.front.extractor import FrontExtractorOp
class CorrelationFrontExtractor(FrontExtractorOp):
op = 'Correlation'
enabled = True
@classmethod
def extract(cls, node):
proto_layer = node.pb
param = proto_layer.correlation_param
corr_type = 'caffe.CorrelationParameter.MULTIPLY'
if param.correlation_type == 1:
corr_type = 'caffe.CorrelationParameter.SUBTRACT'
update_attrs = {
'pad': param.pad,
'kernel_size': param.kernel_size,
'max_displacement': param.max_displacement,
'stride_1': param.stride_1,
'stride_2': param.stride_2,
'single_direction': param.single_direction,
'do_abs': int(param.do_abs),
'correlation_type': corr_type,
}
mapping_rule = merge_attrs(param, update_attrs)
mapping_rule.update(layout_attrs())
# update the attributes of the node
CorrelationOp.update_node_stat(node, mapping_rule)
return cls.enabled

View File

@@ -15,7 +15,6 @@ class CropFrontExtractor(FrontExtractorOp):
proto_layer = node.pb
param = proto_layer.crop_param
mapping_rule = {
'type': 'Crop',
'axis': param.axis,
'offset': param.offset,
'dim': None, # set in infer

View File

@@ -1,45 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from extensions.ops.data_augmentation import DataAugmentationOp
from mo.front.caffe.collect_attributes import merge_attrs
from mo.front.caffe.extractors.utils import embed_input
from mo.front.extractor import FrontExtractorOp
class DataAugmentationFrontExtractor(FrontExtractorOp):
op = 'DataAugmentation'
enabled = True
@classmethod
def extract(cls, node):
proto_layer = node.pb
param = proto_layer.augmentation_param
# slice_dim is deprecated parameter and is used as alias for axis
# however if slice_dim is defined and axis is default, we use slice_dim
update_attrs = {
'crop_width': param.crop_width,
'crop_height': param.crop_height,
'write_augmented': param.write_augmented,
'max_multiplier': param.max_multiplier,
'augment_during_test': int(param.augment_during_test),
'recompute_mean': param.recompute_mean,
'write_mean': param.write_mean,
'mean_per_pixel': int(param.mean_per_pixel),
'mean': param.mean,
'mode': param.mode,
'bottomwidth': param.bottomwidth,
'bottomheight': param.bottomheight,
'num': param.num,
'chromatic_eigvec': param.chromatic_eigvec
}
mapping_rule = merge_attrs(param, update_attrs)
if node.model_pb:
for index in range(0, len(node.model_pb.blobs)):
embed_input(mapping_rule, index + 1, 'custom_{}'.format(index), node.model_pb.blobs[index].data)
# update the attributes of the node
DataAugmentationOp.update_node_stat(node, mapping_rule)
return cls.enabled

View File

@@ -1,22 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from extensions.ops.power_file import PowerFileOp
from mo.front.caffe.collect_attributes import collect_attributes
from mo.front.extractor import FrontExtractorOp
class PowerFileFrontExtractor(FrontExtractorOp):
op = 'PowerFile'
enabled = True
@classmethod
def extract(cls, node):
proto_layer = node.pb
param = proto_layer.power_file_param
attrs = collect_attributes(param)
# update the attributes of the node
PowerFileOp.update_node_stat(node, attrs)
return cls.enabled

View File

@@ -1,32 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from extensions.ops.simplernms import SimplerNMSOp
from mo.front.caffe.collect_attributes import merge_attrs
from mo.front.extractor import FrontExtractorOp
class SimplerNMSFrontExtractor(FrontExtractorOp):
op = 'SimplerNMS'
enabled = True
@classmethod
def extract(cls, node):
proto_layer = node.pb
param = proto_layer.simpler_nms_param
update_attrs = {
'cls_threshold': param.cls_threshold,
'max_num_proposals': param.max_num_proposals,
'iou_threshold': param.iou_threshold,
'min_bbox_size': param.min_bbox_size,
'feat_stride': param.feat_stride,
'pre_nms_topn': param.pre_nms_topn,
'post_nms_topn': param.post_nms_topn,
'scale': param.scale,
}
mapping_rule = merge_attrs(param, update_attrs)
# update the attributes of the node
SimplerNMSOp.update_node_stat(node, mapping_rule)
return cls.enabled

View File

@@ -1,36 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from extensions.ops.spatial_transformer import SpatialTransformOp
from mo.front.caffe.collect_attributes import merge_attrs
from mo.front.extractor import FrontExtractorOp
class SpatialTransformFrontExtractor(FrontExtractorOp):
op = 'SpatialTransformer'
enabled = True
@classmethod
def extract(cls, node):
proto_layer = node.pb
param = proto_layer.st_param
update_attrs = {
'transform_type': param.transform_type,
'sampler_type': param.sampler_type,
'output_H': param.output_H,
'output_W': param.output_W,
'to_compute_dU': int(param.to_compute_dU),
'theta_1_1': param.theta_1_1,
'theta_1_2': param.theta_1_2,
'theta_1_3': param.theta_1_3,
'theta_2_1': param.theta_2_1,
'theta_2_2': param.theta_2_2,
'theta_2_3': param.theta_2_3
}
mapping_rule = merge_attrs(param, update_attrs)
# update the attributes of the node
SpatialTransformOp.update_node_stat(node, mapping_rule)
return cls.enabled

View File

@@ -1,14 +1,11 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging as log
from extensions.ops.BatchNormInference import BatchNormInference
from mo.front.extractor import FrontExtractorOp
from mo.front.onnx.extractors.utils import onnx_attr
class BatchNormalizationExtractor(FrontExtractorOp):
op = 'BatchNormalization'
enabled = True

View File

@@ -1,10 +1,10 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from onnx.mapping import TENSOR_TYPE_TO_NP_TYPE
from extensions.ops.parameter import Parameter
from mo.front.common.partial_infer.utils import shape_array, dynamic_dimension_value
from mo.front.extractor import FrontExtractorOp
@@ -16,7 +16,8 @@ class PlaceholderFrontExtractor(FrontExtractorOp):
def extract(cls, node):
t_type = node.pb.type.tensor_type
attrs = {
'shape': np.array([d.dim_value for d in t_type.shape.dim], dtype=np.int64),
'shape': shape_array([d.dim_value if (not hasattr(d, 'dim_param') or d.dim_param == '') and d.dim_value != 0
else dynamic_dimension_value for d in t_type.shape.dim]),
'data_type': TENSOR_TYPE_TO_NP_TYPE[t_type.elem_type]
}
Parameter.update_node_stat(node, attrs)

View File

@@ -0,0 +1,27 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from extensions.ops.elementwise import Minimum, Maximum
from mo.front.common.replacement import FrontReplacementSubgraph
from mo.graph.graph import Graph, rename_nodes
class ClipByValueTFTransformation(FrontReplacementSubgraph):
"""
The transformation replaces the ClipByValueTF operation which works as Clamp but supports broadcasting of inputs
with Minimum and Maximum.
"""
enabled = True
def find_and_replace_pattern(self, graph: Graph):
for cbv in graph.get_op_nodes(op='ClipByValueTF'):
cbv_name = cbv.soft_get('name', cbv.id)
minimum = Minimum(graph, {'name': cbv_name + '/CLipMinimum'}).create_node()
maximum = Maximum(graph, {'name': cbv_name + '/CLipMaximum'}).create_node()
minimum.in_port(0).connect(cbv.in_port(0).get_source())
minimum.in_port(1).connect(cbv.in_port(2).get_source())
maximum.in_port(0).connect(minimum.out_port(0))
maximum.in_port(1).connect(cbv.in_port(1).get_source())
cbv.out_port(0).get_connection().set_source(maximum.out_port(0))
rename_nodes([(cbv, cbv_name + '/TBR'), (maximum, cbv_name)])
graph.remove_node(cbv.id)

View File

@@ -0,0 +1,15 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from mo.front.extractor import FrontExtractorOp
from extensions.ops.ClipByValueTF import ClibByValueTF
class ClipByValueExtractor(FrontExtractorOp):
op = 'ClipByValue'
enabled = True
@classmethod
def extract(cls, node):
ClibByValueTF.update_node_stat(node, {})
return cls.enabled

View File

@@ -1,9 +1,8 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from mo.front.common.partial_infer.utils import int64_array
from mo.front.extractor import FrontExtractorOp
from mo.front.tf.extractors.utils import tf_dtype_extractor
from mo.front.tf.extractors.utils import tf_dtype_extractor, tf_tensor_shape
from mo.ops.op import Op
@@ -20,7 +19,6 @@ class IteratorGetNextExtractor(FrontExtractorOp):
extracted_types.append(tf_dtype_extractor(t))
result_shapes = []
for shape_pb in shapes:
shape = shape_pb.dim
result_shapes.append(int64_array([dim.size for dim in shape]))
result_shapes.append(tf_tensor_shape(shape_pb))
Op.update_node_stat(node, {'shapes': result_shapes, 'types': extracted_types})
return cls.enabled

View File

@@ -948,7 +948,7 @@ class ObjectDetectionAPIDetectionOutputReplacement(FrontReplacementFromConfigFil
background_label_id=background_label_id,
code_type='caffe.PriorBoxParameter.CENTER_SIZE', pad_mode='caffe.ResizeParameter.CONSTANT',
resize_mode='caffe.ResizeParameter.WARP',
num_classes=num_classes,
num_classes=num_classes + 1,
confidence_threshold=_value_or_raise(match, pipeline_config, 'postprocessing_score_threshold'),
top_k=_value_or_raise(match, pipeline_config, 'postprocessing_max_detections_per_class'),
keep_top_k=_value_or_raise(match, pipeline_config, 'postprocessing_max_total_detections'),
@@ -1475,6 +1475,7 @@ class ObjectDetectionAPISSDPostprocessorReplacement(FrontReplacementFromConfigFi
detection_output_node = detection_output_op.create_node(
[reshape_loc_node, reshape_conf_node, priors_node],
dict(name=detection_output_op.attrs['type'],
num_classes=num_classes,
confidence_threshold=_value_or_raise(match, pipeline_config, 'postprocessing_score_threshold'),
top_k=_value_or_raise(match, pipeline_config, 'postprocessing_max_detections_per_class'),
keep_top_k=_value_or_raise(match, pipeline_config, 'postprocessing_max_total_detections'),

View File

@@ -8,7 +8,6 @@ import numpy as np
from extensions.ops.Cast import Cast
from extensions.ops.elementwise import Div
from extensions.ops.interpolate import Interpolate
from mo.front.common.layout import get_height_dim, get_width_dim
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.replacement import FrontReplacementOp
from mo.front.tf.graph_utils import create_op_with_const_inputs
@@ -32,13 +31,9 @@ def replace_tf_resize(graph: Graph, resize: Node, interpolation_mode: str):
shape = Shape(graph, {'name': resize_name + '/shapeof'}).create_node()
layout = graph.graph['layout']
height_dim = get_height_dim(layout, 4)
width_dim = get_width_dim(layout, 4)
ss = create_op_with_const_inputs(graph, StridedSlice,
{1: int64_array([height_dim]),
2: int64_array([width_dim + 1]),
{1: int64_array([1]),
2: int64_array([3]),
3: int64_array([1])
},
{'name': resize_name + '/StridedSlice',
@@ -74,7 +69,7 @@ def replace_tf_resize(graph: Graph, resize: Node, interpolation_mode: str):
interpolate4 = create_op_with_const_inputs(graph, Interpolate,
{
3: int64_array([height_dim, width_dim])
3: int64_array([1, 2])
},
{
'name': resize_name + '/interpolate_4',

View File

@@ -7,7 +7,7 @@ from mo.front.tf.extractors.utils import tf_tensor_shape
from mo.graph.graph import Node
class TensorArrayGatherV3Exteractor(FrontExtractorOp):
class TensorArrayGatherV3Extractor(FrontExtractorOp):
op = "TensorArrayGatherV3"
enabled = True

View File

@@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
from extensions.ops.elementwise import Add, Mul, Sub, Div, Maximum, Minimum, Pow, LogicalAnd, LogicalOr, Equal, \
GreaterEqual, Greater, Less, LessEqual, NotEqual, FloorMod, BiasAdd, SquaredDifference, Round
GreaterEqual, Greater, Less, LessEqual, NotEqual, FloorMod, BiasAdd, SquaredDifference, Round, Mod
from mo.front.extractor import FrontExtractorOp
from mo.front.tf.extractors.utils import tf_dtype_extractor
from mo.ops.eltwise_n import EltwiseNAdd
@@ -70,6 +70,16 @@ class SubExtractor(FrontExtractorOp):
return cls.enabled
class ModExtractor(FrontExtractorOp):
op = 'Mod'
enabled = True
@classmethod
def extract(cls, node):
Mod.update_node_stat(node, {'data_type': tf_dtype_extractor(node.pb.attr["T"].type)})
return cls.enabled
class DivExtractor(FrontExtractorOp):
op = 'RealDiv'
enabled = True

View File

@@ -7,13 +7,13 @@ from mo.front.common.partial_infer.utils import int64_array
from mo.front.extractor import FrontExtractorOp
from mo.front.tf.extractors.utils import tf_int_list
class ExtractImagePatchesExtractor(FrontExtractorOp):
op = 'ExtractImagePatches'
enabled = True
@classmethod
def extract(cls, node):
attrs = {
'spatial_dims': int64_array([1, 2]),
'sizes': tf_int_list(node.pb.attr['ksizes'].list),
@@ -23,3 +23,4 @@ class ExtractImagePatchesExtractor(FrontExtractorOp):
}
ExtractImagePatches.update_node_stat(node, attrs)
return cls.enabled

View File

@@ -62,7 +62,7 @@ class TFNonMaxSuppressionNormalize(FrontReplacementSubgraph):
num_of_outputs = len([port for port in nms.out_ports().values() if not port.disconnected()])
if num_of_outputs == 1:
return
continue
# prepare output #1
crop_score_indices_name = nms_name + '/Crop_scores_'

View File

@@ -7,6 +7,7 @@ import numpy as np
from extensions.ops.elementwise import Add, Mul
from mo.front.common.layout import get_features_dim
from mo.front.common.partial_infer.utils import compatible_dims
from mo.front.extractor import get_node_id_with_ports
from mo.front.tf.graph_utils import create_op_with_const_inputs
from mo.graph.graph import Graph, Node
@@ -42,7 +43,7 @@ class AddMeanScaleValues(MiddleReplacementPattern):
return
assert input_node.has_valid('shape')
features_dim_idx = get_features_dim(graph.graph['layout'], len(input_node.shape))
assert value.size == input_node.shape[features_dim_idx] or value.size == 1
assert compatible_dims(value.size, input_node.shape[features_dim_idx]) or value.size == 1
shape = np.ones(len(input_node.shape), dtype=np.int64)
shape[features_dim_idx] = value.size

View File

@@ -10,7 +10,7 @@ from extensions.middle.InsertLayoutPropagationTransposes import is_input_data_in
is_output_data_in_correct_layout
from extensions.middle.LayoutChangeForConstantShapePaths import LayoutChangeForConstantShapePaths
from extensions.middle.pass_separator import PostMiddleStart
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, shape_array
from mo.graph.graph import Graph, Node
from mo.graph.perm_inputs import get_node_with_permutation
from mo.graph.port import Port
@@ -88,8 +88,7 @@ class ApplyPermutation(MiddleReplacementPattern):
all([attrs.get('input_permutation', False) for u, v, attrs in graph.out_edges(node.id, data=True)]):
continue
if len(
node.in_nodes()) != 0: # there are data nodes without input operation node inside the tensor iterator
if len(node.in_nodes()) != 0: # there are data nodes without input operation node inside the TensorIterator
edge_attrs = graph.get_edge_data(node.in_node(0).id, node.id)[0]
if is_output_data_in_correct_layout(node.in_node(0), edge_attrs['out']):
log.debug('Do not permute data node attrs for node "{}" output port "{}"'.format(node.in_node(0).id,
@@ -99,7 +98,7 @@ class ApplyPermutation(MiddleReplacementPattern):
# Apply permutation for shape and value if exists
if len(node.permutation.perm) == 0:
continue
node.shape = np.array(node.shape)[node.permutation.perm]
node.shape = shape_array(node.shape)[node.permutation.perm]
if node.has_valid('value'):
assert len(node.value.shape) == len(node.permutation.perm), \
'Node {} has shape {} and permutation {} that does not match. Their lengths should be equal' \

View File

@@ -3,13 +3,12 @@
import logging as log
from copy import deepcopy
from typing import Callable
import numpy as np
from extensions.middle.SliceConverter import ConvertSlice
from extensions.ops.split import VariadicSplit
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, shape_array
from mo.graph.graph import Graph, Node, add_opoutput
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.const import Const
@@ -70,16 +69,27 @@ class ConvertGroupedStridedSlice(MiddleReplacementPattern):
if input_data.value is not None:
continue
input_shape = np.array(input_data.shape)
if input_data.shape is None:
continue
input_shape = shape_array(input_data.shape)
# Get all unique StridedSlice consumers
out_nodes = [node for node in input_data.out_nodes() if node.op == 'StridedSlice' and node.in_node(0).name == input_data.name]
sorted_out_nodes = sorted(out_nodes, key=lambda n: list(n.slices))
out_nodes = unique_by(sorted_out_nodes, strided_slices_equality)
out_nodes = [node for node in input_data.out_nodes() if node.op == 'StridedSlice' and
node.in_node(0).id == input_data.id]
if len(out_nodes) <= 1:
continue
valid_for_replacement = True
for n in out_nodes:
if any(not isinstance(s, slice) for s in n.slices):
# this is a slice with dynamic dimension. Such operation is not valid for replacement
valid_for_replacement = False
if not valid_for_replacement:
continue
sorted_out_nodes = sorted(out_nodes, key=lambda n: list(n.slices))
out_nodes = unique_by(sorted_out_nodes, strided_slices_equality)
for node in out_nodes:
if len(node.slices) != len(out_nodes[0].slices):
@@ -89,7 +99,8 @@ class ConvertGroupedStridedSlice(MiddleReplacementPattern):
split_channel_dim = None
for dim_id, s in enumerate(out_nodes[0].slices):
l, r, stride = s.start, s.stop, s.step
if l != 0 or r != input_shape[dim_id]:
# if both l and r are None then the dimension is not sliced
if (l != 0 or r != input_shape[dim_id]) and (l is not None or r is not None):
if split_channel_dim is None:
split_channel_dim = dim_id
else:

View File

@@ -7,7 +7,7 @@ import logging as log
import numpy as np
from mo.front.common.layout import nhwc_to_nchw_permute
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import shape_array, shape_insert
from mo.front.extractor import update_ie_fields
from mo.graph.graph import Graph
from mo.graph.graph import Node, add_opoutput
@@ -113,17 +113,16 @@ class CustomSubgraphCall(MiddleReplacementPattern):
:param shape: shape to extend.
:return: 4D tensor.
"""
new_shape = int64_array(shape)
new_shape = shape_array(shape)
old_shape_len = len(shape)
for x in range(
4 - old_shape_len): # TODO think about proper way to add additional dimensions considering layout
if len(
new_shape) <= 1: # if the shape is 0D or 1D then we should add additional dimensions to batch dimension
new_shape = np.insert(new_shape, 0, 1)
# new_shape = np.array([1, shape[0], 1, 1])
# TODO think about proper way to add additional dimensions considering layout
for x in range(4 - old_shape_len):
# if the shape is 0D or 1D then we should add additional dimensions to batch dimension
if len(new_shape) <= 1:
new_shape = shape_insert(new_shape, 0, 1)
else:
new_shape = np.insert(new_shape, 1, 1)
new_shape = shape_insert(new_shape, 1, 1)
return new_shape
@staticmethod

View File

@@ -4,6 +4,7 @@
import numpy as np
from extensions.ops.split import Split
from mo.front.common.partial_infer.utils import shape_array
from mo.graph.graph import Node, Graph
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.concat import Concat
@@ -49,7 +50,7 @@ class DecomposeBidirectionalRNNSequence(MiddleReplacementPattern):
node.graph,
name=node.name + '/SplittedBiLSTM/{}/'.format(direction),
attrs={'value': np.take(node.value, [index], axis),
'shape': np.array(np.take(node.value, [index], axis).shape, dtype=np.int64)}
'shape': shape_array(np.take(node.value, [index], axis).shape)}
)
def split_data(self, data: Node):

View File

@@ -5,6 +5,7 @@ import logging as log
import numpy as np
from mo.front.common.partial_infer.utils import shape_insert
from mo.graph.graph import Graph, Node
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.const import Const
@@ -149,7 +150,7 @@ class DilatedConvolution1DConverter(MiddleReplacementPattern):
for port_id in [1, 2]:
current_value = pad.in_port(port_id).get_connection().data.get_value()
new_value_node = Const(pad.graph, {'name': pad.soft_get('name', pad.id) + '/value_{}'.format(port_id),
'value': np.insert(current_value, unsqueeze_axis.item(), 0),
'value': shape_insert(current_value, unsqueeze_axis.item(), 0),
'override_output_shape': True}).create_node()
pad.in_port(port_id).disconnect()
pad.in_port(port_id).connect(new_value_node.out_port(0))

View File

@@ -3,6 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import shape_insert
from mo.graph.graph import Node, Graph
from mo.middle.passes.fusing.helpers import get_tensor_in_port, get_value_in_port
from mo.middle.replacement import MiddleReplacementPattern
@@ -57,7 +58,7 @@ class EltwiseChecker(MiddleReplacementPattern):
self.set_flags_to_false(node, ['can_be_scaleshift'])
return
broadcasted_value_shape = np.insert(value_shape, 0, [1] * (len(tensor_shape) - len(value_shape)))
broadcasted_value_shape = shape_insert(value_shape, 0, [1] * (len(tensor_shape) - len(value_shape)))
feature_dim = min(1, tensor_shape.size - 1) if node.graph.graph['layout'] == 'NCHW' else -1
if feature_channel is not None:

View File

@@ -4,7 +4,7 @@
import numpy as np
from mo.front.common.layout import get_features_dim, shape_for_layout
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, shape_insert, is_fully_defined
from mo.front.tf.graph_utils import create_op_with_const_inputs
from mo.graph.graph import Graph, Node
from mo.middle.replacement import MiddleReplacementPattern
@@ -127,8 +127,8 @@ def normalize_eltwise_inputs(graph: Graph):
producer_port_shape = producer_port.data.get_shape()
new_shape = producer_port_shape.copy()
for unsqueeze_dim in unsqueeze_dims:
new_shape = np.insert(new_shape, unsqueeze_dim, 1)
if producer_port_value is not None:
new_shape = shape_insert(new_shape, unsqueeze_dim, 1)
if producer_port_value is not None and is_fully_defined(new_shape):
unsqueeze_node.out_port(0).data.set_value(np.reshape(producer_port_value, new_shape))
else:
unsqueeze_node.out_port(0).data.set_shape(new_shape)

View File

@@ -4,6 +4,7 @@
import numpy as np
from extensions.ops.tensor_iterator import TensorIterator
from mo.front.common.partial_infer.utils import shape_delete
from mo.graph.graph import Graph, add_opoutput
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.const import Const
@@ -90,7 +91,7 @@ class GRUAndRNNToTensorIterator(MiddleReplacementPattern):
for out in outputs:
add_opoutput(body, out.id, 0, False)
outputs[0].shape = np.delete(outputs[0].shape.copy(), rnn_layer.sequence_dim)
outputs[0].shape = shape_delete(outputs[0].shape, rnn_layer.sequence_dim)
output_unsqueeze_dim = Const(body, dict(name=rnn_layer.name + '/output_unsqueeze_dim',
value=rnn_layer.sequence_dim)).create_node_with_data()
output_unsqueeze = Unsqueeze(body, dict(name=rnn_layer.name + '/output_unsqueeze/', internal_layer_id=2))

View File

@@ -6,7 +6,7 @@ import numpy as np
from typing import List
from extensions.ops.interpolate import Interpolate
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, shape_array
from mo.front.tf.graph_utils import create_op_with_const_inputs
from mo.graph.graph import Graph, Node, rename_nodes
from mo.middle.replacement import MiddleReplacementPattern
@@ -200,7 +200,7 @@ def replace_sequence(seq: List[Node], graph: Graph):
axis_to_size = sorted(list(dict(dims_and_scales_).items()), key=lambda x: x[0])
axes_of_node = int64_array([z[0] for z in axis_to_size])
sizes = int64_array([z[1] for z in axis_to_size])
sizes = shape_array([z[1] for z in axis_to_size])
scales = np.ones(len(axis_to_size))
else:
for interp in seq:
@@ -210,7 +210,7 @@ def replace_sequence(seq: List[Node], graph: Graph):
axis_to_size = sorted(dims_and_scales_, key=lambda x: x[0])
axes_of_node = int64_array([z[0] for z in axis_to_size])
sizes = int64_array([z[1] for z in axis_to_size])
sizes = shape_array([z[1] for z in axis_to_size])
scales = np.array([z[2] for z in axis_to_size])
fst_interp_node = seq[0]

View File

@@ -1,11 +1,10 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from extensions.middle.RNNSequenceNormalizeToIE import RNNSequenceNormalize
from extensions.ops.lstm_cell import LSTMCell
from extensions.ops.tensor_iterator import TensorIterator
from mo.front.common.partial_infer.utils import shape_delete
from mo.graph.graph import Graph, add_opoutput
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.const import Const
@@ -84,7 +83,7 @@ class LSTMToTensorIterator(MiddleReplacementPattern):
for out in outputs:
add_opoutput(body, out.id, 0, False)
outputs[0].shape = np.delete(outputs[0].shape, lstm.sequence_dim)
outputs[0].shape = shape_delete(outputs[0].shape, lstm.sequence_dim)
output_unsqueeze = Unsqueeze(body, dict(name=lstm.name + 'output_unsqueeze', internal_layer_id=2))
unsqueeze_dim_data = Const(body, {'name': lstm.name + '/output_unsqueeze_dim',
'value': [lstm.sequence_dim]}).create_node_with_data()

View File

@@ -4,7 +4,7 @@
import numpy as np
from extensions.ops.transpose import Transpose
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, shape_insert
from mo.graph.graph import Graph
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.const import Const
@@ -184,14 +184,12 @@ class MXNetRNNSequenceNormalize(MiddleReplacementPattern):
mxnet_shape = lstm.out_node(0).shape.copy()
if lstm.batch_dim == 0:
mo_shape = np.array([input.shape[lstm.batch_dim], input.shape[lstm.sequence_dim], lstm.hidden_size],
dtype=np.int64)
mo_shape = int64_array([input.shape[lstm.batch_dim], input.shape[lstm.sequence_dim], lstm.hidden_size])
else:
mo_shape = np.array([input.shape[lstm.sequence_dim], input.shape[lstm.batch_dim], lstm.hidden_size],
dtype=np.int64)
mo_shape = int64_array([input.shape[lstm.sequence_dim], input.shape[lstm.batch_dim], lstm.hidden_size])
if lstm.has_num_directions:
mo_shape = np.insert(mo_shape, 1, np.int64(num_directions))
mo_shape = shape_insert(mo_shape, 1, np.int64(num_directions))
lstm_name = lstm.soft_get('name', lstm.id)

View File

@@ -3,6 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import shape_insert, int64_array
from mo.graph.graph import Graph, Node
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.concat import Concat
@@ -131,9 +132,9 @@ class MXNetSplitLayersToRNNSequence(MiddleReplacementPattern):
output_data = rnn_layer.out_node(0)
# Output nodes creating:
state_size = np.array([input.shape[rnn_layer.batch_dim], rnn_layer.hidden_size], dtype=np.int64)
state_size = int64_array([input.shape[rnn_layer.batch_dim], rnn_layer.hidden_size])
if rnn_layer.has_num_directions:
state_size = np.insert(state_size, 0, direction)
state_size = shape_insert(state_size, 0, direction)
output_hidden = Op._create_data_node(
rnn_layer.graph,

View File

@@ -3,6 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import compatible_dims
from mo.graph.graph import Graph
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.op import Op
@@ -105,7 +106,7 @@ class ONNXRNNSequenceNormalize(MiddleReplacementPattern):
for x in (W, R)]
input_size = match['input'].shape[2]
assert input_size == W.shape[-1]
assert compatible_dims(input_size, W.shape[-1])
# Reorder gates: iofc --> fico
gate_reorder = rnn_layer.gate_order

View File

@@ -1,6 +1,8 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging as log
from mo.front.common.partial_infer.utils import is_fully_defined, unmask_shape, shape_array, dynamic_dimension_value
from mo.graph.graph import Graph
from mo.middle.passes.infer import partial_infer
from mo.middle.replacement import MiddleReplacementPattern
@@ -18,4 +20,21 @@ class PartialInfer(MiddleReplacementPattern):
return []
def find_and_replace_pattern(self, graph: Graph):
dynamic_inputs = {}
for parameter in graph.get_op_nodes(op='Parameter'):
param_shape = parameter.soft_get('shape', shape_array(dynamic_dimension_value))
if not is_fully_defined(param_shape):
parameter_name = parameter.soft_get('name', parameter.id)
dynamic_inputs[parameter_name] = param_shape
if dynamic_inputs:
log.error('The model contains input(s) with partially defined shapes: {}. '
'Starting from the 2022.1 release the Model Optimizer can generate an IR with partially defined '
'input shapes ("-1" dimension in the TensorFlow model or dimension with string value in the ONNX '
'model). Some of the OpenVINO plugins require model input shapes to be static, so you should '
'call "reshape" method in the Inference Engine and specify static input shapes. For optimal '
'performance, it is still recommended to update input shapes with fixed ones using "--input" or '
'"--input_shape" command-line parameters.'
.format(','.join('name="{}" shape="{}"'.format(name, unmask_shape(shape))
for name, shape in dynamic_inputs.items())),
extra={'is_warning': True})
partial_infer(graph)

View File

@@ -3,7 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, shape_delete
from mo.front.tf.graph_utils import create_op_node_with_second_input
from mo.graph.graph import Graph
from mo.middle.replacement import MiddleReplacementPattern
@@ -142,7 +142,7 @@ class RNNSequenceNormalize(MiddleReplacementPattern):
for i in rnn_layer.out_nodes():
old_data_node = rnn_layer.out_node(i)
old_shape = old_data_node.shape.copy()
new_shape = np.delete(old_shape, direction_dim[i])
new_shape = shape_delete(old_shape, direction_dim[i])
data = Op._create_data_node(graph, name=rnn_layer.name + '/Out/{}/'.format(i), attrs={'shape': new_shape})
graph.remove_edge(rnn_layer.id, old_data_node.id)

View File

@@ -1,50 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from mo.graph.graph import Graph
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.const import Const
from mo.utils.error import Error
class SequenceLengthToMask(MiddleReplacementPattern):
"""
Convert a sequence length to a sequence mask for CTCGreedyDecoder if its value is available.
"""
enabled = True
def run_before(self):
from extensions.middle.pass_separator import MiddleFinish
return [MiddleFinish]
def find_and_replace_pattern(self, graph: Graph):
for ctc_greedy_decoder in graph.get_op_nodes(op='CTCGreedyDecoder', use_mask_format=True):
ctc_greedy_decoder_name = ctc_greedy_decoder.soft_get('name', ctc_greedy_decoder.id)
sequence_length_value = ctc_greedy_decoder.in_port(1).data.get_value()
if sequence_length_value is None:
raise Error('The second input to the CTCGreedyDecoder node "{}" is not constant. This case is not '
'supported with the Inference Engine.'.format(ctc_greedy_decoder_name))
# transform a sequence length to a sequence mask
logits_shape = ctc_greedy_decoder.in_port(0).data.get_shape()
assert logits_shape is not None and len(logits_shape) == 3, \
"Incorrect shape for logits input of {} node".format(ctc_greedy_decoder_name)
batch_size = logits_shape[1]
time_size = logits_shape[0]
mask_value = np.zeros([batch_size, time_size], dtype=np.float)
for sample_ind, sample_seq_length in enumerate(sequence_length_value):
mask_value[sample_ind, 0:sample_seq_length] = 1
mask_value = np.transpose(mask_value)
# create Const node with computed mask value
mask_node = Const(graph, {'name': ctc_greedy_decoder_name + '/Mask',
'value': mask_value}).create_node()
# connect computed mask to CTCGreedyDecoder node
ctc_greedy_decoder.in_port(1).get_connection().set_source(mask_node.out_port(0))
# remove attribute-marker
del ctc_greedy_decoder['use_mask_format']

View File

@@ -4,7 +4,8 @@
import numpy as np
from extensions.ops.split import VariadicSplit
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, dynamic_dimension, dynamic_dimension_value, \
is_dynamic_slice
from mo.front.tf.graph_utils import create_op_with_const_inputs
from mo.graph.graph import Graph, Node
from mo.graph.perm_inputs import PermuteInputs
@@ -115,8 +116,11 @@ class StridedSliceNormalizer(MiddleReplacementPattern):
def normalize_strided_slice(graph: Graph, node: Node):
input_shape = node.in_port(0).data.get_shape()
input_rank = len(input_shape)
begin, _, _ = StridedSlice.validate_inputs_and_get_args(node)
slice_rank = len(begin)
begin = node.in_port(1).data.get_value()
if begin is not None:
slice_rank = len(begin)
else:
slice_rank = input_rank + np.count_nonzero(node.new_axis_mask) - np.count_nonzero(node.shrink_axis_mask)
StridedSlice.align_mask_with_slice_rank(node, slice_rank) # if StridedSlice is created after partial_infer
StridedSliceNormalizer.normalize_slices_attr(node)
@@ -239,6 +243,8 @@ class StridedSliceNormalizer(MiddleReplacementPattern):
res_slices.append(s)
if not (node.new_axis_mask[i] or node.ellipsis_mask[i]):
res_slices[-1] = slice(*res_slices[-1].indices(data_shape[in_idx])) # convert negative begins/ends
if res_slices[-1] != dynamic_dimension_value and data_shape[in_idx] is not dynamic_dimension and \
res_slices[-1] is not None and not is_dynamic_slice(res_slices[-1]):
res_slices[-1] = slice(*res_slices[-1].indices(data_shape[in_idx])) # convert negative begins/ends
in_idx += 1
node.slices = np.array(res_slices)
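
A small worked example of the fallback above: when the 'begin' value is unknown, the slice rank is recovered from the input rank and the masks (the mask values here are made up for illustration):

import numpy as np

input_rank = 4                              # e.g. a 4D NCHW input
new_axis_mask = np.array([0, 1, 0, 0])      # one inserted axis
shrink_axis_mask = np.array([0, 0, 0, 1])   # one removed axis

# same formula as in normalize_strided_slice when the 'begin' value is not constant
slice_rank = input_rank + np.count_nonzero(new_axis_mask) - np.count_nonzero(shrink_axis_mask)
print(slice_rank)  # 4 + 1 - 1 = 4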

View File

@@ -5,6 +5,7 @@ import logging as log
import numpy as np
from mo.front.common.partial_infer.utils import compatible_dims
from mo.middle.replacement import MiddleReplacementPattern
@@ -71,7 +72,7 @@ class ConditionChecks(MiddleReplacementPattern):
''.format(match['minimum_data'].soft_get('name'), match['Strided_slice_data'].value)
)
else:
assert match['Strided_slice_data'].value == match['minimum_data'].value, \
assert compatible_dims(match['Strided_slice_data'].value, match['minimum_data'].value), \
'Values do not match: {} and {}'.format(match['Strided_slice_data'].value, match['minimum_data'].value)
# Check that bound for Condition and Inputs/Outputs sizes match
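
The assert above is relaxed from strict equality to compatibility. As a rough sketch of the intended semantics (this is not the actual MO utility), two dimensions are compatible when they are equal or when at least one of them is dynamic, i.e. masked:

import numpy as np

def dims_compatible(d1, d2):
    # sketch only: a masked (dynamic) dimension is compatible with anything
    return d1 is np.ma.masked or d2 is np.ma.masked or d1 == d2

static = np.ma.masked_array([2, 3], mask=[False, False])
dynamic = np.ma.masked_array([2, 0], mask=[False, True])
print(dims_compatible(static[1], dynamic[1]))  # True: the second dim of `dynamic` is unknown
print(dims_compatible(static[0], dynamic[0]))  # True: both dims are 2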

View File

@@ -4,9 +4,8 @@
from collections import deque
from copy import deepcopy
import numpy as np
from extensions.ops.tensor_iterator import TensorIterator
from mo.front.common.partial_infer.utils import shape_insert
from mo.graph.graph import Node, Graph, add_opoutput
from mo.middle.replacement import MiddleReplacementPattern
from mo.ops.const import Const
@@ -270,7 +269,7 @@ class TensorIteratorMerge(MiddleReplacementPattern):
shape = ext_inp['internal_data_id'].shape.copy()
assert not ext_inp['internal_data_id'].has_valid('value')
new_input_data = Op._create_data_node(body, ext_inp['internal_data_id'].name + '/UnsqueezedInput',
dict(shape=np.insert(shape, ext_inp['axis'], 1)))
dict(shape=shape_insert(shape, ext_inp['axis'], 1)))
reshape_op = Squeeze(body, dict(name=ext_inp['internal_data_id'].name + '/InputSqueeze'))
reshape_dim_data = Const(body, {'name': ext_inp['internal_data_id'].name + '/ReshapeDim',
@@ -304,7 +303,6 @@ class TensorIteratorMerge(MiddleReplacementPattern):
if ext_out['axis'] is not None:
# Insert unsqueezing resize at output port that has partitioning
assert not ext_out['internal_data_id'].has_valid('value')
reshape_op = Unsqueeze(body, dict(name=ext_out['internal_data_id'].name + '/OutputUnsqueeze'))
reshape_dim_data = Const(body, {'name': ext_out['internal_data_id'].name + '/ReshapeDim',
'value': ext_out['axis']}).create_node_with_data()

View File

@@ -8,11 +8,12 @@ from mo.ops.op import Op
class BlockLSTM(Op):
op = 'BlockLSTM'
enabled = False
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'op': __class__.op,
'infer': __class__.infer,
'op': self.op,
'infer': self.infer,
'type': None,
}
super().__init__(graph, mandatory_props, attrs)
@@ -42,7 +43,7 @@ class BlockLSTM(Op):
input_shape = node.in_node(0).shape
assert len(input_shape) == 3
out_shape = input_shape
node.out_node(0).shape = out_shape
if len(node.out_nodes()) > 1:
node.out_node(1).shape = out_shape
out_shape = input_shape.copy()
node.out_port(0).data.set_shape(out_shape)
if node.is_out_port_connected(1):
node.out_port(1).data.set_shape(out_shape)

View File

@@ -0,0 +1,21 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from mo.graph.graph import Graph
from mo.ops.op import Op
class ClibByValueTF(Op):
"""
The ClipByValue operation from TF, which will be replaced with a front transformation.
"""
enabled = False
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'op': 'ClipByValueTF',
'out_ports_count': 1,
'in_ports_count': 3,
'infer': None
}
super().__init__(graph, mandatory_props, attrs)

View File

@@ -1,8 +1,7 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from mo.front.common.partial_infer.utils import shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -13,7 +12,7 @@ class Enter(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'in_ports_count': 1,
'infer': Enter.enter_infer,
}
@@ -21,9 +20,9 @@ class Enter(Op):
@staticmethod
def enter_infer(node: Node):
output_shape = node.in_node(0).shape
output_value = node.in_node(0).value
output_shape = node.in_port(0).data.get_shape()
output_value = node.in_port(0).data.get_value()
for _, out_node in node.graph.out_edges(node.id):
node.graph.node[out_node]['shape'] = np.array(output_shape)
node.graph.node[out_node]['value'] = None if output_value is None else np.array(output_value)
node.graph.node[out_node]['shape'] = shape_array(output_shape)
node.graph.node[out_node]['value'] = None if output_value is None else output_value.copy()

View File

@@ -4,13 +4,14 @@
import numpy as np
from mo.front.common.layout import shape_for_layout, get_batch_dim, get_features_dim
from mo.front.common.partial_infer.utils import int64_array, tf_window_op_pad_infer
from mo.front.common.partial_infer.utils import tf_window_op_pad_infer, shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
class ExtractImagePatches(Op):
op = "ExtractImagePatches"
enabled = False
def __init__(self, graph: Graph, attrs: dict):
assert 'spatial_dims' in attrs, \
@@ -47,7 +48,7 @@ class ExtractImagePatches(Op):
N = input_shape[get_batch_dim(layout, 4)]
C = input_shape[get_features_dim(layout, 4)]
size_spatial = int64_array(node.sizes)[node.spatial_dims]
size_spatial = shape_array(node.sizes)[node.spatial_dims]
input_spatial_shape = input_shape[node.spatial_dims]
stride_spatial_shape = node.strides[node.spatial_dims]
@@ -66,4 +67,4 @@ class ExtractImagePatches(Op):
height=output_spatial_shape[0],
width=output_spatial_shape[1])
node.out_port(0).data.set_shape(int64_array(out_shape))
node.out_port(0).data.set_shape(out_shape)

View File

@@ -55,17 +55,17 @@ class GRUCell(Op):
'activation_alpha',
'activation_beta',
'clip',
('linear_before_reset', lambda node: bool_to_str(node, 'linear_before_reset')),
('linear_before_reset', lambda node: bool_to_str(node, 'linear_before_reset')),
]
@staticmethod
def infer(node: Node):
assert len(node.out_nodes()) in [1, 2]
hidden_shape = node.in_node(1).shape.copy()
hidden_shape = node.in_port(1).data.get_shape().copy()
mark_input_bins(node, start_port=2)
node.out_node(0).shape = hidden_shape
node.out_port(0).data.set_shape(hidden_shape)
hidden_size = hidden_shape[1]
if node.has_valid('hidden_size'):

View File

@@ -1,7 +1,7 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from mo.graph.graph import Node, Graph
from mo.front.common.partial_infer.elemental import copy_shape_infer
from mo.graph.graph import Graph
from mo.ops.op import Op
@@ -10,10 +10,10 @@ class GatherTree(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'op': __class__.op,
'type': __class__.op,
'op': self.op,
'type': self.op,
'version': 'opset1',
'infer': __class__.infer,
'infer': copy_shape_infer,
'in_ports_count': 4,
'out_ports_count': 1,
}
@@ -21,7 +21,3 @@ class GatherTree(Op):
def supported_attrs(self):
return []
@staticmethod
def infer(node: Node):
node.out_node().shape = node.in_node(0).shape

View File

@@ -4,7 +4,7 @@
import logging as log
import numpy as np
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, is_fully_defined, dynamic_dimension_value
from mo.graph.graph import Node, Graph
from mo.middle.passes.infer import partial_infer
from mo.ops.op import Op
@@ -148,6 +148,7 @@ class If(Op):
:param if_node: The If node to update output ports and shapes
:return: None
"""
node_name = if_node.soft_get('name', if_node.id)
then_outputs = [node for node in if_node.then_graph.get_op_nodes() if node.has('output_id')]
else_outputs = [node for node in if_node.else_graph.get_op_nodes() if node.has('output_id')]
@@ -179,20 +180,45 @@ class If(Op):
# outputs will have the same shapes as then_body results
use_then_shape = else_contains_fake_outputs or not then_contains_fake_outputs
cond_value = if_node.in_port(0).data.get_value()
for port_id in outputs_mapping:
then_else_nodes = outputs_mapping[port_id]
assert 'then_graph' in then_else_nodes.keys(), 'then_graph does not connect with If.out_port[{0}] ' \
'in {1} node!'.format(port_id, if_node.name)
'in {1} node!'.format(port_id, node_name)
assert 'else_graph' in then_else_nodes.keys(), 'else_graph does not connect with If.out_port[{0}] ' \
'in {1} node!'.format(port_id, if_node.name)
'in {1} node!'.format(port_id, node_name)
then_shape = then_else_nodes['then_graph'].in_port(0).data.get_shape()
then_value = then_else_nodes['then_graph'].in_port(0).data.get_value()
else_shape = then_else_nodes['else_graph'].in_port(0).data.get_shape()
else_value = then_else_nodes['else_graph'].in_port(0).data.get_value()
if is_fully_defined(cond_value):
if cond_value.item() is True:
if then_value is not None:
if_node.out_port(port_id).data.set_value(then_value)
else:
if_node.out_port(port_id).data.set_shape(then_shape)
else:
if else_value is not None:
if_node.out_port(port_id).data.set_value(else_value)
else:
if_node.out_port(port_id).data.set_shape(else_shape)
else:
if then_contains_fake_outputs ^ else_contains_fake_outputs:
# if exactly one of the outputs is fake then use the other one
if_node.out_port(port_id).data.set_shape(then_shape if use_then_shape else else_shape)
else:
# find "intersection" which is equal to the dimension value if corresponding dimensions are equal
# and dynamic otherwise
assert len(then_shape) == len(else_shape), 'Ranks of "then" and "else" output tensors are ' \
'different for node {} for port {}'.format(node_name,
port_id)
output_shape = [d1 if is_fully_defined(d1) and is_fully_defined(d2) and d1 == d2 else
dynamic_dimension_value for d1, d2 in zip(then_shape, else_shape)]
if_node.out_port(port_id).data.set_shape(output_shape)
if not (then_shape == else_shape).all():
log.debug("If node {0} has dynamic output [{1}] because output shape from then_graph is {2} and "
"else_graph {3}".format(if_node.name, port_id, then_shape, else_shape))
if_node.out_port(port_id).data.set_shape(then_shape if use_then_shape else else_shape)
@staticmethod
def update_if_output_ports_type(if_node: Node):
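
A short sketch of the shape "intersection" rule introduced above: dimensions on which the then/else shapes agree are kept, all others become dynamic; the shapes below are purely illustrative:

import numpy as np

dynamic = np.ma.masked  # stand-in for an unknown dimension

then_shape = [1, 16, 128]
else_shape = [1, 16, 256]

output_shape = [d1 if d1 is not np.ma.masked and d2 is not np.ma.masked and d1 == d2 else dynamic
                for d1, d2 in zip(then_shape, else_shape)]
print(output_shape)  # [1, 16, masked]: the last dimension is dynamic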

View File

@@ -12,11 +12,11 @@ class LSTM(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': 'RNNSequence', # should be never emitted to IR; for debugging purposes
'op': __class__.op,
'op': self.op,
'blobs_wrb': False, # input blobs have three separate components W, R and B like in ONNX/LSTM
'has_num_directions': False, # if True, output shape has 4 dimensions; 3D otherwise
'direction': 'forward',
'infer': __class__.infer,
'infer': self.infer,
'multiplier': 4,
'gate_order': None,
'normalized': False,

View File

@@ -3,17 +3,16 @@
import numpy as np
from mo.front.common.partial_infer.utils import int64_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
class LookupTableInsert(Op):
'''
"""
In some models this operation has only control flow output edges and no data output edges.
Shape inference is still needed for these cases because it is executed before control flow edges are resolved.
The operation has a non-tensor output, so the output shape is empty.
'''
"""
enabled = False
op = 'LookupTableInsert'
@@ -42,4 +41,4 @@ class LookupTableInsert(Op):
# set output shape that must be empty
# since output is not a tensor
node.out_port(0).data.set_shape(int64_array([]))
node.out_port(0).data.set_shape([])

View File

@@ -5,7 +5,8 @@ import logging as log
import numpy as np
from mo.front.common.partial_infer.utils import assign_dims_to_weights, int64_array
from mo.front.common.partial_infer.utils import assign_dims_to_weights, int64_array, compatible_dims, compatible_shapes, \
shape_array, is_fully_defined, shape_delete, shape_insert
from mo.front.extractor import bool_to_str
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -62,14 +63,14 @@ class MatMul(Op):
if rank != 1 and ((i == 0 and transpose_a) or (i == 1 and transpose_b)):
input_shape[-2], input_shape[-1] = input_shape[-1], input_shape[-2]
if rank == 1:
input_shape = np.insert(input_shape, int(i == 1), 1)
input_shape = shape_insert(input_shape, int(i == 1), 1)
max_shape_length = max(input_shapes[0].size, input_shapes[1].size)
input_shape = np.insert(input_shape, 0, [1] * (max_shape_length - input_shape.size))
input_shape = shape_insert(input_shape, 0, [1] * (max_shape_length - input_shape.size))
transformed_shapes.append(input_shape)
A_shape = transformed_shapes[0]
B_shape = transformed_shapes[1]
A_shape = shape_array(transformed_shapes[0])
B_shape = shape_array(transformed_shapes[1])
assert A_shape.size == B_shape.size, \
"Shapes were not aligned by length for MatMul `{}`. Shapes: `{}`".format(node_name, transformed_shapes)
@@ -83,7 +84,7 @@ class MatMul(Op):
if B_shape[i] == 1:
B_shape[i] = A_shape[i]
assert np.array_equal(A_shape[:-2], B_shape[:-2]), \
assert compatible_shapes(A_shape[:-2], B_shape[:-2]), \
"MatMul input shapes are incorrect. BATCH_DIMs are not equal. Node: {}. Aligned shapes: {}" \
"".format(node_name, transformed_shapes)
@@ -98,11 +99,16 @@ class MatMul(Op):
"""
a_value = node.in_port(0).get_source().data.get_value()
b_value = node.in_port(1).get_source().data.get_value()
if a_value is not None and b_value is not None:
if is_fully_defined(a_value) and is_fully_defined(b_value):
if node.transpose_a:
a_value = transpose(a_value)
if node.transpose_b:
b_value = transpose(b_value)
# np.matmul does not work correctly with masked arrays, so need explicitly convert inputs to regular arrays
if isinstance(a_value, np.ma.masked_array):
a_value = a_value.filled()
if isinstance(b_value, np.ma.masked_array):
b_value = b_value.filled()
node.out_port(0).data.set_value(np.matmul(a_value, b_value))
@staticmethod
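
A brief illustration of the conversion above: fully defined masked inputs are unwrapped with filled() before calling np.matmul (toy values, not taken from any model):

import numpy as np

a = np.ma.masked_array([[1., 2.], [3., 4.]])   # fully defined: no masked entries
b = np.ma.masked_array([[5., 6.], [7., 8.]])

# mirror the conversion above: drop the masked-array wrapper before np.matmul
if isinstance(a, np.ma.masked_array):
    a = a.filled()
if isinstance(b, np.ma.masked_array):
    b = b.filled()

print(np.matmul(a, b))  # [[19. 22.] [43. 50.]]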
@@ -121,18 +127,18 @@ class MatMul(Op):
A_shape, B_shape = MatMul.shape_alignment(node)
log.debug('MatMul `{}` aligned input shapes: {}'.format(name, [A_shape, B_shape]))
assert A_shape[-1] == B_shape[-2], \
assert compatible_dims(A_shape[-1], B_shape[-2]), \
"MatMul input shapes are incorrect. COL_INDEX_DIMs are not equal. Node: {}. Shapes: {}" \
"".format(name, [A_shape, B_shape])
output_shape = np.concatenate((A_shape[:-1], B_shape[-1:]))
output_shape = np.ma.concatenate((A_shape[:-1], B_shape[-1:]))
if node.in_port(0).data.get_shape().size == 1:
assert output_shape[-2] == 1
output_shape = np.delete(output_shape, -2, 0)
assert compatible_dims(output_shape[-2], 1)
output_shape = shape_delete(output_shape, -2)
if node.in_port(1).data.get_shape().size == 1:
assert output_shape[-1] == 1
output_shape = np.delete(output_shape, -1, 0)
assert compatible_dims(output_shape[-1], 1)
output_shape = shape_delete(output_shape, -1)
node.out_port(0).data.set_shape(output_shape)
@@ -149,8 +155,8 @@ def transpose(value):
else:
return np.transpose(value, [*range(0, num_of_dims - 2), num_of_dims - 1, num_of_dims - 2])
# MatMul-like operation from frameworks
# MatMul-like operation from frameworks
class GemmONNX(Op):
"""
Represents Gemm operation from ONNX
@@ -163,7 +169,7 @@ class GemmONNX(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'op': __class__.op,
'op': self.op,
'transpose_a': False,
'transpose_b': False,
'alpha': 1,
@@ -182,9 +188,9 @@ class FullyConnected(Op):
def __init__(self, graph: Graph, attrs: dict):
super().__init__(graph, {
'op': __class__.op,
'type': __class__.op,
'infer': __class__.infer,
'op': self.op,
'type': self.op,
'infer': self.infer,
'in_ports_count': 3,
'out_ports_count': 1,
}, attrs)
@@ -210,22 +216,21 @@ class FullyConnected(Op):
'Incorrect FullyConnected input shapes. Node: {}. Shapes: {}'.format(name, [input_shape, weights_shape])
assert weights_shape.size == 2
out_size = node.soft_get('out-size')
assert weights_shape[0] == out_size, 'weights_shape={}, out-size={}'.format(weights_shape, out_size)
assert compatible_dims(weights_shape[0], out_size), \
'weights_shape={}, out-size={}'.format(weights_shape, out_size)
if 2 in connected_in_ports:
bias_value = node.in_port(2).data.get_value()
bias_shape = node.in_port(2).data.get_shape()
assert bias_shape is not None, 'Shape was not inferred for biases of FullyConnected {}'.format(name)
assert bias_value is not None, 'Value was not inferred for biases of FullyConnected {}'.format(name)
assert np.array_equal(bias_shape, [out_size]) or np.array_equal(bias_shape, [1, out_size]), \
assert compatible_shapes(bias_shape, [out_size]) or compatible_shapes(bias_shape, [1, out_size]), \
'Incorrect FullyConnected bias shape `{}` for node {}. `out-size`={}'.format(bias_shape, node, out_size)
out_shape = int64_array([*input_shape[:-1], out_size])
node.out_port(0).data.set_shape(out_shape)
node.out_port(0).data.set_shape([*input_shape[:-1], out_size])
# MatMul-like operations for IR V6
class Gemm(MatMul):
"""
Represents GEMM operation that is acceptable to appear in v6 IRs

View File

@@ -36,8 +36,8 @@ class ONNXResize11Op(Op):
return
assert (node.is_in_port_connected(0) and (node.is_in_port_connected(2) or node.is_in_port_connected(3))), \
"One of the scales or sizes inputs must be connected to Node {} with op {}.".format(node.soft_get("name", node.id),
node.op)
"One of the scales or sizes inputs must be connected to Node {} with op {}." \
"".format(node.soft_get("name", node.id), node.op)
assert node.coordinate_transformation_mode != 'tf_crop_and_resize', \
'Mode tf_crop_and_resize is not supported for op {} with name {}'.format(node.op,
@@ -57,6 +57,6 @@ class ONNXResize11Op(Op):
"Node {} with op {} has no value in input port 3".format(node.soft_get("name", node.id), node.op)
output_shape = input_shape.copy()
spatial_dimension_indices = range(2, len(input_shape))
output_shape[spatial_dimension_indices] = int64_array(sizes)[2:]
output_shape[spatial_dimension_indices] = sizes[2:]
node.out_port(0).data.set_shape(output_shape.copy())
node.out_port(0).data.set_shape(output_shape)

View File

@@ -3,7 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import mark_input_bins
from mo.front.common.partial_infer.utils import mark_input_bins, shape_array, shape_insert
from mo.graph.graph import Node, Graph, add_opoutput
from mo.ops.op import Op
@@ -14,11 +14,11 @@ class RNN(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': 'RNNSequence', # should be never emitted to IR; for debugging purposes
'op': __class__.op,
'op': self.op,
'blobs_wrb': False,
'has_num_directions': False,
'direction': 'forward',
'infer': __class__.infer,
'infer': self.infer,
'multiplier': 1,
'gate_order': np.array([0]), # Only one gate in this cell
'normalized': False,
@@ -101,10 +101,10 @@ def rnn_infer(node: Node, out_ports=None):
node.in_node(port).value = np.repeat(node.in_node(port).value, input_shape[i], axis=i)
node.in_node(port).shape[i] = input_shape[i]
out_shape = np.array([input_shape[node.sequence_dim], input_shape[node.batch_dim], node.hidden_size], dtype=np.int64)
out_shape = [input_shape[node.sequence_dim], input_shape[node.batch_dim], node.hidden_size]
if node.batch_dim == 0:
out_shape = np.array([input_shape[node.batch_dim], input_shape[node.sequence_dim], node.hidden_size], dtype=np.int64)
out_shape = [input_shape[node.batch_dim], input_shape[node.sequence_dim], node.hidden_size]
num_directions = 2 if node.direction in ['bidirectional'] else 1
if node.has_num_directions:
@@ -113,7 +113,7 @@ def rnn_infer(node: Node, out_ports=None):
out_shape[-1] *= num_directions
else:
# ONNX-like, insert extra dimension to output shape for num_directions
out_shape = np.insert(out_shape, 1, np.int64(num_directions))
out_shape = shape_insert(out_shape, 1, np.int64(num_directions))
# output 0 is required; create it if it doesn't exist
if 0 not in node.out_nodes():
@@ -129,9 +129,9 @@ def rnn_infer(node: Node, out_ports=None):
node.out_port(0).data.set_shape(out_shape)
# 3. Extra outputs for hidden/cell states shape calculations (optional)
state_size = np.array([input_shape[node.batch_dim], node.hidden_size], dtype=np.int64)
state_size = [input_shape[node.batch_dim], node.hidden_size]
if node.has_num_directions:
state_size = np.insert(state_size, 0, num_directions)
state_size = shape_insert(state_size, 0, num_directions)
if node.multilayers:
# For multilayer case state sizes from every layer will be concatenated by last axis
@@ -152,4 +152,4 @@ def rnn_infer(node: Node, out_ports=None):
add_opoutput(node.graph, data_node.id, 0, False)
else:
data_node = node.out_node(i)
data_node.shape = state_size.copy()
data_node.shape = shape_array(state_size)

View File

@@ -25,10 +25,9 @@ class RNNCell(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': __class__.op,
'op': __class__.op,
'version': 'experimental',
'infer': __class__.infer,
'type': self.op,
'op': self.op,
'infer': self.infer,
'in_ports_count': 4,
'out_ports_count': 1,
'version': 'opset3',

View File

@@ -3,7 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, is_fully_defined
from mo.front.extractor import bool_to_str
from mo.graph.graph import Node, Graph
from mo.graph.perm_inputs import PermuteInputs
@@ -23,6 +23,25 @@ reduce_map = {
}
def reduce_helper(func: callable, x: np.array, axis: tuple, keepdims: bool):
"""
Performs the reduction of input data tensor "x" over axis "axis" with function "func" and optionally removes reduced
dimensions (if "keepdims" is False). If the input tensor has dynamic values, all elements of the result tensor
are changed to be dynamic.
:param func: numpy reduce function
:param x: the data to perform reduction on
:param axis: the axis for reduction
:param keepdims: flag specifying whether to keep the reduced dimensions or not
:return: the result tensor
"""
result = func(x, axis=axis, keepdims=keepdims)
if is_fully_defined(x):
return result
else:
return np.ma.masked_array(result, mask=np.ones(result.shape, dtype=np.bool))
def reduce_infer(node: Node):
connected_in_ports = [port for port in node.in_ports().values() if not port.disconnected()]
assert len(connected_in_ports) == 2, \
@@ -47,7 +66,7 @@ def reduce_infer(node: Node):
in_value = in_data.get_value()
if in_value is not None:
value = reduce_map[node.op](in_value.copy(), axis=tuple(axis), keepdims=node.keep_dims)
value = reduce_helper(reduce_map[node.op], in_value.copy(), axis=tuple(axis), keepdims=node.keep_dims)
node.out_port(0).data.set_value(value)
else:
used_dims = np.zeros(len(in_shape), dtype=np.bool)
@@ -133,6 +152,7 @@ class ReduceL1(ReduceOp):
op_type = 'ReduceL1'
version = 'opset4'
class ReduceL2(ReduceOp):
op = 'ReduceL2'
op_type = 'ReduceL2'
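
A minimal sketch of what reduce_helper does when the input contains dynamic (masked) elements: the reduction is still computed, but the whole result is marked dynamic (toy data, with np.ma.is_masked used as a simplified stand-in for is_fully_defined):

import numpy as np

def reduce_helper_sketch(func, x, axis, keepdims):
    result = func(x, axis=axis, keepdims=keepdims)
    if not np.ma.is_masked(x):      # fully defined input: keep the concrete result
        return result
    # any dynamic input element makes every element of the result dynamic
    return np.ma.masked_array(result, mask=np.ones(result.shape, dtype=bool))

x = np.ma.masked_array([[1, 2, 3], [4, 5, 6]], mask=[[0, 0, 0], [0, 1, 0]])
print(reduce_helper_sketch(np.ma.sum, x, axis=1, keepdims=False))
# [-- --]: both row sums become dynamic because one input element is unknown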

View File

@@ -13,19 +13,20 @@ class Reverse(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
# 'type': __class__.op, # Internal MO primitive
'type': None,
'axis': None,
'op': __class__.op,
'op': self.op,
'in_ports_count': 2,
'out_ports_count': 1,
'infer': __class__.infer,
'infer': self.infer,
}
super().__init__(graph, mandatory_props, attrs)
@staticmethod
def infer(node):
input_data_shape = node.in_node(0).shape
assert input_data_shape is not None
input_shape = node.in_port(0).data.get_shape()
input_value = node.in_port(0).data.get_value()
assert input_shape is not None
if not node.has_valid('axis'):
assert 1 in node.in_nodes()
assert node.in_node(1).has_valid('value')
@@ -37,7 +38,7 @@ class Reverse(Op):
assert node.has_valid('axis')
assert len(node.out_nodes()) == 1
node.out_node().shape = input_data_shape.copy()
if node.in_node().value is not None:
node.out_node().value = np.flip(node.in_node().value, node['axis'])
assert np.array_equal(int64_array(node.out_node().value.shape), input_data_shape)
if input_value is not None:
node.out_port(0).data.set_value(np.flip(input_value, node.axis))
else:
node.out_port(0).data.set_shape(input_shape)

View File

@@ -1,8 +1,6 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from mo.front.common.layout import get_height_dim, get_width_dim
from mo.front.common.partial_infer.utils import int64_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -45,10 +43,9 @@ class TFResize(Op):
len_msg = "Op {} with name {} supports only resize with respect to height and width dimension simultaneously"
assert len(new_sizes_value) == 2, len_msg.format(node.op, node_name)
output_shape = int64_array(input_shape.copy())
output_shape = input_shape.copy()
layout = node.graph.graph['layout']
output_shape[get_height_dim(layout, input_rank)] = new_sizes_value[0]
output_shape[get_width_dim(layout, input_rank)] = new_sizes_value[1]
output_shape[1] = new_sizes_value[0]
output_shape[2] = new_sizes_value[1]
node.out_port(0).data.set_shape(output_shape)

View File

@@ -3,6 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -13,7 +14,7 @@ class TensorArray(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'infer': TensorArray.array_infer,
}
super().__init__(graph, mandatory_props, attrs)
@@ -35,9 +36,9 @@ class TensorArray(Op):
node.graph.node[out_node]['value'] = np.array(output_value)
output_shape = node.graph.node[out_node]['value'].shape
node.graph.node[out_node]['shape'] = np.array(output_shape)
node.graph.node[out_node]['shape'] = shape_array(output_shape)
node.graph.node[out_node]['element_shape'] = np.array(element_shape)
node.graph.node[out_node]['element_shape'] = shape_array(element_shape)
node.graph.node[out_node]['size'] = size.value
# 1 port flow
if 1 in node.out_nodes().keys():
@@ -45,4 +46,4 @@ class TensorArray(Op):
out_node = node.out_node(1).id
node.graph.node[out_node]['value'] = None if output_value is None else np.array(output_value)
node.graph.node[out_node]['shape'] = np.array(output_shape)
node.graph.node[out_node]['shape'] = shape_array(output_shape)

View File

@@ -1,8 +1,7 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from mo.front.common.partial_infer.utils import shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
from mo.utils.utils import symm_match_shapes
@@ -14,7 +13,7 @@ class TensorArrayGather(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'infer': TensorArrayGather.array_infer,
}
super().__init__(graph, mandatory_props, attrs)
@@ -24,8 +23,6 @@ class TensorArrayGather(Op):
assert len(node.in_nodes()) == 3
handle = node.in_node(0)
indices = node.in_node(1)
flow_in = node.in_node(2)
ta_node = Node(node.graph, str(handle.value))
@@ -34,16 +31,12 @@ class TensorArrayGather(Op):
else:
ta_node['element_shape'] = node.element_shape
data_shape = ta_node['element_shape']
assert -1 not in data_shape or data_shape.size == 2 and data_shape[0] == -1 and data_shape[1] != -1
assert ta_node.has_valid('size')
size = ta_node['size']
assert size > 0
output_shape = [size] + [data_shape[i] for i in range(len(data_shape))]
output_value = None
for _, out_node in node.graph.out_edges(node.id):
node.graph.node[out_node]['shape'] = np.array(output_shape)
node.graph.node[out_node]['value'] = None if output_value is None else np.array(output_value)
node.graph.node[out_node]['shape'] = shape_array(output_shape)
node.graph.node[out_node]['value'] = None

View File

@@ -1,8 +1,7 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from mo.front.common.partial_infer.utils import shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -13,7 +12,7 @@ class TensorArrayReader(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'infer': TensorArrayReader.array_infer,
}
super().__init__(graph, mandatory_props, attrs)
@@ -23,17 +22,10 @@ class TensorArrayReader(Op):
assert len(node.in_nodes()) == 3
handle = node.in_node(0)
index = node.in_node(1)
flow_in = node.in_node(2)
ta_node = Node(node.graph, str(handle.value))
assert ta_node.has_valid('element_shape')
data_shape = ta_node['element_shape']
output_shape = data_shape
output_value = None
for _, out_node in node.graph.out_edges(node.id):
node.graph.node[out_node]['shape'] = np.array(output_shape)
node.graph.node[out_node]['value'] = None if output_value is None else np.array(output_value)
node.graph.node[out_node]['shape'] = shape_array(ta_node['element_shape'])
node.graph.node[out_node]['value'] = None

View File

@@ -3,6 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
from mo.utils.utils import match_shapes
@@ -14,7 +15,7 @@ class TensorArrayScatter(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'infer': TensorArrayScatter.array_infer,
}
super().__init__(graph, mandatory_props, attrs)
@@ -22,7 +23,6 @@ class TensorArrayScatter(Op):
@staticmethod
def array_infer(node: Node):
handle = node.in_node(0)
indices = node.in_node(1)
value = node.in_node(2)
flow_in = node.in_node(3)
@@ -36,9 +36,7 @@ class TensorArrayScatter(Op):
# Assign element_shape anyway, because the original element_shape can contain -1
ta_node['element_shape'] = value.shape[1:]
output_shape = flow_in.shape
output_value = flow_in.value
#flow_out
for _, out_node in node.graph.out_edges(node.id):
node.graph.node[out_node]['shape'] = np.array(output_shape)
node.graph.node[out_node]['shape'] = shape_array(flow_in.shape)
node.graph.node[out_node]['value'] = None if output_value is None else np.array(output_value)

View File

@@ -3,6 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -13,7 +14,7 @@ class TensorArraySize(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'infer': TensorArraySize.array_infer,
}
super().__init__(graph, mandatory_props, attrs)
@@ -23,14 +24,12 @@ class TensorArraySize(Op):
assert len(node.in_nodes()) == 2
handle = node.in_node(0)
flow_in = node.in_node(1)
ta_node = Node(node.graph, str(handle.value))
assert ta_node.has_valid('size')
output_value = np.array(ta_node['size'])
output_shape = output_value.shape
for _, out_node in node.graph.out_edges(node.id):
node.graph.node[out_node]['shape'] = np.array(output_shape)
node.graph.node[out_node]['value'] = None if output_value is None else np.array(output_value)
node.graph.node[out_node]['shape'] = shape_array(output_value.shape)
node.graph.node[out_node]['value'] = output_value.copy()

View File

@@ -1,8 +1,7 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from mo.front.common.partial_infer.utils import shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
from mo.utils.utils import match_shapes
@@ -14,7 +13,7 @@ class TensorArrayWriter(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'infer': TensorArrayWriter.array_infer,
}
super().__init__(graph, mandatory_props, attrs)
@@ -36,10 +35,8 @@ class TensorArrayWriter(Op):
'Shapes are not compatible: {} and {}'.format(ta_node['element_shape'], value.shape)
ta_node['element_shape'] = value_shape
output_shape = flow_in.shape
output_value = flow_in.value
# flow_out
for _, out_node in node.graph.out_edges(node.id):
node.graph.node[out_node]['shape'] = np.array(output_shape)
node.graph.node[out_node]['value'] = None if output_value is None else np.array(output_value)
node.graph.node[out_node]['shape'] = shape_array(flow_in.shape)
node.graph.node[out_node]['value'] = None if output_value is None else output_value.copy()

View File

@@ -1,78 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
class AccumOp(Op):
op = 'Accum'
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': __class__.op,
'op': __class__.op,
'version': 'extension',
'top_height': 0,
'top_width': 0,
'size_divisible_by': 0,
'have_reference': 0,
'out_ports_count': 1,
'infer': AccumOp.accum_infer
}
super().__init__(graph, mandatory_props, attrs)
def supported_attrs(self):
return [
'top_height',
'top_width',
'size_divisible_by',
'have_reference'
]
@staticmethod
def accum_infer(node: Node):
batch = node.in_node(0).shape[0]
num_inputs = len(node.in_nodes())
if node.have_reference:
assert num_inputs >= 2, "Need at least two bottom blobs (one as reference)"
total_channels = 0
for i in range(num_inputs):
total_channels += node.in_node(i).shape[1]
assert node.in_node(i).shape[0] == batch, "All accumulated layers must have same number of images"
assert total_channels >= 1, "Accumulated layers must have some channels in total"
top_height_ = node.in_node(num_inputs - 1).shape[2] # height
top_width_ = node.in_node(num_inputs - 1).shape[3] # width
height_ = top_height_
width_ = top_width_
else:
max_height = -1
max_width = -1
total_channels = 0
for i in range(num_inputs):
total_channels += node.in_node(i).shape[1]
max_height = node.in_node(i).shape[2] if node.in_node(i).shape[2] > max_height else max_height
max_width = node.in_node(i).shape[3] if node.in_node(i).shape[3] > max_width else max_width
assert node.in_node(i).shape[0] == batch, "All accumulated layers must have same number of images"
assert total_channels >= 1, "Accumulated layers must have some channels in total"
if node.size_divisible_by:
sdb = node.size_divisible_by
top_height_ = int(np.ceil(max_height / sdb) * sdb)
top_width_ = int(np.ceil(max_width / sdb) * sdb)
else:
top_height_ = node.top_height
top_width_ = node.top_width
if top_height_ > max_height and top_width_ > max_width: # Layer can specify custom top size which is larger than default
height_ = top_height_
width_ = top_width_
else: # Otherwise maximum of bottom sizes will be used
height_ = max_height
width_ = max_width
channels_ = total_channels
node.out_node(0).shape = np.array([batch, channels_, height_, width_])

View File

@@ -35,22 +35,22 @@ class Activation(Op):
class Sigmoid(Activation):
op = 'Sigmoid'
operation = staticmethod(lambda x: 1 / (1 + np.exp(-x)))
operation = staticmethod(lambda x: 1 / (1 + np.ma.exp(-x)))
class Sin(Activation):
op = 'Sin'
operation = staticmethod(lambda x: np.sin(x))
operation = staticmethod(lambda x: np.ma.sin(x))
class Sinh(Activation):
op = 'Sinh'
operation = staticmethod(lambda x: np.sinh(x))
operation = staticmethod(lambda x: np.ma.sinh(x))
class Asin(Activation):
op = 'Asin'
operation = staticmethod(lambda x: np.arcsin(x))
operation = staticmethod(lambda x: np.ma.arcsin(x))
class Asinh(Activation):
@@ -61,44 +61,44 @@ class Asinh(Activation):
class Cos(Activation):
op = 'Cos'
operation = staticmethod(lambda x: np.cos(x))
operation = staticmethod(lambda x: np.ma.cos(x))
class Cosh(Activation):
op = 'Cosh'
operation = staticmethod(lambda x: np.cosh(x))
operation = staticmethod(lambda x: np.ma.cosh(x))
class Acos(Activation):
op = 'Acos'
operation = staticmethod(lambda x: np.arccos(x))
operation = staticmethod(lambda x: np.ma.arccos(x))
class Acosh(Activation):
op = 'Acosh'
version = 'opset4'
operation = staticmethod(lambda x: np.arccosh(x))
operation = staticmethod(lambda x: np.ma.arccosh(x))
class Tan(Activation):
op = 'Tan'
operation = staticmethod(lambda x: np.tan(x))
operation = staticmethod(lambda x: np.ma.tan(x))
class Tanh(Activation):
op = 'Tanh'
operation = staticmethod(lambda x: np.tanh(x))
operation = staticmethod(lambda x: np.ma.tanh(x))
class Atan(Activation):
op = 'Atan'
operation = staticmethod(lambda x: np.arctan(x))
operation = staticmethod(lambda x: np.ma.arctan(x))
class Atanh(Activation):
op = 'Atanh'
version = 'opset4'
operation = staticmethod(lambda x: np.arctanh(x))
operation = staticmethod(lambda x: np.ma.arctanh(x))
class ReLU6(AttributedClamp):
@@ -110,12 +110,12 @@ class ReLU6(AttributedClamp):
class Exp(Activation):
op = 'Exp'
operation = staticmethod(lambda x: np.exp(x))
operation = staticmethod(lambda x: np.ma.exp(x))
class ReLU(Activation):
op = 'ReLU'
operation = staticmethod(lambda x: np.maximum(0, x))
operation = staticmethod(lambda x: np.ma.maximum(0, x))
class Erf(Activation):
@@ -125,17 +125,17 @@ class Erf(Activation):
class Floor(Activation):
op = 'Floor'
operation = staticmethod(lambda x: np.floor(x))
operation = staticmethod(lambda x: np.ma.floor(x))
class Ceiling(Activation):
op = 'Ceiling'
operation = staticmethod(lambda x: np.ceil(x))
operation = staticmethod(lambda x: np.ma.ceil(x))
class Abs(Activation):
op = 'Abs'
operation = staticmethod(lambda x: np.abs(x))
operation = staticmethod(lambda x: np.ma.abs(x))
class Sign(Activation):
@@ -156,7 +156,7 @@ class Elu(Activation):
values = values.astype(float)
for index, x in np.ndenumerate(values):
if x < 0:
values[index] = alpha * (np.exp(x) - 1)
values[index] = alpha * (np.ma.exp(x) - 1)
return values
@classmethod
@@ -226,7 +226,7 @@ class LogicalNot(Activation):
not_attrs.update(attrs)
super().__init__(graph, not_attrs)
operation = staticmethod(lambda x: np.logical_not(x))
operation = staticmethod(lambda x: np.ma.logical_not(x))
@staticmethod
def type_infer(node: Node):
@@ -235,31 +235,31 @@ class LogicalNot(Activation):
class Log(Activation):
op = 'Log'
operation = staticmethod(lambda x: np.log(x))
operation = staticmethod(lambda x: np.ma.log(x))
class SoftPlus(Activation):
op = 'SoftPlus'
version = 'opset4'
operation = staticmethod(lambda x: np.log(np.exp(x) + 1.0))
operation = staticmethod(lambda x: np.ma.log(np.ma.exp(x) + 1.0))
class Mish(Activation):
op = 'Mish'
version = 'opset4'
operation = staticmethod(lambda x: x * np.tanh(np.log(np.exp(x) + 1.0)))
operation = staticmethod(lambda x: x * np.ma.tanh(np.ma.log(np.ma.exp(x) + 1.0)))
class HSwish(Activation):
op = 'HSwish'
version = 'opset4'
operation = staticmethod(lambda x: x * np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0)
operation = staticmethod(lambda x: x * np.ma.minimum(np.ma.maximum(x + 3.0, 0.0), 6.0) / 6.0)
class HSigmoid(Activation):
op = 'HSigmoid'
version = 'opset5'
operation = staticmethod(lambda x: np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0)
operation = staticmethod(lambda x: np.ma.minimum(np.ma.maximum(x + 3.0, 0.0), 6.0) / 6.0)
class Swish(Op):
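
The switch from np.* to the np.ma.* equivalents above keeps dynamic elements masked during constant folding; a quick illustration with arbitrary values:

import numpy as np

x = np.ma.masked_array([0.0, 1.0, 2.0], mask=[False, True, False])

print(np.ma.exp(x))    # [1.0 -- 7.389...]: the unknown element stays masked
print(np.exp(x.data))  # plain np.exp on the raw data would compute a value for the masked slot too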

View File

@@ -6,7 +6,6 @@ import logging as log
import numpy as np
from mo.front.caffe.extractors.utils import get_canonical_axis_index
from mo.front.common.partial_infer.utils import int64_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op, PermuteAttrs
@@ -36,7 +35,7 @@ def arg_ops_infer(node: Node):
if node.has_valid('axis'):
axis = get_canonical_axis_index(shape, node.axis)
node.axis = axis
out_shape = int64_array(shape)
out_shape = shape.copy()
out_shape[axis] = node.top_k
PermuteAttrs.create_permute_attrs(node, attrs=[('axis', 'input:0')])
else:

View File

@@ -19,7 +19,7 @@ class Assert(Op):
@staticmethod
def assert_infer(node: Node):
assert_value = node.in_node(0).value
node.out_node().value = assert_value
node.out_node().value = assert_value.copy()
node.out_node().shape = []
@staticmethod

View File

@@ -8,14 +8,15 @@ from mo.utils.utils import refer_to_faq_msg
class BoxNms(Op):
''' It is assumed that there is no equivalent of this op in IE.
'''
"""
It is assumed that there is no equivalent of this op in IE.
"""
op = '_contrib_box_nms'
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'coord_start': 2,
'force_suppress': False,
'id_index': 0,
@@ -23,7 +24,7 @@ class BoxNms(Op):
'score_index': 1,
'topk': 400,
'valid_thresh': 0.01,
'infer': __class__.infer
'infer': self.infer
}
super().__init__(graph, mandatory_props, attrs)

View File

@@ -15,8 +15,8 @@ class Bucketize(Op):
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'kind': 'op',
'type': __class__.op,
'op': __class__.op,
'type': self.op,
'op': self.op,
'version': 'opset3',
'type_infer': self.type_infer,
@@ -44,7 +44,8 @@ class Bucketize(Op):
node.out_port(0).set_data_type(np.int32)
else:
assert node.output_type in [np.int64, np.int32], \
'Bucketize `output_type` attribute must be int32 or int64, `{}` found'.format(np.dtype(node.output_type).name)
'Bucketize `output_type` attribute must be int32 or int64, `{}` found' \
''.format(np.dtype(node.output_type).name)
node.out_port(0).set_data_type(node.output_type)
@staticmethod
@@ -54,14 +55,11 @@ class Bucketize(Op):
"Attribute \"with_right_bound\" is not defined"
assert len(node.in_nodes()) == 2, \
"Incorrect number of inputs for {} node".format(node.id)
if node.get_opset() == "extension":
output_type = np.int32
else:
if node.get_opset() != "extension":
assert node.has_valid('output_type'), \
'`output_type` attribute is not set for Bucketize node `{}`'.format(node_name)
assert node.output_type in [np.int64, np.int32], \
'Bucketize `output_type` attribute must be int32 or int64, `{}` found'.format(np.dtype(node.output_type).name)
output_type = node.output_type
output_shape = node.in_port(0).data.get_shape()
node.out_port(0).data.set_shape(output_shape)

View File

@@ -3,6 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import is_fully_defined
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -14,15 +15,16 @@ class ConstantFill(Op):
so it is usually relevant to constant folding.
"""
op = 'ConstantFill'
enabled = False
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': None,
'op': __class__.op,
'op': self.op,
'input_as_shape': 1,
'in_ports_count': 1,
'out_ports_count': 1,
'infer': __class__.infer
'infer': self.infer
}
super().__init__(graph, mandatory_props, attrs)
@@ -38,8 +40,10 @@ class ConstantFill(Op):
assert node.fill_value is not None
assert node.input_as_shape
shape = node.in_node(0).value
shape = node.in_port(0).data.get_value()
assert shape is not None
node.out_node(0).value = np.full(shape, node.fill_value, np.float32)
node.out_node(0).shape = np.array(node.out_node(0).value.shape, dtype=np.int64)
if is_fully_defined(shape):
node.out_port(0).data.set_value(np.full(shape, node.fill_value, np.float32))
else:
node.out_port(0).data.set_shape(shape)

View File

@@ -1,66 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from math import ceil
# Concat infer : N - number of inputs to concat
# axis - dimension number for tensors concatenation
import numpy as np
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
class CorrelationOp(Op):
op = 'Correlation'
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': __class__.op,
'op': __class__.op,
'version': 'extension',
'in_ports_count': 1,
'out_ports_count': 1,
'infer': CorrelationOp.corr_infer
}
super().__init__(graph, mandatory_props, attrs)
def supported_attrs(self):
return [
'pad',
'kernel_size',
'max_displacement',
'stride_1',
'stride_2',
'single_direction',
'do_abs',
'correlation_type'
]
@staticmethod
def corr_infer(node: Node):
outn = node.out_node(0)
inn = node.in_node(0)
outn.shape = np.zeros(4, dtype=int)
outn.shape[0] = inn.shape[0]
bottomchannels = inn.shape[1]
paddedbottomheight = inn.shape[2]
paddedbottomwidth = inn.shape[3] + 2 * node.pad
kernel_radius_ = (node.kernel_size - 1) / 2;
border_size_ = node.max_displacement + kernel_radius_
outn.shape[3] = ceil((float)(paddedbottomwidth - border_size_ * 2) / node.stride_1)
outn.shape[2] = ceil((float)(paddedbottomheight - kernel_radius_ * 2) / node.stride_1)
neighborhood_grid_radius_ = node.max_displacement / node.stride_2
if node.single_direction != 0:
neighborhood_grid_width_ = neighborhood_grid_radius_ + 1
else:
neighborhood_grid_width_ = neighborhood_grid_radius_ * 2 + 1
outn.shape[1] = neighborhood_grid_width_ * neighborhood_grid_width_

View File

@@ -1,7 +1,7 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, compatible_dims
from mo.front.extractor import bool_to_str
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -44,21 +44,13 @@ class CTCGreedyDecoderOp(Op):
# check shapes of input tensors
assert len(logits_shape) == 3, \
'Incorrect rank of logits for {} node'.format(node_name)
if node.has_valid('use_mask_format') and node.use_mask_format is True:
# it is a case when CTCGreedyDecoder still uses an original format for sequence_length
assert len(sequence_mask_shape) == 1, \
'Incorrect rank of sequence length tensor for {} node'.format(node_name)
assert logits_shape[1] == sequence_mask_shape[0], \
'Batch dimensions of input tensors must be the same for {} node'.format(node_name)
else:
# it is a case when CTCGreedyDecoder uses a sequence mask
assert len(sequence_mask_shape) == 2, \
'Incorrect rank of sequence length tensor for {} node'.format(node_name)
assert logits_shape[1] == sequence_mask_shape[1], \
'Batch dimensions of input tensors must be the same for {} node'.format(node_name)
assert logits_shape[0] == sequence_mask_shape[0], \
'Time dimensions of input tensors must be the same for {} node'.format(node_name)
assert len(sequence_mask_shape) == 2, \
'Incorrect rank of sequence length tensor for {} node'.format(node_name)
assert compatible_dims(logits_shape[1], sequence_mask_shape[1]), \
'Batch dimensions of input tensors must be the same for {} node'.format(node_name)
assert compatible_dims(logits_shape[0], sequence_mask_shape[0]), \
'Time dimensions of input tensors must be the same for {} node'.format(node_name)
batch_size = logits_shape[1]
time_size = logits_shape[0]
node.out_port(0).data.set_shape(int64_array([batch_size, time_size, 1, 1]))
node.out_port(0).data.set_shape([batch_size, time_size, 1, 1])

View File

@@ -3,7 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, compatible_dims
from mo.front.extractor import bool_to_str
from mo.graph.graph import Node, Graph
from mo.middle.passes.convert_data_type import np_data_type_to_destination_type
@@ -68,12 +68,12 @@ class CTCGreedyDecoderSeqLenOp(Op):
assert len(sequence_len_shape) == 1, \
'Incorrect rank of sequence length tensor for {} node'.format(node_name)
assert logits_shape[0] == sequence_len_shape[0], \
assert compatible_dims(logits_shape[0], sequence_len_shape[0]), \
'Batch dimensions of input tensors must be the same for {} node'.format(node_name)
batch_size = logits_shape[0]
time_size = logits_shape[1]
if node.is_out_port_connected(0):
node.out_port(0).data.set_shape(int64_array([batch_size, time_size]))
node.out_port(0).data.set_shape([batch_size, time_size])
if node.is_out_port_connected(1):
node.out_port(1).data.set_shape(int64_array([batch_size]))
node.out_port(1).data.set_shape([batch_size])

View File

@@ -3,7 +3,7 @@
import numpy as np
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, compatible_dims
from mo.front.extractor import bool_to_str
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
@@ -73,11 +73,12 @@ class CTCLoss(Op):
assert len(logits_shape) == 3 and len(logit_length_shape) == 1 and len(labels_shape) == 2\
and len(label_length_shape) == 1 and len(blank_index_shape) == 0, \
'Incorrect rank of some input tensor for {} node'.format(node_name)
assert logits_shape[0] == logit_length_shape[0] and logits_shape[0] == labels_shape[0]\
and logits_shape[0] == label_length_shape[0], \
assert compatible_dims(logits_shape[0], logit_length_shape[0]) and \
compatible_dims(logits_shape[0], labels_shape[0]) and \
compatible_dims(logits_shape[0], label_length_shape[0]), \
'Batch dimensions of input tensors must be the same for {} node'.format(node_name)
assert logits_shape[1] == labels_shape[1], \
assert compatible_dims(logits_shape[1], labels_shape[1]), \
'Time dimensions of input tensors must be the same for {} node'.format(node_name)
batch_size = logits_shape[0]
node.out_port(0).data.set_shape(int64_array([batch_size]))
node.out_port(0).data.set_shape([batch_size])

View File

@@ -1,53 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Concat infer : N - number of inputs to concat
# axis - dimension number for tensors concatenation
import copy
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
class DataAugmentationOp(Op):
op = 'DataAugmentation'
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'type': __class__.op,
'op': __class__.op,
'version': 'extension',
'in_ports_count': 1,
'out_ports_count': 1,
'infer': DataAugmentationOp.data_augmentation_infer
}
super().__init__(graph, mandatory_props, attrs)
def supported_attrs(self):
return [
'crop_width',
'crop_height',
'write_augmented',
'max_multiplier',
'augment_during_test',
'recompute_mean',
'write_mean',
'mean_per_pixel',
'mean',
'mode',
'bottomwidth',
'bottomheight',
'num',
'chromatic_eigvec'
]
@staticmethod
def data_augmentation_infer(node: Node):
outn = node.out_node(0)
inn = node.in_node(0)
outn.shape = copy.copy(inn.shape)
if node.crop_width != 0 or node.crop_height != 0:
outn.shape[2] = node.crop_height
outn.shape[3] = node.crop_width

View File

@@ -4,7 +4,7 @@
import numpy as np
from mo.front.common.layout import shape_for_layout, get_height_dim, get_batch_dim, get_features_dim, get_width_dim
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import dynamic_dimension, is_fully_defined
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
from mo.utils.error import Error
@@ -33,7 +33,7 @@ class DepthToSpaceOp(Op):
@staticmethod
def infer(node: Node):
in_shape = node.in_node().shape
in_shape = node.in_port(0).data.get_shape()
if in_shape.size != 4:
raise Error('TensorFlow DepthToSpace operation is supported for 4D \'NHWC\' input layout only. '
'Current input shape is \'{}\''.format(in_shape))
@@ -46,16 +46,18 @@ class DepthToSpaceOp(Op):
C = in_shape[get_features_dim(layout, 4)]
block_size = node['block_size']
if C % (block_size ** 2):
if C is not dynamic_dimension and C % (block_size ** 2):
raise Error('Feature dimensions of input tensor of DepthToSpace operation have to be divisible by square '
'of DepthToSpace \'block_size\' parameter. Input tensor shape = {}. Feature dimension = {}. '
'block_size = {}'.format(in_shape, C, block_size))
out_shape = shape_for_layout(layout,
batch=N,
features=int(C / (block_size ** 2)),
height=int(H * block_size),
width=int(W * block_size))
features=C // (block_size * block_size),
height=H * block_size,
width=W * block_size)
assert np.prod(in_shape) == np.prod(out_shape)
node.out_node().shape = int64_array(out_shape)
if is_fully_defined(in_shape) and is_fully_defined(out_shape) and np.prod(in_shape) != np.prod(out_shape):
raise Error('Number of input elements "{}" is not equal to number of output elements "{}" for node "{}"'
''.format(in_shape, out_shape, node.soft_get('name', node.id)))
node.out_port(0).data.set_shape(out_shape)
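
For reference, a worked example of the output shape formula used above, assuming an NHWC input and illustrative values:

# hypothetical NHWC shape, block_size = 2
N, H, W, C = 1, 4, 6, 8
block_size = 2
out_shape = [N, H * block_size, W * block_size, C // (block_size * block_size)]
print(out_shape)  # [1, 8, 12, 2]; element count preserved: 1*4*6*8 == 1*8*12*2 == 192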

View File

@@ -3,7 +3,6 @@
import numpy as np
from mo.front.common.partial_infer.utils import int64_array
from mo.ops.op import Op
@@ -13,10 +12,10 @@ class ExperimentalDetectronDetectionOutput(Op):
def __init__(self, graph, attrs):
mandatory_props = dict(
type=__class__.op,
op=__class__.op,
type=self.op,
op=self.op,
version='opset6',
infer=__class__.infer,
infer=self.infer,
type_infer=self.type_infer,
in_ports_count=4,
out_ports_count=3,
@@ -39,13 +38,13 @@ class ExperimentalDetectronDetectionOutput(Op):
def infer(node):
rois_num = node.max_detections_per_image
# boxes
node.out_node(0).shape = np.array([rois_num, 4], dtype=np.int64)
node.out_port(0).data.set_shape([rois_num, 4])
# classes, scores, batch indices
# We use range(1, 1 + max(node.out_ports().keys())) instead of range(1, 3), because there are incorrectly
# generated models where ExperimentalDetectronDetectionOutput has 4 outputs.
for port_ind in range(1, 1 + max(node.out_ports().keys())):
if not node.out_port(port_ind).disconnected():
node.out_port(port_ind).data.set_shape(int64_array([rois_num]))
node.out_port(port_ind).data.set_shape([rois_num])
@staticmethod
def type_infer(node):

View File

@@ -41,7 +41,7 @@ class FFTBase(Op):
assert (input_rank - 1) not in axes, '(I)DFT node {} axes cannot contain the last axis'.format(node_name)
assert len(set(axes)) == len(axes), '(I)DFT node {} axes must be unique.'.format(node_name)
output_shape = int64_array(src_shape)
output_shape = src_shape.copy()
if node.is_in_port_connected(2):
signal_size = FFTBase.get_signal_size(node)
signal_size = FFTBase.canonicalize_signal_size(signal_size, axes, src_shape)

View File

@@ -5,7 +5,7 @@ import re
import numpy as np
from mo.front.common.partial_infer.utils import int64_array
from mo.front.common.partial_infer.utils import int64_array, shape_array
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
from mo.utils.broadcasting import bi_directional_shape_broadcasting
@@ -240,7 +240,7 @@ class Einsum(Op):
dim_ind += num_broadcasted_dims
else:
dim_size = input_shape[dim_ind]
sub_shape = int64_array([dim_size])
sub_shape = shape_array([dim_size])
assert label not in label_to_shape.keys() or np.array_equal(label_to_shape[label], sub_shape), \
"Sizes of dimensions with the same label of Einsum node {} " \
"must be compatible".format(node_name)
@@ -249,12 +249,12 @@ class Einsum(Op):
label_ind += 1
# generate output shape based on the output subscript
output_shape = int64_array([])
output_shape = shape_array([])
labels = Einsum.extract_subscript_labels(node_name, output_subscript)
for label in labels:
assert label in label_to_shape.keys(), "The label in the output subscript must appear" \
" in input subscripts in equation {} " \
"of Einsum node {}".format(equation, node_name)
output_shape = np.concatenate((output_shape, label_to_shape[label]))
output_shape = np.ma.concatenate((output_shape, label_to_shape[label]))
node.out_port(0).data.set_shape(output_shape)

View File

@@ -147,49 +147,49 @@ class LogicalElementwise(Elementwise):
 class Greater(LogicalElementwise):
     op = 'Greater'
     op_type = 'Greater'
-    operation = staticmethod(lambda a, b: a > b)
+    operation = staticmethod(lambda a, b: np.ma.greater(a, b))


 class GreaterEqual(LogicalElementwise):
     op = 'GreaterEqual'
     op_type = 'GreaterEqual'
-    operation = staticmethod(lambda a, b: a >= b)
+    operation = staticmethod(lambda a, b: np.ma.greater_equal(a, b))


 class Less(LogicalElementwise):
     op = 'Less'
     op_type = 'Less'
-    operation = staticmethod(lambda a, b: a < b)
+    operation = staticmethod(lambda a, b: np.ma.less(a, b))


 class LessEqual(LogicalElementwise):
     op = 'LessEqual'
     op_type = 'LessEqual'
-    operation = staticmethod(lambda a, b: a <= b)
+    operation = staticmethod(lambda a, b: np.ma.less_equal(a, b))


 class Equal(LogicalElementwise):
     op = 'Equal'
     op_type = 'Equal'
-    operation = staticmethod(lambda a, b: a == b)
+    operation = staticmethod(lambda a, b: np.ma.equal(a, b))


 class NotEqual(LogicalElementwise):
     op = 'NotEqual'
     op_type = 'NotEqual'
-    operation = staticmethod(lambda a, b: a != b)
+    operation = staticmethod(lambda a, b: np.ma.not_equal(a, b))


 class Maximum(Elementwise):
     op = 'Maximum'
     op_type = 'Maximum'
-    operation = staticmethod(lambda a, b: np.maximum(a, b))
+    operation = staticmethod(lambda a, b: np.ma.maximum(a, b))


 class Minimum(Elementwise):
     op = 'Minimum'
     op_type = 'Minimum'
-    operation = staticmethod(lambda a, b: np.minimum(a, b))
+    operation = staticmethod(lambda a, b: np.ma.minimum(a, b))


 class Round(UnaryElementwise):
@@ -218,36 +218,42 @@ class Round(UnaryElementwise):
                                                                        node.soft_get('mode'))
         if node.mode == 'half_away_from_zero':
             mask = (a >= 0)
-            out = np.empty_like(a)
-            out[mask] = np.floor(a[mask] + 0.5)
-            out[~mask] = np.ceil(a[~mask] - 0.5)
+            out = np.ma.empty_like(a)
+            out[mask] = np.ma.floor(a[mask] + 0.5)
+            out[~mask] = np.ma.ceil(a[~mask] - 0.5)
         else:
-            out = np.round(a)
+            out = np.ma.round(a)
         node.out_port(0).data.set_value(out)


 class LogicalOr(LogicalElementwise):
     op = 'LogicalOr'
     op_type = 'LogicalOr'
-    operation = staticmethod(lambda a, b: np.logical_or(a, b))
+    operation = staticmethod(lambda a, b: np.ma.logical_or(a, b))


 class LogicalXor(Elementwise):
     op = 'LogicalXor'
     op_type = 'LogicalXor'
-    operation = staticmethod(lambda a, b: np.logical_xor(a, b))
+    operation = staticmethod(lambda a, b: np.ma.logical_xor(a, b))


 class LogicalAnd(LogicalElementwise):
     op = 'LogicalAnd'
     op_type = 'LogicalAnd'
-    operation = staticmethod(lambda a, b: np.logical_and(a, b))
+    operation = staticmethod(lambda a, b: np.ma.logical_and(a, b))


 class FloorMod(Elementwise):
     op = 'FloorMod'
     op_type = 'FloorMod'
-    operation = staticmethod(lambda a, b: a % b)
+    operation = staticmethod(lambda a, b: np.ma.fmod(a, b))
+
+
+class Mod(Elementwise):
+    op = 'Mod'
+    op_type = 'Mod'
+    operation = staticmethod(lambda a, b: np.ma.mod(a, b))


 class Negative(UnaryElementwise):
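
Note on moving the element-wise lambdas to their `np.ma` counterparts: the masked-array versions keep "unknown" elements unknown during value propagation instead of producing a misleading concrete result. A minimal sketch, assuming dynamic values are the masked entries:

```python
import numpy as np

a = np.ma.masked_array([3, 5], mask=[False, True])   # second value is dynamic
b = np.ma.masked_array([4, 4], mask=[False, False])

res = np.ma.greater(a, b)
assert not res[0]               # 3 > 4 is a known False
assert res[1] is np.ma.masked   # comparison against a dynamic value stays unknown
```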


@@ -52,7 +52,7 @@ class EmbeddingBagOffsetsSum(EmbeddingBagBase):
         assert offsets_shape is not None and len(offsets_shape) == 1,\
             "Rank of the offsets in EmbeddingBagOffsetsSum should be equal to 1 for node: `{}`".format(name)

-        node.out_port(0).data.set_shape(np.concatenate((offsets_shape[:1], weights_shape[1:])))
+        node.out_port(0).data.set_shape(np.ma.concatenate((offsets_shape[:1], weights_shape[1:])))


 class EmbeddingBagPackedSum(EmbeddingBagBase):
@@ -74,7 +74,7 @@ class EmbeddingBagPackedSum(EmbeddingBagBase):
             "EmbeddingBagPackedSum should have at least 2D weights for node: `{}`".format(name)

         input_shape = node.in_port(1).data.get_shape()

-        node.out_port(0).data.set_shape(np.concatenate((input_shape[:1], weights_shape[1:])))
+        node.out_port(0).data.set_shape(np.ma.concatenate((input_shape[:1], weights_shape[1:])))


 class EmbeddingSegmentsSum(EmbeddingBagBase):
@@ -101,5 +101,5 @@ class EmbeddingSegmentsSum(EmbeddingBagBase):
         num_segments = node.in_port(3).data.get_value()
         assert num_segments is not None, "EmbeddingSegmentsSum should have a constant num_segments provided, but it " \
                                          "doesn't for node: `{}`.".format(name)
-        output_shape = np.concatenate(([num_segments], weights_shape[1:]))
+        output_shape = np.ma.concatenate(([num_segments], weights_shape[1:]))
         node.out_port(0).data.set_shape(output_shape)


@@ -4,7 +4,7 @@
 import numpy as np

 from mo.front.caffe.extractors.utils import get_canonical_axis_index
-from mo.front.common.partial_infer.utils import int64_array
+from mo.front.common.partial_infer.utils import int64_array, is_fully_defined
 from mo.graph.graph import Node, Graph
 from mo.ops.op import Op, PermuteAttrs
 from mo.utils.error import Error
@@ -67,7 +67,7 @@ class Gather(Op):
         axis = axis + len(data_shape) if axis < 0 else axis
         batch_dims = batch_dims + len(indices_shape) if batch_dims < 0 else batch_dims

-        assert np.array_equal(data_shape[:batch_dims], indices_shape[:batch_dims]), \
+        assert np.ma.allequal(data_shape[:batch_dims], indices_shape[:batch_dims]), \
             'data and indices inputs must have equal first dimensions until batch_dims'

         assert batch_dims <= axis, \
@@ -82,16 +82,17 @@ class Gather(Op):
         data_value = node.in_port(0).data.get_value()
         indices_value = node.in_port(1).data.get_value()

-        if data_value is not None and indices_value is not None:
+        if data_value is not None and indices_value is not None and is_fully_defined(indices_value):
             if batch_dims == 0:
-                node.out_port(0).data.set_value(np.take(data_value, indices_value, axis))
+                node.out_port(0).data.set_value(np.ma.take(data_value, indices_value, axis))
             else:
                 out_value = np.empty(out_shape)
                 for batch_idx in np.ndindex(tuple(batch_dims_range)):
-                    out_value[batch_idx] = np.take(data_value[batch_idx], indices_value[batch_idx], axis - batch_dims)
+                    out_value[batch_idx] = np.ma.take(data_value[batch_idx], indices_value[batch_idx],
+                                                      axis - batch_dims)
                 node.out_port(0).data.set_value(out_value)
         else:
-            node.out_port(0).data.set_shape(int64_array(out_shape))
+            node.out_port(0).data.set_shape(out_shape)


 class AttributedGather(Op):
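
Note on the new `is_fully_defined(indices_value)` guard: Gather is only constant-folded when the indices contain no dynamic elements. A rough, hypothetical re-implementation of what the helper from `mo.front.common.partial_infer.utils` is assumed to check (the sketch name is illustrative, not MO code):

```python
import numpy as np

def is_fully_defined_sketch(value) -> bool:
    # a tensor is "fully defined" if it exists and has no masked (dynamic) elements
    return value is not None and not np.ma.is_masked(np.ma.masked_array(value))

assert is_fully_defined_sketch(np.array([0, 2]))
assert not is_fully_defined_sketch(np.ma.masked_array([0, 2], mask=[False, True]))
assert not is_fully_defined_sketch(None)
```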


@@ -3,7 +3,8 @@

 import numpy as np

-from mo.front.common.partial_infer.utils import int64_array
+from mo.front.common.partial_infer.utils import int64_array, is_fully_defined, dynamic_dimension_value, \
+    compatible_dims
 from mo.graph.graph import Node, Graph
 from mo.ops.op import Op
@@ -47,7 +48,7 @@ class GatherND(Op):

         # check that batch dimensions of data and indices are the same
         for batch_dim in range(batch_dims):
-            assert data_shape[batch_dim] == indices_shape[batch_dim], \
+            assert compatible_dims(data_shape[batch_dim], indices_shape[batch_dim]), \
                 "The dimension {} for data and indices tensors must be the same".format(batch_dim)

         # check ranks of input tensors
@@ -57,13 +58,19 @@ class GatherND(Op):
             "Length of a tuple with indices must not exceed a rank of data tensor excluding batch dimensions"

         # compute output shape
-        number_batches = [np.prod(data_shape[:batch_dims]).tolist()] if batch_dims > 0 else list()
+        if batch_dims > 0:
+            if is_fully_defined(data_shape[:batch_dims]):
+                batch = [np.prod(data_shape[:batch_dims]).tolist()]
+            else:
+                batch = [dynamic_dimension_value]
+        else:
+            batch = []
         slice_shape = list(data_shape[(batch_dims + indices_shape[-1]):])
-        output_shape = number_batches + list(indices_shape[batch_dims:-1]) + slice_shape
-        node.out_port(0).data.set_shape(int64_array(output_shape))
+        output_shape = batch + list(indices_shape[batch_dims:-1]) + slice_shape
+        node.out_port(0).data.set_shape(output_shape)

         # compute output value if all input values are defined
-        if data_value is not None and indices_value is not None:
+        if is_fully_defined(indices_value) and is_fully_defined(data_value):
             output_value = np.zeros(output_shape, dtype=data_value.dtype)
             if batch_dims == 0:
                 output_indices_range = int64_array(indices_shape[:-1])
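
Note on the new batch-dimension handling in GatherND: the batch dims are flattened into a single dimension, and if any of them is dynamic their product is unknown, so the flattened dimension must be dynamic as well. A hedged, standalone sketch (function and constant names are illustrative, not MO code):

```python
import numpy as np

DYNAMIC = np.ma.masked  # stand-in for dynamic_dimension_value

def flattened_batch(data_shape, batch_dims):
    batch = data_shape[:batch_dims]
    if np.ma.is_masked(batch):            # at least one batch dim is dynamic
        return [DYNAMIC]
    return [int(np.prod(batch))] if batch_dims > 0 else []

static = np.ma.masked_array([2, 3, 10, 4], mask=[False] * 4)
dynamic = np.ma.masked_array([2, 3, 10, 4], mask=[False, True, False, False])

assert flattened_batch(static, 2) == [6]
assert flattened_batch(dynamic, 2)[0] is DYNAMIC
assert flattened_batch(static, 0) == []
```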


@@ -5,7 +5,7 @@ import math

 import numpy as np

-from mo.front.common.partial_infer.utils import int64_array
+from mo.front.common.partial_infer.utils import int64_array, dynamic_dimension, dynamic_dimension_value
 from mo.front.extractor import bool_to_str
 from mo.graph.graph import Node, Graph
 from mo.graph.perm_inputs import PermuteInputs
@@ -46,7 +46,10 @@ def infer_for_opset4(node: Node):
         scales = node.in_port(2).data.get_value()
         assert scales is not None
         for i, axis in enumerate(axes):
-            output_shape[axis] = math.floor(scales[i] * output_shape[axis] + 1.0e-5)
+            if output_shape[axis] is not dynamic_dimension and scales[i] is not dynamic_dimension:
+                output_shape[axis] = math.floor(scales[i] * output_shape[axis] + 1.0e-5)
+            else:
+                output_shape[axis] = dynamic_dimension_value

     if node.is_in_port_connected(3):
         PermuteInputs().set_input_permutation(node.in_node(3), node, 'input:0', 'axis')
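
Note on the guarded scale computation: a concrete output size is only computed when both the input dimension and the scale are static; otherwise the dimension stays dynamic. A small sketch, assuming `dynamic_dimension` is the masked constant (the function name is illustrative):

```python
import math
import numpy as np

DYNAMIC = np.ma.masked  # stand-in for the dynamic_dimension sentinel

def scaled_dim(dim, scale):
    # only produce a concrete size when both operands are static
    if dim is DYNAMIC or scale is DYNAMIC:
        return DYNAMIC
    return math.floor(scale * dim + 1.0e-5)

assert scaled_dim(10, 0.5) == 5
assert scaled_dim(DYNAMIC, 0.5) is DYNAMIC
```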


@@ -6,7 +6,7 @@ import logging as log
 import numpy as np

 from extensions.ops.tensor_iterator import TensorIterator
-from mo.front.common.partial_infer.utils import int64_array
+from mo.front.common.partial_infer.utils import shape_array, is_fully_defined, dynamic_dimension_value
 from mo.graph.graph import Node, Graph
 from mo.middle.passes.fusing.helpers import common_bfs
 from mo.middle.passes.infer import partial_infer
@@ -99,7 +99,7 @@ class Loop(TensorIterator):
             if body_node is not None:
                 assert body_node.soft_get('type') == 'Parameter'

-                input_shape = int64_array([])  # this is a current iteration number input shape
+                input_shape = shape_array([])  # this is a current iteration number input shape
                 loop_port_idx = record['external_port_id']
                 if loop_port_idx != -1:
                     input_shape = loop_node.in_port(loop_port_idx).get_connection().get_source().data.get_shape()
@@ -135,12 +135,7 @@ class Loop(TensorIterator):
                     assert output_shape[concat_axis] == 1, 'Dimension for concatenation is not equal to 1 for scan ' \
                                                            'output for Loop node "{}" for loop output port "{}"'.\
                         format(loop_name, loop_port_idx)
-                    num_iters = Loop.iterations_count(loop_node)
-                    if num_iters is None:
-                        log.error('Dynamic number of iterations for Loop node "{}". Consider number to be 1 to be able'
-                                  ' to generate the IR.'.format(loop_name), extra={'is_warning': True})
-                        num_iters = 1
-                    output_shape[concat_axis] = num_iters
+                    output_shape[concat_axis] = Loop.iterations_count(loop_node)

                 # MO does not support evaluation of Loop scan outputs with const values
                 if concat_axis is None and output_value is not None:
                     loop_node.out_port(loop_port_idx).data.set_value(output_value)
@@ -153,22 +148,25 @@ class Loop(TensorIterator):
         Try to determine the number of loop iterations. If we detect that the number is dynamic then return None.

         :param loop_node: Loop operation node
-        :return: number of iterations or None if the number depends on runtime values.
+        :return: number of iterations or dynamic_dimensions if the number depends on runtime values.
         """
         assert loop_node.soft_get('type') == 'Loop'

         if loop_node.is_in_port_connected(1):
             execution_condition = loop_node.in_port(1).data.get_value()
-            if execution_condition is None:  # dynamic execution condition
-                return None
+            if not is_fully_defined(execution_condition):  # dynamic execution condition
+                return dynamic_dimension_value
             execution_condition = execution_condition.item()
             if not execution_condition:  # 0 iterations
                 return 0

         num_iterations = loop_node.in_port(0).data.get_value()
+        if not is_fully_defined(num_iterations):
+            return dynamic_dimension_value
         if num_iterations is not None:
             num_iterations = num_iterations.item(0)
-            if num_iterations < 0:
-                return None
+            # in some ONNX models the num_iterations input is equal to max(int64) meaning dynamic number of iterations
+            if num_iterations < 0 or num_iterations == np.iinfo(np.int64).max:
+                return dynamic_dimension_value
         return num_iterations
@@ -511,7 +509,8 @@ class Loop(TensorIterator):
                 port_to_remove = port_map[record_id_to_remove]['external_port_id']
                 if port_to_remove != -1:
                     if dir == 'in':
-                        if port_to_remove not in [0, 1] and port_to_remove in loop_node.in_ports().keys():  # input port 0 and 1 are mandatory for the Loop node
+                        # input port 0 and 1 are mandatory for the Loop node
+                        if port_to_remove not in [0, 1] and port_to_remove in loop_node.in_ports().keys():
                             loop_node.delete_input_port(port_to_remove)
                     elif dir == 'out' and port_to_remove in loop_node.out_ports():
                         loop_node.delete_output_port(port_to_remove)
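
Note on the reworked `iterations_count`: a negative trip count and the ONNX convention of passing `max(int64)` both mean the iteration count is not known statically, so both now map to a dynamic value instead of forcing the count to 1. A standalone sketch of that normalization (names are illustrative, not MO code):

```python
import numpy as np

DYNAMIC = np.ma.masked  # stand-in for dynamic_dimension_value

def normalize_trip_count(num_iterations: int):
    # negative values and the ONNX "INT64_MAX" convention both mean the
    # iteration count is only known at runtime
    if num_iterations < 0 or num_iterations == np.iinfo(np.int64).max:
        return DYNAMIC
    return num_iterations

assert normalize_trip_count(10) == 10
assert normalize_trip_count(-1) is DYNAMIC
assert normalize_trip_count(np.iinfo(np.int64).max) is DYNAMIC
```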


@@ -1,7 +1,7 @@
 # Copyright (C) 2018-2021 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0

-from mo.front.common.partial_infer.utils import mark_input_bins
+from mo.front.common.partial_infer.utils import mark_input_bins, compatible_dims
 from mo.graph.graph import Node, Graph
 from mo.ops.op import Op
 from mo.utils.error import Error
@@ -27,10 +27,10 @@ class LSTMCell(Op):
     def __init__(self, graph: Graph, attrs: dict):
         mandatory_props = {
-            'type': __class__.op,
-            'op': __class__.op,
+            'type': self.op,
+            'op': self.op,
             'version': 'opset4',
-            'infer': __class__.infer,
+            'infer': self.infer,
             'in_ports_count': 5,
             'out_ports_count': 2,
             'wr_input_id': 3,
@@ -85,4 +85,6 @@ class LSTMCell(Op):
         input_shape = node.in_node(0).shape
         assert input_shape is not None
-        assert hidden_shape[0] == cell_shape[0] == input_shape[0], 'States are not broadcastable by batch'
+        assert compatible_dims(hidden_shape[0], cell_shape[0]) and \
+               compatible_dims(cell_shape[0], input_shape[0]), 'States are not broadcast-able by batch for node {}' \
+                                                               ''.format(node.soft_get('name', node.id))
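
Note on `compatible_dims`: the assert now tolerates dynamic batch dimensions. A rough sketch of the semantics the helper is assumed to have, with an illustrative name (two dims match if they are equal or if at least one of them is dynamic):

```python
import numpy as np

def compatible_dims_sketch(d1, d2) -> bool:
    # dims match when equal or when at least one of them is dynamic (masked)
    if d1 is np.ma.masked or d2 is np.ma.masked:
        return True
    return d1 == d2

assert compatible_dims_sketch(8, 8)
assert compatible_dims_sketch(np.ma.masked, 8)
assert not compatible_dims_sketch(8, 16)
```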


@@ -3,7 +3,7 @@

 import numpy as np

-from mo.front.common.partial_infer.utils import mark_input_bins
+from mo.front.common.partial_infer.utils import mark_input_bins, shape_array, shape_insert
 from mo.graph.graph import Node, add_opoutput, Graph
 from mo.ops.op import Op
@@ -31,13 +31,13 @@ class LSTMSequence(Op):
     def __init__(self, graph: Graph, attrs: dict):
         mandatory_props = {
-            'type': '__LSTMSequence',  # should be never emitted to IR; for debugging purposes
-            'op': __class__.op,
+            'type': None,  # should be never emitted to IR; for debugging purposes
+            'op': self.op,
             'blobs_wrb': False,
             'has_num_directions': False,
             'direction': 'forward',
             'num_layers': 1,
-            'infer': __class__.infer,
+            'infer': self.infer,
             'blob_bidirectional_split': lambda node: (
                 LSTMSequence.split_helper(node, 0, 'forward'),
                 LSTMSequence.split_helper(node, 1, 'reverse')
@@ -96,7 +96,7 @@ class LSTMSequence(Op):
                 node.in_node(port).value = np.repeat(node.in_node(port).value, input_shape[i], axis=i)
                 node.in_node(port).shape[i] = input_shape[i]

-        out_shape = np.array([input_shape[node.sequence_dim], input_shape[node.batch_dim], node.hidden_size], dtype=np.int64)
+        out_shape = shape_array([input_shape[node.sequence_dim], input_shape[node.batch_dim], node.hidden_size])
         assert not node.has_num_directions or node.sequence_dim == 0, \
             'If has_num_directions == True, then node.sequence_dim should be equal 0, but it is {}'.format(
                 node.sequence_dim)
@@ -104,12 +104,12 @@ class LSTMSequence(Op):
         num_layers = node.num_layers
         if node.has_num_directions:
             # insert extra dimension to output shape for num_directions
-            out_shape = np.insert(out_shape, 1, np.int64(num_directions))
+            out_shape = shape_insert(out_shape, 1, np.int64(num_directions))
         node.out_node(0).shape = out_shape
         # extra outputs for hidden/cell states
-        state_size = np.array([input_shape[1], node.hidden_size], dtype=np.int64)
+        state_size = shape_array([input_shape[1], node.hidden_size])
         if node.has_num_directions:
-            state_size = np.insert(state_size, 0, num_directions*num_layers)
+            state_size = shape_insert(state_size, 0, num_directions * num_layers)
         for i in [1,2]:
             if i not in node.out_nodes():
                 data_node = Op._create_data_node(
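
Note on `np.insert` → `shape_insert`: `np.insert` rebuilds the array from its raw data, so the mask marking a dynamic dimension does not survive, which is presumably why a dedicated helper is used. A small sketch of the pitfall and of a mask-preserving alternative (illustrative only, not the actual `shape_insert` implementation):

```python
import numpy as np

shape = np.ma.masked_array([10, 1, 128], mask=[True, False, False])  # dynamic seq_len

lost = np.insert(shape, 1, 2)    # mask marking the dynamic dimension is dropped
assert not np.ma.is_masked(lost)

# what a mask-preserving insert presumably looks like
kept = np.ma.concatenate((shape[:1], np.ma.masked_array([2]), shape[1:]))
assert np.ma.is_masked(kept) and bool(kept.mask[0])
```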


@@ -3,7 +3,7 @@

 import numpy as np

-from mo.front.common.partial_infer.utils import int64_array
+from mo.front.common.partial_infer.utils import compatible_shapes, shape_array, strict_compare_tensors
 from mo.graph.graph import Node, Graph
 from mo.ops.op import Op
@@ -13,9 +13,9 @@ class Merge(Op):
     def __init__(self, graph: Graph, attrs: dict):
         mandatory_props = {
-            'op': __class__.op,
-            'infer': __class__.merge_infer,
-            'cf_infer': __class__.control_flow_infer,
+            'op': self.op,
+            'infer': self.merge_infer,
+            'cf_infer': self.control_flow_infer,
         }
         super().__init__(graph, mandatory_props, attrs)
@@ -30,21 +30,24 @@ class Merge(Op):
             node['is_not_fully_inferred'] = True
         else:
             node['is_not_fully_inferred'] = False

-        assert np.all(node.shape == inferred_nodes[0].shape for node in inferred_nodes)
+        assert np.all(compatible_shapes(node.shape, inferred_nodes[0].shape) for node in inferred_nodes)

         inferred_and_executable = [n for n in node.in_nodes().values() if n['is_partial_inferred'] and
                                    'executable' in n and n['executable']]
         tensor = inferred_and_executable[0]
-        if all([np.all(tensor.value == n.value) for n in inferred_and_executable]):
-            node.out_node().value = tensor.value.copy() if tensor.has_valid('value') else None
+        if all([tensor.has_valid('value') and n.has_valid('value') and strict_compare_tensors(tensor.value, n.value)
+                for n in inferred_and_executable]):
+            node.out_node().value = tensor.value.copy()
         else:
             node.out_node().value = None
-        node.out_node().shape = int64_array(tensor.shape)
+
+        # do not use set_shape(tensor.shape) here because input port shape may be different from the calculated output
+        # shape and `set_shape` will raise an error that shape has changed
+        node.out_node(0).shape = shape_array(tensor.shape)

     @staticmethod
     def control_flow_infer(node: Node, is_executable: bool, mark_executability: callable):
         graph = node.graph

         in_data_nodes = node.in_nodes(control_flow=True)
         out_data_nodes = node.out_nodes(control_flow=True)
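
Note on `strict_compare_tensors`: with masked (dynamic) elements, `np.all(a == b)` skips the masked positions and can report two partially-unknown tensors as equal, so Merge now only copies a value when both candidates have valid values and compare strictly. A hedged sketch of the pitfall and of what a strict comparison is assumed to require (the sketch function is illustrative, not the MO helper):

```python
import numpy as np

a = np.ma.masked_array([1, 2], mask=[False, True])
b = np.ma.masked_array([1, 99], mask=[False, True])

# np.all() ignores masked positions, so partially-unknown tensors look "equal"
assert np.all(a == b)

def strict_equal_sketch(x, y) -> bool:
    # require identical masks (same dynamic positions) and identical defined values
    return np.array_equal(x.mask, y.mask) and np.array_equal(x.filled(0), y.filled(0))

assert strict_equal_sketch(a, b)
assert not strict_equal_sketch(a, np.ma.masked_array([1, 2], mask=[False, False]))
```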


@@ -1,11 +1,9 @@
 # Copyright (C) 2018-2021 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0

-from mo.front.caffe.extractors.utils import get_canonical_axis_index
-from mo.front.common.layout import get_features_dim
 from mo.front.common.partial_infer.elemental import copy_shape_infer
 from mo.front.extractor import bool_to_str
-from mo.graph.graph import Graph
+from mo.graph.graph import Graph, Node
 from mo.graph.perm_inputs import PermuteInputs
 from mo.ops.op import Op
 from mo.utils.error import Error
@@ -44,7 +42,7 @@ class MVN(Op):
             raise Error('Unsupported MVN opset version "{}"'.format(version))

     @staticmethod
-    def infer(node: None):
+    def infer(node: Node):
         name = node.soft_get('name', node.id)

         assert node.eps is not None, 'MVN required attribute `eps` unspecified for node {}'.format(name)
