Group convolution backprop data (#4113)
* GroupConvolutionBackpropData: Added backend unit tests
* GroupConvolutionBackpropData: Refactor SLT and added tests for 1D
* GroupConvolutionBackpropData: Added Serialization tests
* GroupConvolutionBackpropData: Added GroupConvolutionBackpropData reference implementation
* GroupConvolutionBackpropData specification refactoring.
* GroupConvolutionBackpropData: Added validation node checks for the op
* GroupConvolutionBackpropData: Copyright year fixed
* GroupConvolutionBackpropData: Enhanced output shape inference with dynamic shapes
* GroupConvolutionBackpropData: Remove redefinition of helper variables
* Spec refactoring: add ticks to types and layouts.
* Minor refactoring.
* GroupConvolutionBackpropData: Moved backend tests from GroupConvolution to corresponding file
* GroupConvolutionBackpropData: Improved output shape inference for fully dynamic inputs
* GroupConvolutionBackpropData: Clean up type_prop tests
* Fix banner in GroupConvolution shared test class.

Co-authored-by: ggalieroc <gabriele.galiero.casay@intel.com>
@@ -4,47 +4,17 @@
**Category**: Convolution

**Short description**: Computes the gradients of a GroupConvolution operation with respect to the input. Also known as Deconvolution or Transposed Convolution.

**Short description**: Computes 1D, 2D or 3D *GroupConvolutionBackpropData* of input and kernel tensors.

**Detailed description**:

**Detailed description**: Splits the input and filters into multiple groups, computes *ConvolutionBackpropData* on them and concatenates the results, mirroring the relationship between GroupConvolution and Convolution. A sketch of this decomposition follows.
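The interpreter workaround further down in this PR implements exactly this decomposition. The following is a minimal sketch of the same idea; `data`, `filters`, `strides`, `pads_begin`, `pads_end` and `dilations` are assumed to be nGraph values already in scope.

```cpp
// Sketch: split filters on the GROUPS axis and data on the channel axis,
// run one ConvolutionBackpropData per group, then concatenate on channels.
size_t num_groups = filters.get_shape()[0]; // filters: [GROUPS, C_IN, C_OUT, ...]
auto filter_axis = std::make_shared<op::Constant>(
    element::Type_t::i64, Shape{}, std::vector<uint64_t>{0});
auto data_axis = std::make_shared<op::Constant>(
    element::Type_t::i64, Shape{}, std::vector<uint64_t>{1});
auto sliced_filter = std::make_shared<op::v1::Split>(filters, filter_axis, num_groups);
auto sliced_data = std::make_shared<op::v1::Split>(data, data_axis, num_groups);

NodeVector convs;
for (size_t i = 0; i < num_groups; ++i)
{
    // Drop the leading GROUPS dimension of the per-group filter slice.
    auto squeezed = std::make_shared<op::v0::Squeeze>(sliced_filter->output(i), filter_axis);
    convs.push_back(std::make_shared<op::v1::ConvolutionBackpropData>(
        sliced_data->output(i), squeezed, strides, pads_begin, pads_end, dilations));
}
auto result = std::make_shared<op::Concat>(convs, 1); // concat along channel axis
```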
GroupConvolutionBackpropData is similar to ConvolutionBackpropData but also specifies the group processing, in a way similar to how GroupConvolution extends the behavior of a regular Convolution operation.

GroupConvolutionBackpropData takes an input tensor, a weights tensor and an output shape, and computes an output tensor of the given shape. The shape of the output can be specified as an explicit 1D integer input tensor or determined implicitly from other attributes. If the output shape is specified as an explicit input, the shape of the output exactly matches the specified size and the required amount of padding is computed.

GroupConvolutionBackpropData accepts the same set of attributes as a regular GroupConvolution operation, but they are interpreted in a "backward way": they are applied to the output of GroupConvolutionBackpropData rather than to the input. Refer to a regular GroupConvolution operation for a detailed description of each attribute.

The output shape, when specified as the `output_shape` input, specifies only spatial dimensions. No batch or channel dimension should be passed along with H, W or other spatial dimensions. If `output_shape` is omitted, then `pads_begin`, `pads_end` or `auto_pad` are used to determine the output spatial shape `[Y_1, Y_2, ..., Y_D]` from the input spatial shape `[X_1, X_2, ..., X_D]` in the following way:
```
if auto_pad != None:
    pads_begin[i] = 0
    pads_end[i] = 0

Y_i = stride[i] * (X_i - 1) + ((K_i - 1) * dilations[i] + 1) - pads_begin[i] - pads_end[i] + output_padding[i]
```
where `K_i` is the filter kernel dimension along spatial axis `i`.
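As a quick sanity check (a standalone sketch, not part of the specification), plugging the values of the 1D example at the end of this document into the formula reproduces its output dimension:

```cpp
#include <cstdio>

int main()
{
    // From the 1D example below: X = 224, K = 3, stride = 2, dilation = 1,
    // pads_begin = pads_end = 1, output_padding = 0.
    const int X = 224, K = 3, stride = 2, dilation = 1;
    const int pads_begin = 1, pads_end = 1, output_padding = 0;

    const int Y = stride * (X - 1) + ((K - 1) * dilation + 1)
                  - pads_begin - pads_end + output_padding;
    std::printf("Y = %d\n", Y); // prints 447, matching <dim>447</dim> in the example
    return 0;
}
```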
If `output_shape` is specified, `pads_begin` and `pads_end` are ignored, and `auto_pad` defines how to distribute the padding amount around the tensor. In this case pads are determined based on the following formulas to correctly align input and output tensors (similar to the ONNX definition at https://github.com/onnx/onnx/blob/master/docs/Operators.md#convtranspose):

```
total_padding[i] = stride[i] * (X_i - 1) + ((K_i - 1) * dilations[i] + 1) - output_shape[i] + output_padding[i]

if auto_pad != SAME_UPPER:
    pads_begin[i] = total_padding[i] // 2
    pads_end[i] = total_padding[i] - pads_begin[i]
else:
    pads_end[i] = total_padding[i] // 2
    pads_begin[i] = total_padding[i] - pads_end[i]
```
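A small sketch of this padding distribution for a single spatial axis (a hypothetical helper written for illustration, not an nGraph API):

```cpp
#include <utility>

// Returns {pads_begin, pads_end} for one spatial axis, following the
// formulas above. `same_upper` selects the SAME_UPPER branch.
std::pair<int, int> distribute_padding(int X, int K, int stride, int dilation,
                                       int output_dim, int output_padding,
                                       bool same_upper)
{
    const int total = stride * (X - 1) + ((K - 1) * dilation + 1)
                      - output_dim + output_padding;
    if (!same_upper)
    {
        const int begin = total / 2; // odd remainder goes to pads_end
        return {begin, total - begin};
    }
    const int end = total / 2; // odd remainder goes to pads_begin
    return {total - end, end};
}
```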
**Attributes**

**Attributes**: The operation has the same attributes as a *ConvolutionBackpropData*. Number of groups is derived from the kernel shape.

* *strides*

  * **Description**: *strides* has the same definition as *strides* for a regular Convolution but applied in the backward way, for the output tensor.
  * **Range of values**: positive integers
  * **Type**: int[]
  * **Type**: `int[]`
  * **Default value**: None
  * **Required**: *yes*
@@ -52,7 +22,7 @@ else:
  * **Description**: *pads_begin* has the same definition as *pads_begin* for a regular Convolution but applied in the backward way, for the output tensor. May be omitted, in which case pads are calculated automatically.
  * **Range of values**: non-negative integers
  * **Type**: int[]
  * **Type**: `int[]`
  * **Default value**: None
  * **Required**: *yes*
  * **Note**: the attribute is ignored when *auto_pad* attribute is specified.
@@ -61,7 +31,7 @@ else:
  * **Description**: *pads_end* has the same definition as *pads_end* for a regular Convolution but applied in the backward way, for the output tensor. May be omitted, in which case pads are calculated automatically.
  * **Range of values**: non-negative integers
  * **Type**: int[]
  * **Type**: `int[]`
  * **Default value**: None
  * **Required**: *yes*
  * **Note**: the attribute is ignored when *auto_pad* attribute is specified.
@@ -70,43 +40,82 @@ else:
  * **Description**: *dilations* has the same definition as *dilations* for a regular Convolution but applied in the backward way, for the output tensor.
  * **Range of values**: positive integers
  * **Type**: int[]
  * **Type**: `int[]`
  * **Default value**: None
  * **Required**: *yes*

* *auto_pad*

  * **Description**: *auto_pad* has the same definition as *auto_pad* for a regular Convolution but applied in the backward way, for the output tensor.
    * *explicit*: use explicit padding values from `pads_begin` and `pads_end`.
    * *same_upper (same_lower)* the input is padded to match the output size. In case of odd padding value an extra padding is added at the end (at the beginning).
    * *explicit* - use explicit padding values from *pads_begin* and *pads_end*.
    * *same_upper* - the input is padded to match the output size. In case of odd padding value an extra padding is added at the end.
    * *same_lower* - the input is padded to match the output size. In case of odd padding value an extra padding is added at the beginning.
    * *valid* - do not use padding.
  * **Type**: string
  * **Default value**: None
  * **Type**: `string`
  * **Default value**: explicit
  * **Required**: *no*
  * **Note**: *pads_begin* and *pads_end* attributes are ignored when *auto_pad* is specified.

* *output_padding*

  * **Description**: *output_padding* adds an additional amount of padding per each spatial axis in the `output` tensor. It unlocks more elements in the output, allowing them to be computed. Elements are added at the higher coordinate indices for the spatial dimensions. The number of elements in the *output_padding* list matches the number of spatial dimensions in the `data` and `output` tensors.
  * **Description**: *output_padding* adds an additional amount of padding per each spatial axis in the output tensor. It unlocks more elements in the output, allowing them to be computed. Elements are added at the higher coordinate indices for the spatial dimensions. The number of elements in the *output_padding* list matches the number of spatial dimensions in the input and output tensors.
  * **Range of values**: non-negative integer values
  * **Type**: int[]
  * **Type**: `int[]`
  * **Default value**: all zeros
  * **Required**: *no*
**Inputs**:

* **1**: `data` -- input tensor of rank 3 or greater. Layout is `[N, C_INPUT * GROUPS, X1, ..., XD]`, where `GROUPS` is the number of groups that is specified as a dedicated dimension in the `filter` input. *Required*.

* **1**: Input tensor of type `T1` and rank 3, 4 or 5. Layout is `NCZYX` (number of batches, number of channels, spatial axes Z, Y, X). Required.

* **2**: `filter` -- convolution kernel tensor. Weights have shape `[GROUPS, C_INPUT, C_OUTPUT, K_D, ..., K_1]`. `C_INPUT` is the number of channels in the input `data` tensor shape, and `C_OUTPUT` is the number of channels in the `output` tensor. `GROUPS` is the number of groups in the input/output channel dimension. Spatial size of the kernel `[K_D, ..., K_1]` is derived from the shape of this input and not specified by any attribute. *Required*.

* **3**: `output_shape` is a 1D integer tensor that specifies the spatial shape of the output. *Optional*. If specified, the *padding amount* is deduced from the relation of input and output spatial shapes according to the formulas in the description. If not specified, the *output shape* is calculated based on `pads_begin` and `pads_end` or completely according to `auto_pad`.

* **2**: Kernel tensor of type `T1` and rank 4, 5 or 6. Layout is `GIOZYX` (number of groups, number of input channels, number of output channels, spatial axes Z, Y, X). Required.

* **3**: Output shape tensor of type `T2` and rank 1. It specifies the spatial shape of the output. Optional.

* **Note**: The number of groups is derived from the shape of the kernel and not specified by any attribute (see the sketch after these notes).
* **Note**: The type of the convolution (1D, 2D or 3D) is derived from the rank of the input tensors and not specified by any attribute:
  * 1D convolution (input tensors rank 3) means that there is only one spatial axis X
  * 2D convolution (input tensors rank 4) means that there are two spatial axes Y, X
  * 3D convolution (input tensors rank 5) means that there are three spatial axes Z, Y, X
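A minimal sketch using the nGraph constructor exercised by the tests in this PR (the usual `std`/`ngraph` namespaces are assumed to be in scope), reproducing the 1D example below with GROUPS = 4, C_IN = 5, C_OUT = 2:

```cpp
// data has 4 * 5 = 20 channels; the output gets 4 * 2 = 8 channels.
auto data = make_shared<op::Parameter>(element::f32, Shape{1, 20, 224});
auto filters = make_shared<op::Parameter>(element::f32, Shape{4, 5, 2, 3});
auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
    data, filters, Strides{2}, CoordinateDiff{1}, CoordinateDiff{1}, Strides{1});
// gcbd->get_output_shape(0) == Shape{1, 8, 447}
```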
**Outputs**:

* **1**: `output` -- output tensor of the same rank as the input `data` tensor and shape `[N, GROUPS * C_OUTPUT, Y1, ..., YD]`, where `GROUPS` is the number of groups that is specified as a dedicated dimension in the `filter` input.

* **1**: Output tensor of type `T1` and rank 3, 4 or 5 (the same as input *1*). Layout is `NOZYX` (number of batches, number of kernel output channels, spatial axes Z, Y, X).

**Types**:

* *T1*: any floating point type.
* *T2*: any integer type.

**Example**

1D GroupConvolutionBackpropData
```xml
<layer id="5" name="upsampling_node" type="GroupConvolutionBackpropData">
    <data dilations="1" pads_begin="1" pads_end="1" strides="2"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>20</dim>
            <dim>224</dim>
        </port>
        <port id="1">
            <dim>4</dim>
            <dim>5</dim>
            <dim>2</dim>
            <dim>3</dim>
        </port>
    </input>
    <output>
        <port id="0" precision="FP32">
            <dim>1</dim>
            <dim>8</dim>
            <dim>447</dim>
        </port>
    </output>
</layer>
```
2D GroupConvolutionBackpropData
```xml
<layer id="5" name="upsampling_node" type="GroupConvolutionBackpropData">
    <data dilations="1,1" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
@@ -135,3 +144,36 @@ else:
    </output>
</layer>
```
3D GroupConvolutionBackpropData
```xml
<layer id="5" name="upsampling_node" type="GroupConvolutionBackpropData">
    <data dilations="1,1,1" pads_begin="1,1,1" pads_end="1,1,1" strides="2,2,2"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>20</dim>
            <dim>224</dim>
            <dim>224</dim>
            <dim>224</dim>
        </port>
        <port id="1">
            <dim>4</dim>
            <dim>5</dim>
            <dim>2</dim>
            <dim>3</dim>
            <dim>3</dim>
            <dim>3</dim>
        </port>
    </input>
    <output>
        <port id="0" precision="FP32">
            <dim>1</dim>
            <dim>8</dim>
            <dim>447</dim>
            <dim>447</dim>
            <dim>447</dim>
        </port>
    </output>
</layer>
```
@@ -0,0 +1,56 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>

#include "shared_test_classes/single_layer/group_convolution_backprop_data.hpp"

using namespace LayerTestsDefinitions;

namespace {

TEST_P(GroupConvBackpropDataLayerTest, Serialize) {
    Serialize();
}

const std::vector<InferenceEngine::Precision> precisions = {
    InferenceEngine::Precision::FP64, InferenceEngine::Precision::FP32,
    InferenceEngine::Precision::FP16, InferenceEngine::Precision::BF16,
    InferenceEngine::Precision::I8,   InferenceEngine::Precision::I16,
    InferenceEngine::Precision::I32,  InferenceEngine::Precision::I64,
    InferenceEngine::Precision::U8,   InferenceEngine::Precision::U16,
    InferenceEngine::Precision::U32,  InferenceEngine::Precision::U64,
};
const std::vector<std::vector<size_t>> kernels = {{3, 3}};
const std::vector<std::vector<size_t>> strides = {{1, 1}};
const std::vector<std::vector<ptrdiff_t>> padBegins = {{0, 0}};
const std::vector<std::vector<ptrdiff_t>> padEnds = {{0, 0}};
const std::vector<std::vector<size_t>> dilations = {{1, 1}};
const std::vector<size_t> numOutChannels = {8, 16};
const std::vector<size_t> numGroups = {2, 8};
const std::vector<ngraph::op::PadType> pad_types = {
    ngraph::op::PadType::EXPLICIT, ngraph::op::PadType::VALID,
    ngraph::op::PadType::SAME_LOWER, ngraph::op::PadType::SAME_UPPER};
const auto inputShapes = std::vector<size_t>({1, 16, 30, 30});

const auto groupConvBackpropData2DParams = ::testing::Combine(
    ::testing::ValuesIn(kernels), ::testing::ValuesIn(strides),
    ::testing::ValuesIn(padBegins), ::testing::ValuesIn(padEnds),
    ::testing::ValuesIn(dilations), ::testing::ValuesIn(numOutChannels),
    ::testing::ValuesIn(numGroups), ::testing::ValuesIn(pad_types));

INSTANTIATE_TEST_CASE_P(
    smoke_GroupConvBackpropData2D_Serialization, GroupConvBackpropDataLayerTest,
    ::testing::Combine(
        groupConvBackpropData2DParams,
        ::testing::ValuesIn(precisions),
        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
        ::testing::Values(InferenceEngine::Layout::ANY),
        ::testing::Values(InferenceEngine::Layout::ANY),
        ::testing::Values(inputShapes),
        ::testing::Values(CommonTestUtils::DEVICE_CPU)),
    GroupConvBackpropDataLayerTest::getTestCaseName);

} // namespace
@@ -4,20 +4,76 @@
#include <vector>

#include "single_layer_tests/group_convolution_backprop_data.hpp"
#include "common_test_utils/test_constants.hpp"
#include "single_layer_tests/group_convolution_backprop_data.hpp"

using namespace LayerTestsDefinitions;

namespace {

const std::vector<InferenceEngine::Precision> netPrecisions = {
        InferenceEngine::Precision::FP32
        InferenceEngine::Precision::FP32,
        InferenceEngine::Precision::FP16
};

const std::vector<size_t> numOutChannels = {16, 32};
const std::vector<size_t> numGroups = {2, 8, 16};

/* ============= 1D GroupConvolution ============= */
const std::vector<std::vector<size_t>> inputShapes1D = {{1, 16, 32}};

const std::vector<std::vector<size_t>> kernels1D = {{1}, {3}};
const std::vector<std::vector<size_t>> strides1D = {{1}};
const std::vector<std::vector<ptrdiff_t>> padBegins1D = {{0}};
const std::vector<std::vector<ptrdiff_t>> padEnds1D = {{0}};
const std::vector<std::vector<size_t>> dilations1D = {{1}};

const auto groupConvBackpropData1DParams_ExplicitPadding = ::testing::Combine(
        ::testing::ValuesIn(kernels1D),
        ::testing::ValuesIn(strides1D),
        ::testing::ValuesIn(padBegins1D),
        ::testing::ValuesIn(padEnds1D),
        ::testing::ValuesIn(dilations1D),
        ::testing::ValuesIn(numOutChannels),
        ::testing::ValuesIn(numGroups),
        ::testing::Values(ngraph::op::PadType::EXPLICIT)
);

const auto groupConvBackpropData1DParams_AutoPadValid = ::testing::Combine(
        ::testing::ValuesIn(kernels1D),
        ::testing::ValuesIn(strides1D),
        ::testing::ValuesIn(padBegins1D),
        ::testing::ValuesIn(padEnds1D),
        ::testing::ValuesIn(dilations1D),
        ::testing::ValuesIn(numOutChannels),
        ::testing::ValuesIn(numGroups),
        ::testing::Values(ngraph::op::PadType::VALID)
);

INSTANTIATE_TEST_CASE_P(smoke_GroupConvBackpropData1D_ExplicitPadding, GroupConvBackpropDataLayerTest,
        ::testing::Combine(
                groupConvBackpropData1DParams_ExplicitPadding,
                ::testing::ValuesIn(netPrecisions),
                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
                ::testing::Values(InferenceEngine::Layout::ANY),
                ::testing::Values(InferenceEngine::Layout::ANY),
                ::testing::ValuesIn(inputShapes1D),
                ::testing::Values(CommonTestUtils::DEVICE_CPU)),
        GroupConvBackpropDataLayerTest::getTestCaseName);

INSTANTIATE_TEST_CASE_P(smoke_GroupConvBackpropData1D_AutoPadValid, GroupConvBackpropDataLayerTest,
        ::testing::Combine(
                groupConvBackpropData1DParams_AutoPadValid,
                ::testing::ValuesIn(netPrecisions),
                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
                ::testing::Values(InferenceEngine::Layout::ANY),
                ::testing::Values(InferenceEngine::Layout::ANY),
                ::testing::ValuesIn(inputShapes1D),
                ::testing::Values(CommonTestUtils::DEVICE_CPU)),
        GroupConvBackpropDataLayerTest::getTestCaseName);

/* ============= 2D GroupConvolution ============= */
const std::vector<std::vector<size_t>> inputShapes2D = {{1, 16, 10, 10},
                                                        {1, 32, 10, 10}};
@@ -40,8 +96,8 @@ const auto groupConvBackpropData2DParams_ExplicitPadding = ::testing::Combine(
const auto groupConvBackpropData2DParams_AutoPadValid = ::testing::Combine(
        ::testing::ValuesIn(kernels2D),
        ::testing::ValuesIn(strides2D),
        ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
        ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
        ::testing::ValuesIn(padBegins2D),
        ::testing::ValuesIn(padEnds2D),
        ::testing::ValuesIn(dilations2D),
        ::testing::ValuesIn(numOutChannels),
        ::testing::ValuesIn(numGroups),
@@ -94,8 +150,8 @@ const auto groupConvBackpropData3DParams_ExplicitPadding = ::testing::Combine(
const auto groupConvBackpropData3DParams_AutoPadValid = ::testing::Combine(
        ::testing::ValuesIn(kernels3D),
        ::testing::ValuesIn(strides3D),
        ::testing::Values(std::vector<ptrdiff_t>({0, 0, 0})),
        ::testing::Values(std::vector<ptrdiff_t>({0, 0, 0})),
        ::testing::ValuesIn(padBegins3D),
        ::testing::ValuesIn(padEnds3D),
        ::testing::ValuesIn(dilations3D),
        ::testing::ValuesIn(numOutChannels),
        ::testing::ValuesIn(numGroups),
@@ -1,4 +1,4 @@
// Copyright (C) 2020 Intel Corporation
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
@@ -15,24 +15,24 @@
namespace LayerTestsDefinitions {

typedef std::tuple<
        InferenceEngine::SizeVector,
        InferenceEngine::SizeVector,
        std::vector<ptrdiff_t>,
        std::vector<ptrdiff_t>,
        InferenceEngine::SizeVector,
        size_t,
        size_t,
        ngraph::op::PadType> groupConvBackpropDataSpecificParams;
typedef std::tuple<
using groupConvBackpropDataSpecificParams = std::tuple<
        InferenceEngine::SizeVector,    // kernels
        InferenceEngine::SizeVector,    // strides
        std::vector<ptrdiff_t>,         // pad begins
        std::vector<ptrdiff_t>,         // pad ends
        InferenceEngine::SizeVector,    // dilations
        size_t,                         // num output channels
        size_t,                         // num groups
        ngraph::op::PadType>;           // padding type
using groupConvBackpropDataLayerTestParamsSet = std::tuple<
        groupConvBackpropDataSpecificParams,
        InferenceEngine::Precision,
        InferenceEngine::Precision,     // Input precision
        InferenceEngine::Precision,     // Output precision
        InferenceEngine::Layout,        // Input layout
        InferenceEngine::Layout,        // Output layout
        InferenceEngine::SizeVector,
        LayerTestsUtils::TargetDevice> groupConvBackpropDataLayerTestParamsSet;
        InferenceEngine::Precision,     // Network precision
        InferenceEngine::Precision,     // Input precision
        InferenceEngine::Precision,     // Output precision
        InferenceEngine::Layout,        // Input layout
        InferenceEngine::Layout,        // Output layout
        InferenceEngine::SizeVector,    // Input shape
        LayerTestsUtils::TargetDevice>; // Device name

class GroupConvBackpropDataLayerTest : public testing::WithParamInterface<groupConvBackpropDataLayerTestParamsSet>,
                                       virtual public LayerTestsUtils::LayerTestsCommon {
@@ -1,4 +1,4 @@
// Copyright (C) 2020 Intel Corporation
// Copyright (C) 2020-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
@@ -28,7 +28,7 @@ std::shared_ptr<Node> makeGroupConvolutionBackpropData(const ngraph::Output<Node
    auto shape = in.get_shape();
    std::vector<size_t> filterWeightsShape = {shape[1], numOutChannels};
    if (filterWeightsShape[0] % numGroups || filterWeightsShape[1] % numGroups)
        throw std::runtime_error("incorrected shape for GroupConvolutionBackpropData");
        throw std::runtime_error("incorrect shape for GroupConvolutionBackpropData");
    filterWeightsShape[0] /= numGroups;
    filterWeightsShape[1] /= numGroups;
    filterWeightsShape.insert(filterWeightsShape.begin(), numGroups);
@@ -0,0 +1,99 @@
//*****************************************************************************
// Copyright 2017-2021 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#pragma once

#include "ngraph/runtime/reference/convolution_backprop_data.hpp"
#include "ngraph/util.hpp"

namespace ngraph
{
    namespace runtime
    {
        namespace reference
        {
            template <typename INPUT,
                      typename FILTER,
                      typename OUTPUT,
                      typename ACCU = typename widen<OUTPUT>::type>
            void group_convolution_backprop_data(const INPUT* in,
                                                 const FILTER* f,
                                                 OUTPUT* out,
                                                 const Shape& in_shape,
                                                 const Shape& filter_shape,
                                                 const Shape& out_shape,
                                                 const Strides& strides,
                                                 const Strides& dilation,
                                                 const CoordinateDiff& pads_begin,
                                                 const CoordinateDiff& pads_end)
            {
                // filter_group_axis == 0 and batch/channel axes == 0/1 are the
                // axis-index constants shared with the group_convolution reference.
                const size_t group_count = filter_shape[filter_group_axis];

                // View of one batch of one group: batch dim becomes 1 and the
                // channel dim shrinks by the group count.
                const INPUT* group_batch = in;
                const Shape group_batch_shape = [&]() {
                    Shape new_shape{in_shape};
                    new_shape[in_batch_axis] = 1;
                    new_shape[in_channel_axis] /= group_count;
                    return new_shape;
                }();
                const size_t group_batch_size = shape_size(group_batch_shape);

                // Per-group filter: drop the leading GROUPS dimension.
                const FILTER* group_filter = f;
                const Shape group_filter_shape = [&]() {
                    Shape new_shape{++filter_shape.begin(), filter_shape.end()};
                    return new_shape;
                }();
                const size_t group_filter_size = shape_size(group_filter_shape);

                OUTPUT* group_out = out;
                const Shape group_out_shape = [&]() {
                    Shape new_shape{out_shape};
                    new_shape[out_batch_axis] = 1;
                    new_shape[out_channel_axis] /= group_count;
                    return new_shape;
                }();
                const size_t group_out_size = shape_size(group_out_shape);

                Strides in_dilation(in_shape.size(), 1);
                for (size_t batch_idx = 0; batch_idx < in_shape[in_batch_axis]; ++batch_idx)
                {
                    group_filter = f;
                    for (size_t group_idx = 0; group_idx < group_count; ++group_idx)
                    {
                        runtime::reference::convolution_backprop_in<INPUT, FILTER, OUTPUT, ACCU>(
                            group_batch,
                            group_filter,
                            group_out,
                            group_batch_shape,
                            group_filter_shape,
                            group_out_shape,
                            in_dilation,
                            dilation,
                            pads_begin,
                            pads_end,
                            strides);
                        // Advance the raw pointers to the next group's slice.
                        group_batch += group_batch_size;
                        group_filter += group_filter_size;
                        group_out += group_out_size;
                    }
                }
            }
        } // namespace reference
    } // namespace runtime
} // namespace ngraph
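For reference, a minimal usage sketch of this function, reusing the data of the first 1D backend test added later in this PR (one group, one batch, one channel):

```cpp
#include <vector>
#include "ngraph/runtime/reference/group_convolution_backprop_data.hpp"

using namespace ngraph;

int main()
{
    const std::vector<float> in{1.f, 3.f, 3.f, 0.f};
    const std::vector<float> filter{2.f, 0.f, 1.f};
    std::vector<float> out(6, 0.f);

    runtime::reference::group_convolution_backprop_data<float, float, float>(
        in.data(), filter.data(), out.data(),
        Shape{1, 1, 4},      // data:   [N, C_IN * GROUPS, X]
        Shape{1, 1, 1, 3},   // filter: [GROUPS, C_IN, C_OUT, K]
        Shape{1, 1, 6},      // output: [N, GROUPS * C_OUT, Y]
        Strides{1},          // strides
        Strides{1},          // dilations
        CoordinateDiff{0},   // pads_begin
        CoordinateDiff{0});  // pads_end
    // out == {2, 6, 7, 3, 3, 0}, as the backend test asserts
    return 0;
}
```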
@@ -289,9 +289,27 @@ bool op::v1::GroupConvolutionBackpropData::is_dynamic() const
    return is_dynamic;
}

static Dimension infer_group_from_input_shapes(const PartialShape& data_pshape,
                                               const PartialShape& filters_pshape)
{
    Dimension group_dim = Dimension();
    if (data_pshape.rank().is_static() && data_pshape[1].is_static() &&
        filters_pshape.rank().is_static() && filters_pshape[1].is_static())
    {
        auto n_data_channels = data_pshape[1].get_length();
        auto input_channels = filters_pshape[1].get_length();

        NGRAPH_CHECK((n_data_channels % input_channels) == 0);
        auto groups = n_data_channels / input_channels;
        group_dim = Dimension(groups);
    }
    return group_dim;
}
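An illustrative call (the function is file-local, so this is only a sketch of its behavior), using the shapes from the spec example:

```cpp
// data: [N, C_IN * GROUPS, X] = [1, 20, 224]; filters: [GROUPS, C_IN, C_OUT, K] = [4, 5, 2, 3].
Dimension groups = infer_group_from_input_shapes(PartialShape{1, 20, 224},
                                                 PartialShape{4, 5, 2, 3});
// n_data_channels = 20, input_channels = 5, 20 % 5 == 0 -> groups == Dimension(4).
// If either channel dimension is dynamic, an empty (fully dynamic) Dimension is returned.
```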
const PartialShape op::v1::GroupConvolutionBackpropData::get_convolution_output_shape() const
{
    auto data_pshape = get_input_partial_shape(0);
    auto filter_pshape = get_input_partial_shape(1);

    PartialShape shape;
    if (data_pshape.rank().is_static())
@@ -309,6 +327,14 @@ const PartialShape op::v1::GroupConvolutionBackpropData::get_convolution_output_
    {
        shape = const_op->get_shape_val();
    }
    else if (data_pshape.rank().is_static())
    {
        shape = PartialShape{vector<Dimension>(data_pshape.rank().get_length() - 2)};
    }
    else if (filter_pshape.rank().is_static())
    {
        shape = PartialShape{vector<Dimension>(filter_pshape.rank().get_length() - 3)};
    }
    else
    {
        shape = PartialShape::dynamic();
@@ -373,6 +399,38 @@ void op::v1::GroupConvolutionBackpropData::pre_validate_and_infer_types()
                          filters_et,
                          ").");

    NODE_VALIDATION_CHECK(
        this,
        (data_pshape.rank().compatible(5) && filters_pshape.rank().compatible(6)) ||
            (data_pshape.rank().compatible(4) && filters_pshape.rank().compatible(5)) ||
            (data_pshape.rank().compatible(3) && filters_pshape.rank().compatible(4)),
        "Shapes for data batch and filters do not match. (data batch shape: ",
        data_pshape,
        ", filters shape: ",
        filters_pshape,
        ").");

    if (m_pads_begin.size() == 0)
    {
        m_pads_begin = conv_default_padding(this, data_pshape, filters_pshape);
    }
    if (m_pads_end.size() == 0)
    {
        m_pads_end = conv_default_padding(this, data_pshape, filters_pshape);
    }
    if (m_output_padding.size() == 0)
    {
        m_output_padding = conv_default_padding(this, data_pshape, filters_pshape);
    }
    if (m_strides.size() == 0)
    {
        m_strides = conv_default_strides(this, data_pshape, filters_pshape);
    }
    if (m_dilations.size() == 0)
    {
        m_dilations = conv_default_strides(this, data_pshape, filters_pshape);
    }

    if (data_pshape.rank().is_static() && filters_pshape.rank().is_static())
    {
        if (filters_pshape[0].is_static() && filters_pshape[1].is_static() &&
@@ -391,29 +449,13 @@ void op::v1::GroupConvolutionBackpropData::pre_validate_and_infer_types()
                              "with number of input channels.");
        }

        if (m_pads_begin.size() == 0)
        {
            m_pads_begin = conv_default_padding(this, data_pshape, filters_pshape);
        }
        if (m_pads_end.size() == 0)
        {
            m_pads_end = conv_default_padding(this, data_pshape, filters_pshape);
        }
        if (m_output_padding.size() == 0)
        {
            m_output_padding = conv_default_padding(this, data_pshape, filters_pshape);
        }
        if (m_strides.size() == 0)
        {
            m_strides = conv_default_strides(this, data_pshape, filters_pshape);
        }
        if (m_dilations.size() == 0)
        {
            m_dilations = conv_default_strides(this, data_pshape, filters_pshape);
        }

        const auto num_spatial_dims = data_pshape.rank().get_length() - 2;

        NODE_VALIDATION_CHECK(this,
                              m_pads_begin.size() == num_spatial_dims &&
                                  m_pads_end.size() == num_spatial_dims,
                              "Pads should be defined for all and only spatial features.");

        NODE_VALIDATION_CHECK(this,
                              m_strides.size() == num_spatial_dims,
                              "Strides should be defined for all and only spatial features.");
@@ -435,40 +477,78 @@ void op::v1::GroupConvolutionBackpropData::pre_validate_and_infer_types()
    // and infer them.
    if (is_output_shape_present)
    {
        const auto& output_shape_pshape = get_input_partial_shape(2);
        const element::Type output_shape_et = get_input_element_type(2);

        NODE_VALIDATION_CHECK(this,
                              output_shape_et.is_integral_number(),
                              "Element type for output shape should be of integer type ",
                              "(output_shape element type: ",
                              output_shape_et,
                              ").");

        NODE_VALIDATION_CHECK(this,
                              output_shape_pshape.rank().compatible(1),
                              "Spatial shape of output input must be of rank 1 ",
                              "(output_shape shape: ",
                              output_shape_pshape,
                              ").");

        output_pshape = get_convolution_output_shape();

        if (output_pshape.is_static() && data_pshape.is_static() && filters_pshape.is_static())
        if (output_pshape.rank().is_static())
        {
            Shape output_shape = output_pshape.to_shape();
            const Shape& data_shape = data_pshape.to_shape();
            const Shape& filters_shape = filters_pshape.to_shape();
            const size_t num_spatial_dims = data_shape.size() - 2;
            NODE_VALIDATION_CHECK(this,
                                  output_shape.size() == num_spatial_dims,
                                  "Output shape should be specified only and for "
                                  "all spatial dimensions.");

            // If auto_pad has one of following mode we infer paddings. Otherwise in
            // EXPLICIT auto_pad mode we use what is provided.
            if (m_auto_pad == PadType::SAME_UPPER || m_auto_pad == PadType::SAME_LOWER)
            vector<Dimension> tmp_output_shape{output_pshape};
            if (data_pshape.rank().is_static() && filters_pshape.rank().is_static())
            {
                opset1::infer_conv_backprop_auto_padding(
                    Shape{std::next(data_shape.begin(), 2), std::end(data_shape)},
                    Shape{std::next(filters_shape.begin(), 3), std::end(filters_shape)},
                    output_shape,
                    m_strides,
                    m_dilations,
                    m_auto_pad,
                    m_output_padding,
                    m_pads_begin,
                    m_pads_end);
            }
                const size_t num_spatial_dims = data_pshape.rank().get_length() - 2;
                NODE_VALIDATION_CHECK(this,
                                      output_pshape.rank().get_length() == num_spatial_dims,
                                      "Output shape should be specified only and for "
                                      "all spatial dimensions.");

            // GROUP * C_OUTPUT
            output_shape.insert(output_shape.begin(), filters_shape.at(0) * filters_shape.at(2));
            // N
            output_shape.insert(output_shape.begin(), data_shape.at(0));
            output_pshape = output_shape;
                // If auto_pad has one of following mode we infer paddings. Otherwise in
                // EXPLICIT auto_pad mode we use what is provided.
                if ((output_pshape.is_static() && data_pshape.is_static() &&
                     filters_pshape.is_static()) &&
                    (m_auto_pad == PadType::SAME_UPPER || m_auto_pad == PadType::SAME_LOWER))
                {
                    const Shape& data_shape = data_pshape.to_shape();
                    const Shape& filters_shape = filters_pshape.to_shape();

                    opset1::infer_conv_backprop_auto_padding(
                        Shape{std::next(data_shape.begin(), 2), std::end(data_shape)},
                        Shape{std::next(filters_shape.begin(), 3), std::end(filters_shape)},
                        output_pshape.to_shape(),
                        m_strides,
                        m_dilations,
                        m_auto_pad,
                        m_output_padding,
                        m_pads_begin,
                        m_pads_end);
                }

                // GROUP * C_OUTPUT
                auto group_dim = filters_pshape[0];
                if (!group_dim.is_static())
                {
                    group_dim = infer_group_from_input_shapes(data_pshape, filters_pshape);
                }
                tmp_output_shape.insert(tmp_output_shape.begin(), group_dim * filters_pshape[2]);
                // N
                tmp_output_shape.insert(tmp_output_shape.begin(), data_pshape[0]);
            }
            else
            {
                auto n_out_channels = filters_pshape.rank().is_static()
                                          ? filters_pshape[0] * filters_pshape[2]
                                          : Dimension::dynamic();
                auto batches =
                    data_pshape.rank().is_static() ? data_pshape[0] : Dimension::dynamic();
                tmp_output_shape.insert(tmp_output_shape.begin(), n_out_channels);
                tmp_output_shape.insert(tmp_output_shape.begin(), batches);
            }
            output_pshape = tmp_output_shape;
        }
        set_input_is_relevant_to_shape(2);
    }
@@ -483,7 +563,7 @@ void op::v1::GroupConvolutionBackpropData::pre_validate_and_infer_types()
        m_pads_end.assign(m_pads_end.size(), 0);
    }

    if (data_pshape.rank().is_static() && filters_pshape.is_static())
    if (data_pshape.rank().is_static() && filters_pshape.rank().is_static())
    {
        vector<Dimension> data_shape{data_pshape}, filters_shape{filters_pshape}, output_shape;

@@ -498,14 +578,32 @@ void op::v1::GroupConvolutionBackpropData::pre_validate_and_infer_types()
            output_shape);

        // GROUP * C_OUTPUT
        output_shape.insert(output_shape.begin(), filters_shape.at(0) * filters_shape.at(2));
        auto group_dim = filters_pshape[0];
        if (!group_dim.is_static())
        {
            group_dim = infer_group_from_input_shapes(data_pshape, filters_pshape);
        }
        output_shape.insert(output_shape.begin(), group_dim * filters_shape.at(2));
        // N
        output_shape.insert(output_shape.begin(), data_shape.at(0));
        output_pshape = PartialShape{output_shape};
    }
    else
    {
        output_pshape = PartialShape::dynamic(data_pshape.rank());
        if (data_pshape.rank().is_static())
        {
            output_pshape = PartialShape::dynamic(data_pshape.rank());
            output_pshape[0] = data_pshape[0];
        }
        else if (filters_pshape.rank().is_static())
        {
            output_pshape = PartialShape::dynamic(filters_pshape.rank().get_length() - 1);
            output_pshape[1] = filters_pshape[0] * filters_pshape[2];
        }
        else
        {
            output_pshape = PartialShape::dynamic();
        }
    }
}
@@ -289,6 +289,7 @@ set(MULTI_TEST_SRC
    backend/gather_nd.in.cpp
    backend/gelu.in.cpp
    backend/group_convolution.in.cpp
    backend/group_convolution_backprop_data.in.cpp
    backend/hard_sigmoid.in.cpp
    backend/interpolate.in.cpp
    backend/log.in.cpp
@@ -1013,7 +1013,7 @@ TEST(attributes, group_conv_backprop_data_op)
    NodeBuilder::get_ops().register_factory<opset1::GroupConvolutionBackpropData>();
    const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 20, 224, 224});
    const auto filter = make_shared<op::Parameter>(element::f32, Shape{4, 5, 2, 3, 3});
    const auto output_shape = make_shared<op::Parameter>(element::f32, Shape{1, 8, 447, 447});
    const auto output_shape = make_shared<op::Parameter>(element::i32, Shape{1});

    const auto strides = Strides{2, 1};
    const auto pads_begin = CoordinateDiff{3, 4};
@@ -158,139 +158,3 @@ NGRAPH_TEST(${BACKEND_NAME}, group_convolution_1D_2group_2batch_2channel)
                                     strides, padding, dilations);
}
// clang-format on

NGRAPH_TEST(${BACKEND_NAME}, dyn_group_convolution_backprop_data)
{
    Shape shape_filter{6, 1, 3, 3};
    auto filters = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
    Shape shape_delta{2, 6, 3, 3};
    auto deltas = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
    Shape shape_data_batch{2, 3, 5, 5};
    auto data_batch = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
    auto strides = Strides{1, 1};
    auto dilations = Strides{1, 1};
    auto padding_begin = CoordinateDiff{0, 0};
    auto padding_end = CoordinateDiff{0, 0};

    auto conv_bprop_data = make_shared<op::v1::GroupConvolutionBackpropData>(
        data_batch, filters, deltas, strides, padding_begin, padding_end, dilations);

    auto f = make_shared<Function>(conv_bprop_data, ParameterVector{data_batch, filters, deltas});

    auto backend = runtime::Backend::create("${BACKEND_NAME}", true);

    auto handle = backend->compile(f);

    auto result = backend->create_dynamic_tensor(element::f32, PartialShape::dynamic());

    vector<float> filter, delta, data, expected_result;

    for (int i = 0; i < 6 * 1 * 3 * 3; i++)
        filter.emplace_back(i);

    for (int i = 0; i < 2 * 6 * 3 * 3; i++)
        delta.emplace_back(i);

    for (int i = 0; i < 2 * 3 * 5 * 5; i++)
        data.emplace_back(i);

    for (int i = 0; i < 2 * 3 * 5 * 5; i++)
        expected_result.emplace_back(i);

    auto a = backend->create_tensor(element::f32, shape_data_batch);
    copy_data(a, data);
    auto b = backend->create_tensor(element::f32, shape_filter);
    copy_data(b, filter);
    auto c = backend->create_tensor(element::f32, shape_delta);
    copy_data(c, delta);
    handle->call_with_validate({result}, {a, b, c});
    EXPECT_FALSE(test::all_close_f(vector<float>{expected_result}, read_vector<float>(result)));
}

NGRAPH_TEST(${BACKEND_NAME}, v1_group_conv_backprop_data)
{
    const CoordinateDiff output_padding{1, 1};
    const CoordinateDiff pads_begin{1, 1};
    const CoordinateDiff pads_end{1, 1};
    Strides strides{2, 2};
    Strides dilations{1, 1};
    const op::PadType auto_pad{op::PadType::EXPLICIT};

    auto data = make_shared<op::Parameter>(element::f32, Shape{1, 1, 3, 3});
    auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 1, 1, 3, 3});

    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
        data, filters, strides, pads_begin, pads_end, dilations, auto_pad, output_padding);

    auto function = make_shared<Function>(NodeVector{gcbd}, ParameterVector{data, filters});
    auto test_case = test::TestCase<TestEngine>(function);

    // X
    test_case.add_input<float>(vector<float>{0.16857791f,
                                             -0.15161794f,
                                             0.08540368f,
                                             0.1820628f,
                                             -0.21746576f,
                                             0.08245695f,
                                             0.1431433f,
                                             -0.43156421f,
                                             0.30591947f});
    // W
    test_case.add_input<float>({-0.06230065f,
                                0.37932432f,
                                -0.25388849f,
                                0.33878803f,
                                0.43709868f,
                                -0.22477469f,
                                0.04118127f,
                                -0.44696793f,
                                0.06373066f});
    test_case.add_expected_output(
        Shape{1, 1, 6, 6},
        vector<float>{
            0.07368518f,  -0.08925839f, -0.06627201f, 0.06301362f,  0.03732984f,  -0.01919658f,
            -0.00628807f, -0.02817563f, -0.01472169f, 0.04392925f,  -0.00689478f, -0.01549204f,
            0.07957941f,  -0.11459791f, -0.09505399f, 0.07681622f,  0.03604182f,  -0.01853423f,
            -0.0270785f,  -0.00680824f, -0.06650258f, 0.08004665f,  0.07918708f,  -0.0724144f,
            0.06256775f,  -0.17838378f, -0.18863615f, 0.20064656f,  0.133717f,    -0.06876295f,
            -0.06398046f, -0.00864975f, 0.19289537f,  -0.01490572f, -0.13673618f, 0.01949645f});
    test_case.run(DEFAULT_FLOAT_TOLERANCE_BITS + 1);
}

NGRAPH_TEST(${BACKEND_NAME}, v1_group_conv_backprop_data_output_shape)
{
    Strides strides{1, 1};
    Strides dilations{1, 1};

    auto data = make_shared<op::Parameter>(element::f32, Shape{1, 1, 1, 10});
    auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 1, 1, 1, 5});
    auto output_shape = op::Constant::create(element::i64, Shape{2}, {1, 14});

    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
        data, filters, output_shape, strides, dilations, op::PadType::SAME_UPPER);

    auto function = make_shared<Function>(NodeVector{gcbd}, ParameterVector{data, filters});
    auto test_case = test::TestCase<TestEngine>(function);

    // X
    test_case.add_input<float>(
        vector<float>{0.0f, 1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f, 9.0f});
    // W
    test_case.add_input<float>({1.0f, 2.0f, 3.0f, 2.0f, 1.0f});
    test_case.add_expected_output(Shape{1, 1, 1, 14},
                                  vector<float>{0.0f,
                                                1.0f,
                                                4.0f,
                                                10.0f,
                                                18.0f,
                                                27.0f,
                                                36.0f,
                                                45.0f,
                                                54.0f,
                                                63.0f,
                                                62.0f,
                                                50.0f,
                                                26.0f,
                                                9.0f});
    test_case.run();
}
ngraph/test/backend/group_convolution_backprop_data.in.cpp (new file, 265 lines)
@@ -0,0 +1,265 @@
//*****************************************************************************
// Copyright 2017-2021 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#include "gtest/gtest.h"
#include "ngraph/ngraph.hpp"
#include "util/engine/test_engines.hpp"
#include "util/test_case.hpp"
#include "util/test_control.hpp"

using namespace std;
using namespace ngraph;

static string s_manifest = "${MANIFEST}";
using TestEngine = test::ENGINE_CLASS_NAME(${BACKEND_NAME});

static void GroupConvolutionBackPropDataTest(const std::vector<float>& inputs,
                                             const Shape inputs_shape,
                                             const std::vector<float>& filters,
                                             const Shape filter_shape,
                                             const std::vector<float>& outputs,
                                             const Shape outputs_shape,
                                             const Strides& strides,
                                             const CoordinateDiff& padding,
                                             const Strides& dilations)
{
    CoordinateDiff pads_begin{padding};
    CoordinateDiff pads_end{padding};
    const op::PadType auto_pad{op::PadType::EXPLICIT};

    auto inputs_param = make_shared<op::Parameter>(element::f32, inputs_shape);
    auto filter_param = make_shared<op::Parameter>(element::f32, filter_shape);
    auto conv_backprop_data = make_shared<op::v1::GroupConvolutionBackpropData>(
        inputs_param, filter_param, strides, pads_begin, pads_end, dilations, auto_pad);
    auto f = make_shared<Function>(conv_backprop_data, ParameterVector{inputs_param, filter_param});

    auto test_case = test::TestCase<TestEngine>(f);
    test_case.add_input<float>(inputs_shape, inputs);
    test_case.add_input<float>(filter_shape, filters);
    test_case.add_expected_output<float>(outputs_shape, outputs);
    test_case.run();
}

// --------------------- 1D group convolution ------------------------------------------
// clang-format off
NGRAPH_TEST(${BACKEND_NAME}, group_convolution_backprop_data_1D_1group_1batch_1channel)
|
||||
{
|
||||
const Strides strides{1};
|
||||
const CoordinateDiff padding{0};
|
||||
const Strides dilations{1};
|
||||
|
||||
const Shape inputs_shape{1, 1, 4};
|
||||
const std::vector<float> inputs{1.0f, 3.0f, 3.0f, 0.0f};
|
||||
|
||||
const Shape filter_shape{1, 1, 1, 3};
|
||||
const std::vector<float> filters{2.0f, 0.0f, 1.0f};
|
||||
|
||||
const Shape outputs_shape{1, 1, 6};
|
||||
const std::vector<float> outputs{2.0f, 6.0f, 7.0f, 3.0f, 3.0f, 0.0f};
|
||||
|
||||
GroupConvolutionBackPropDataTest(inputs,
|
||||
inputs_shape,
|
||||
filters,
|
||||
filter_shape,
|
||||
outputs,
|
||||
outputs_shape,
|
||||
strides,
|
||||
padding,
|
||||
dilations);
|
||||
}
|
||||
|
||||
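// The expected values above can be reproduced by hand: with stride 1, no padding
// and no dilation, a transposed convolution scatters each input element against
// the whole kernel. A standalone sketch (for illustration, not part of the suite):
//
//     std::vector<float> transposed_conv_1d(const std::vector<float>& in,
//                                           const std::vector<float>& f)
//     {
//         // out[i + j] accumulates in[i] * f[j]; i.e. out[k] = sum_i in[i] * f[k - i].
//         std::vector<float> out(in.size() + f.size() - 1, 0.0f);
//         for (size_t i = 0; i < in.size(); ++i)
//             for (size_t j = 0; j < f.size(); ++j)
//                 out[i + j] += in[i] * f[j];
//         return out;
//     }
//     // transposed_conv_1d({1, 3, 3, 0}, {2, 0, 1}) == {2, 6, 7, 3, 3, 0}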
NGRAPH_TEST(${BACKEND_NAME}, group_convolution_backprop_data_1D_2group_1batch_2channel)
|
||||
{
|
||||
const Strides strides{1};
|
||||
const CoordinateDiff padding{0};
|
||||
const Strides dilations{1};
|
||||
|
||||
const Shape inputs_shape{1, 2, 4};
|
||||
const std::vector<float> inputs{1.0f, 3.0f, 3.0f, 0.0f,
|
||||
1.0f, 2.0f, 1.0f, 3.0f};
|
||||
|
||||
const Shape filter_shape{2, 1, 1, 3};
|
||||
const std::vector<float> filters{1.0f, 0.0f, 3.0f, 3.0f, 0.0f, 1.0f};
|
||||
|
||||
const Shape outputs_shape{1, 2, 6};
|
||||
const std::vector<float> outputs{
|
||||
1.0f, 3.0f, 6.0f, 9.0f, 9.0f, 0.0f, 3.0f, 6.0f, 4.0f, 11.0f, 1.0f, 3.0f};
|
||||
|
||||
GroupConvolutionBackPropDataTest(inputs,
|
||||
inputs_shape,
|
||||
filters,
|
||||
filter_shape,
|
||||
outputs,
|
||||
outputs_shape,
|
||||
strides,
|
||||
padding,
|
||||
dilations);
|
||||
}
|
||||
|
||||
NGRAPH_TEST(${BACKEND_NAME}, group_convolution_backprop_data_1D_2group_1batch_2_filters_2channel)
|
||||
{
|
||||
const Strides strides{1};
|
||||
const CoordinateDiff padding{0};
|
||||
const Strides dilations{1};
|
||||
|
||||
const Shape inputs_shape{1, 4, 4};
|
||||
const std::vector<float> inputs{1.0f, 3.0f, 3.0f, 0.0f,
|
||||
1.0f, 2.0f, -1.0f, -3.0f,
|
||||
-3.0f, 0.0f, 1.0f, 2.0f,
|
||||
0.0f, -2.0f, 3.0f, -1.0f};
|
||||
|
||||
const Shape filter_shape{2, 2, 1, 3};
|
||||
const std::vector<float> filters{
|
||||
1.0f, 0.0f, 3.0f, 3.0f, 0.0f, 1.0f, -3.0f, 0.0f, 1.0f, 3.0f, 2.0f, -1.0f};
|
||||
|
||||
const Shape outputs_shape{1, 2, 6};
|
||||
const std::vector<float> outputs{
|
||||
4.0f, 9.0f, 4.0f, 2.0f, 8.0f, -3.0f, 9.0f, -6.0f, -1.0f, -1.0f, -4.0f, 3.0f};
|
||||
|
||||
GroupConvolutionBackPropDataTest(inputs,
|
||||
inputs_shape,
|
||||
filters,
|
||||
filter_shape,
|
||||
outputs,
|
||||
outputs_shape,
|
||||
strides,
|
||||
padding,
|
||||
dilations);
|
||||
}
|
||||
|
||||
NGRAPH_TEST(${BACKEND_NAME}, group_convolution_backprop_data_1D_2group_2batch_2channel)
|
||||
{
|
||||
const Strides strides{1};
|
||||
const CoordinateDiff padding{0};
|
||||
const Strides dilations{1};
|
||||
|
||||
const Shape inputs_shape{2, 2, 4};
|
||||
const std::vector<float> inputs{// -- batch 1 --
|
||||
1.0f, 3.0f, 0.0f, 1.0f,
|
||||
1.0f, 3.0f, 0.0f, 2.0f,
|
||||
// -- batch 2 --
|
||||
1.0f, 3.0f, 0.0f, 1.0f,
|
||||
1.0f, 3.0f, 0.0f, 2.0f};
|
||||
|
||||
const Shape filter_shape{2, 1, 1, 3};
|
||||
const std::vector<float> filters{1.0f, 0.0f, 3.0f, 3.0f, 0.0f, 1.0f};
|
||||
|
||||
const Shape outputs_shape{2, 2, 6};
|
||||
const std::vector<float> outputs{1.0f, 3.0f, 3.0f, 10.0f, 0.0f, 3.0f,
|
||||
3.0f, 9.0f, 1.0f, 9.0f, 0.0f, 2.0f,
|
||||
1.0f, 3.0f, 3.0f, 10.0f, 0.0f, 3.0f,
|
||||
3.0f, 9.0f, 1.0f, 9.0f, 0.0f, 2.0f};
|
||||
|
||||
GroupConvolutionBackPropDataTest(inputs,
|
||||
inputs_shape,
|
||||
filters,
|
||||
filter_shape,
|
||||
outputs,
|
||||
outputs_shape,
|
||||
strides,
|
||||
padding,
|
||||
dilations);
|
||||
}
|
||||
// clang-format on
|
||||
|
||||
// --------------------- 2D group convolution ------------------------------------------
|
||||
NGRAPH_TEST(${BACKEND_NAME}, group_convolution_backprop_data_2D)
|
||||
{
|
||||
const CoordinateDiff output_padding{1, 1};
|
||||
const CoordinateDiff pads_begin{1, 1};
|
||||
const CoordinateDiff pads_end{1, 1};
|
||||
Strides strides{2, 2};
|
||||
Strides dilations{1, 1};
|
||||
const op::PadType auto_pad{op::PadType::EXPLICIT};
|
||||
|
||||
auto data = make_shared<op::Parameter>(element::f32, Shape{1, 1, 3, 3});
|
||||
auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 1, 1, 3, 3});
|
||||
|
||||
auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
|
||||
data, filters, strides, pads_begin, pads_end, dilations, auto_pad, output_padding);
|
||||
|
||||
auto function = make_shared<Function>(NodeVector{gcbd}, ParameterVector{data, filters});
|
||||
auto test_case = test::TestCase<TestEngine>(function);
|
||||
|
||||
// X
|
||||
test_case.add_input<float>(vector<float>{0.16857791f,
|
||||
-0.15161794f,
|
||||
0.08540368f,
|
||||
0.1820628f,
|
||||
-0.21746576f,
|
||||
0.08245695f,
|
||||
0.1431433f,
|
||||
-0.43156421f,
|
||||
0.30591947f});
|
||||
// W
|
||||
test_case.add_input<float>({-0.06230065f,
|
||||
0.37932432f,
|
||||
-0.25388849f,
|
||||
0.33878803f,
|
||||
0.43709868f,
|
||||
-0.22477469f,
|
||||
0.04118127f,
|
||||
-0.44696793f,
|
||||
0.06373066f});
|
||||
test_case.add_expected_output(
|
||||
Shape{1, 1, 6, 6},
|
||||
vector<float>{
|
||||
0.07368518f, -0.08925839f, -0.06627201f, 0.06301362f, 0.03732984f, -0.01919658f,
|
||||
-0.00628807f, -0.02817563f, -0.01472169f, 0.04392925f, -0.00689478f, -0.01549204f,
|
||||
0.07957941f, -0.11459791f, -0.09505399f, 0.07681622f, 0.03604182f, -0.01853423f,
|
||||
-0.0270785f, -0.00680824f, -0.06650258f, 0.08004665f, 0.07918708f, -0.0724144f,
|
||||
0.06256775f, -0.17838378f, -0.18863615f, 0.20064656f, 0.133717f, -0.06876295f,
|
||||
-0.06398046f, -0.00864975f, 0.19289537f, -0.01490572f, -0.13673618f, 0.01949645f});
|
||||
test_case.run(DEFAULT_FLOAT_TOLERANCE_BITS + 1);
|
||||
}
|
||||
|
||||
NGRAPH_TEST(${BACKEND_NAME}, group_convolution_backprop_data_2D_output_shape)
|
||||
{
|
||||
Strides strides{1, 1};
|
||||
Strides dilations{1, 1};
|
||||
|
||||
auto data = make_shared<op::Parameter>(element::f32, Shape{1, 1, 1, 10});
|
||||
auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 1, 1, 1, 5});
|
||||
auto output_shape = op::Constant::create(element::i64, Shape{2}, {1, 14});
|
||||
|
||||
auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
|
||||
data, filters, output_shape, strides, dilations, op::PadType::SAME_UPPER);
|
||||
|
||||
auto function = make_shared<Function>(NodeVector{gcbd}, ParameterVector{data, filters});
|
||||
auto test_case = test::TestCase<TestEngine>(function);
|
||||
|
||||
// X
|
||||
test_case.add_input<float>(
|
||||
vector<float>{0.0f, 1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f, 9.0f});
|
||||
// W
|
||||
test_case.add_input<float>({1.0f, 2.0f, 3.0f, 2.0f, 1.0f});
|
||||
test_case.add_expected_output(Shape{1, 1, 1, 14},
|
||||
vector<float>{0.0f,
|
||||
1.0f,
|
||||
4.0f,
|
||||
10.0f,
|
||||
18.0f,
|
||||
27.0f,
|
||||
36.0f,
|
||||
45.0f,
|
||||
54.0f,
|
||||
63.0f,
|
||||
62.0f,
|
||||
50.0f,
|
||||
26.0f,
|
||||
9.0f});
|
||||
test_case.run();
|
||||
}
|
||||
@@ -44,6 +44,7 @@
#include <ngraph/runtime/reference/gelu.hpp>
#include <ngraph/runtime/reference/grn.hpp>
#include <ngraph/runtime/reference/group_convolution.hpp>
#include <ngraph/runtime/reference/group_convolution_backprop_data.hpp>
#include <ngraph/runtime/reference/gru_cell.hpp>
#include <ngraph/runtime/reference/hard_sigmoid.hpp>
#include <ngraph/runtime/reference/log_softmax.hpp>
@@ -265,6 +266,32 @@ namespace
            op->get_pads_end());
        return true;
    }

    template <element::Type_t ET>
    bool evaluate(const shared_ptr<op::v1::GroupConvolutionBackpropData>& op,
                  const HostTensorVector& outputs,
                  const HostTensorVector& inputs)
    {
        const auto in_data_ptr = inputs[0]->get_data_ptr<ET>();
        const auto filter_data_ptr = inputs[1]->get_data_ptr<ET>();
        const auto out_data_ptr = outputs[0]->get_data_ptr<ET>();
        const auto in_shape = inputs[0]->get_shape();
        const auto filter_shape = inputs[1]->get_shape();
        const auto out_shape = outputs[0]->get_shape();
        runtime::reference::group_convolution_backprop_data<
            typename element_type_traits<ET>::value_type>(in_data_ptr,
                                                          filter_data_ptr,
                                                          out_data_ptr,
                                                          in_shape,
                                                          filter_shape,
                                                          out_shape,
                                                          op->get_strides(),
                                                          op->get_dilations(),
                                                          op->get_pads_begin(),
                                                          op->get_pads_end());
        return true;
    }

    namespace cum_sum_v0
    {
        template <element::Type_t t1, element::Type_t t2>
@@ -35,44 +35,6 @@ runtime::interpreter::INTExecutable::INTExecutable(const shared_ptr<Function>& f
    , m_performance_counters_enabled{enable_performance_collection}
{
    m_function = clone_function(*function);
    for (const auto& node : m_function->get_ordered_ops())
    {
        // TODO: WA because of references mismatch for the operation
        if (is_type<op::v1::GroupConvolutionBackpropData>(node))
        {
            auto gr_conv_bp_data = dynamic_pointer_cast<op::v1::GroupConvolutionBackpropData>(node);
            auto num_groups = gr_conv_bp_data->input_value(1).get_shape()[0];
            auto split_filter_axis = std::make_shared<op::Constant>(
                ngraph::element::Type_t::i64, ngraph::Shape{}, std::vector<uint64_t>{0});
            auto sliced_filter = std::make_shared<op::v1::Split>(
                gr_conv_bp_data->input_value(1), split_filter_axis, num_groups);
            auto split_data_axis = std::make_shared<op::Constant>(
                ngraph::element::Type_t::i64, ngraph::Shape{}, std::vector<uint64_t>{1});
            auto sliced_data = std::make_shared<op::v1::Split>(
                gr_conv_bp_data->input_value(0), split_data_axis, num_groups);

            NodeVector convs;
            auto squeeze_filter_axis = std::make_shared<op::Constant>(
                ngraph::element::Type_t::i64, ngraph::Shape{}, std::vector<uint64_t>{0});
            for (size_t i = 0; i < num_groups; ++i)
            {
                auto squeezed_filter = std::make_shared<op::v0::Squeeze>(sliced_filter->output(i),
                                                                         squeeze_filter_axis);
                auto conv = std::make_shared<op::v1::ConvolutionBackpropData>(
                    sliced_data->output(i),
                    squeezed_filter,
                    gr_conv_bp_data->get_strides(),
                    gr_conv_bp_data->get_pads_begin(),
                    gr_conv_bp_data->get_pads_end(),
                    gr_conv_bp_data->get_dilations(),
                    gr_conv_bp_data->get_auto_pad(),
                    gr_conv_bp_data->get_output_padding());
                convs.push_back(conv);
            }
            auto concat = std::make_shared<op::Concat>(convs, 1);
            replace_node(node, concat);
        }
    }
    for (auto node : m_function->get_ordered_ops())
    {
        m_nodes.push_back(node);
@@ -53,6 +53,7 @@ NGRAPH_OP(ConvertLike, op::v1)
 NGRAPH_OP(Convolution, ngraph::op::v1)
 NGRAPH_OP(ConvolutionBackpropData, ngraph::op::v1)
 NGRAPH_OP(GroupConvolution, ngraph::op::v1)
+NGRAPH_OP(GroupConvolutionBackpropData, ngraph::op::v1)
 NGRAPH_OP(LessEqual, op::v1)
 NGRAPH_OP(LogicalAnd, op::v1)
 NGRAPH_OP(LogicalOr, op::v1)
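This opset table is an X-macro list: every file that includes it supplies its own `NGRAPH_OP` definition and gets one expansion per registered op, which is how the new entry makes the evaluator above reachable. A generic, self-contained illustration of the pattern (not the actual ngraph dispatcher):

```
#include <cstdio>

// The "table": one X(...) entry per op, mirroring the NGRAPH_OP list above.
#define OP_LIST(X)              \
    X(GroupConvolution)         \
    X(GroupConvolutionBackpropData)

int main()
{
    // Each consumer defines the macro to generate whatever per-op code it needs.
#define PRINT_OP(NAME) std::printf("registered: %s\n", #NAME);
    OP_LIST(PRINT_OP)
#undef PRINT_OP
}
```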
@@ -23,12 +23,12 @@ using namespace ngraph;
 
 TEST(type_prop, group_conv_backprop_data)
 {
-    // GROUPS x C_IN x C_OUT x kH x kW
-    const auto weights = make_shared<op::Parameter>(element::f32, Shape{2, 8, 2, 3, 3});
-    // N x C_IN * GROUPS x H x W
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters = make_shared<op::Parameter>(element::f32, Shape{2, 8, 2, 3, 3});
+    // data batch shape: [N, C_IN * GROUPS, H, W]
     const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 6, 6});
     const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
-        data, weights, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
+        data, filters, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
     EXPECT_EQ(gcbd->get_element_type(), element::f32);
     EXPECT_EQ(gcbd->get_output_shape(0), (Shape{1, 4, 8, 8}));
     EXPECT_EQ(gcbd->get_strides(), (Strides{1, 1}));
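A quick check of the expected `Shape{1, 4, 8, 8}`: with unit strides and dilations and zero pads, each 6-pixel spatial axis grows by kernel size minus one, and the channel count is the number of groups times the per-group output channels:

```
Y = 1 \cdot (6 - 1) + (3 - 1) \cdot 1 + 1 = 8, \qquad C = GROUPS \cdot C_{OUT} = 2 \cdot 2 = 4
```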
@@ -39,16 +39,15 @@ TEST(type_prop, group_conv_backprop_data)
     EXPECT_EQ(gcbd->get_auto_pad(), op::PadType::EXPLICIT);
 }
 
-TEST(type_prop, group_conv_backprop_data_output_shape)
+TEST(type_prop, group_conv_backprop_data_output_shape_as_const)
 {
-    // N x C_IN * GROUPS x H x W
+    // data batch shape: [N, C_IN * GROUPS, H, W]
     const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
-    // GROUPS x C_IN x C_OUT x kH x kW
-    const auto weights = make_shared<op::Parameter>(element::f32, Shape{1, 16, 2, 3, 3});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 16, 2, 3, 3});
     const auto output_shape = op::Constant::create(element::i64, Shape{2}, {3, 3});
 
     const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
-        data, weights, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
     EXPECT_EQ(gcbd->get_element_type(), element::f32);
     EXPECT_EQ(gcbd->get_output_shape(0), (Shape{1, 2, 3, 3}));
     EXPECT_EQ(gcbd->get_strides(), (Strides{1, 1}));
@@ -59,64 +58,476 @@ TEST(type_prop, group_conv_backprop_data_output_shape)
     EXPECT_EQ(gcbd->get_auto_pad(), op::PadType::SAME_UPPER);
 }
 
-TEST(type_prop, group_conv_bprop_data_v1_output_partial_shape_dynamic_static_rank)
+TEST(type_prop, group_conv_backprop_data_output_shape_as_param)
 {
-    PartialShape shape_filter{4, 5, 2, 3, 3};
-    auto filters = make_shared<op::Parameter>(element::f32, shape_filter);
-    PartialShape shape_data{Dimension(), 20, 224, 224};
-    auto data = make_shared<op::Parameter>(element::f32, shape_data);
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 16, 2, 3, 3});
+    const auto output_shape = make_shared<op::Parameter>(element::i64, Shape{2});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    EXPECT_EQ(gcbd->get_element_type(), element::f32);
+    EXPECT_EQ(gcbd->get_auto_pad(), op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{1, 2, Dimension::dynamic(), Dimension::dynamic()}));
 }
 
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_static_ranks_shape_inference_1)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    const auto data = make_shared<op::Parameter>(
+        element::f32, PartialShape{Dimension::dynamic(), Dimension::dynamic(), 5, 5});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 16, 2, 3, 3});
+    const auto output_shape = op::Constant::create(element::i64, Shape{2}, {3, 3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(
+        gcbd->get_output_partial_shape(0).same_scheme(PartialShape{Dimension::dynamic(), 2, 3, 3}));
+}
+
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_static_ranks_shape_inference_2)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    const auto data =
+        make_shared<op::Parameter>(element::f32, PartialShape{Dimension::dynamic(), 16, 5, 5});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters =
+        make_shared<op::Parameter>(element::f32, PartialShape{Dimension::dynamic(), 16, 2, 3, 3});
+    const auto output_shape = op::Constant::create(element::i64, Shape{2}, {3, 3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(
+        gcbd->get_output_partial_shape(0).same_scheme(PartialShape{Dimension::dynamic(), 2, 3, 3}));
+}
+
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_static_ranks_shape_inference_3)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    const auto data =
+        make_shared<op::Parameter>(element::f32, PartialShape{Dimension::dynamic(), 16, 5, 5});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters = make_shared<op::Parameter>(
+        element::f32, PartialShape{Dimension::dynamic(), Dimension::dynamic(), 2, 3, 3});
+    const auto output_shape = op::Constant::create(element::i64, Shape{2}, {3, 3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{Dimension::dynamic(), Dimension::dynamic(), 3, 3}));
+}
+
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_static_ranks_shape_inference_4)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    const auto data =
+        make_shared<op::Parameter>(element::f32, PartialShape{1, Dimension::dynamic(), 5, 5});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters =
+        make_shared<op::Parameter>(element::f32, PartialShape{Dimension::dynamic(), 16, 2, 3, 3});
+    const auto output_shape = op::Constant::create(element::i64, Shape{2}, {3, 3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(
+        gcbd->get_output_partial_shape(0).same_scheme(PartialShape{1, Dimension::dynamic(), 3, 3}));
+}
+
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_static_ranks_shape_inference_5)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    const auto data =
+        make_shared<op::Parameter>(element::f32, PartialShape{Dimension::dynamic(), 16, 5, 5});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters = make_shared<op::Parameter>(
+        element::f32, PartialShape{Dimension::dynamic(), 16, Dimension::dynamic(), 3, 3});
+    const auto output_shape = op::Constant::create(element::i64, Shape{2}, {3, 3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{Dimension::dynamic(), Dimension::dynamic(), 3, 3}));
+}
+
+TEST(type_prop, group_conv_backprop_data_dyn_static_ranks_shape_inference_1)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    auto data = make_shared<op::Parameter>(
+        element::f32, PartialShape{Dimension::dynamic(), Dimension::dynamic(), 224, 224});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    auto filters = make_shared<op::Parameter>(element::f32, PartialShape{4, 5, 2, 3, 3});
     auto strides = Strides{2, 2};
     auto dilations = Strides{1, 1};
     auto padding_begin = CoordinateDiff{1, 1};
     auto padding_end = CoordinateDiff{1, 1};
 
-    auto conv1 = make_shared<op::v1::GroupConvolutionBackpropData>(
+    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
         data, filters, strides, padding_begin, padding_end, dilations);
 
-    ASSERT_TRUE(conv1->get_output_partial_shape(0).rank().is_static());
-    ASSERT_TRUE(conv1->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
-    ASSERT_TRUE(conv1->get_output_partial_shape(0).is_dynamic());
-    ASSERT_TRUE(conv1->get_output_partial_shape(0).same_scheme(
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
         PartialShape{Dimension::dynamic(), 8, 447, 447}));
 }
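Where the `447` in these assertions comes from: stride 2, 3x3 kernel, unit dilation and symmetric pads of 1 on a 224-pixel axis, with 4 groups of 2 output channels each:

```
Y = 2 \cdot (224 - 1) + (3 - 1) \cdot 1 + 1 - 1 - 1 = 447, \qquad C = GROUPS \cdot C_{OUT} = 4 \cdot 2 = 8
```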
+TEST(type_prop, group_conv_backprop_data_dyn_static_ranks_shape_inference_2)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    auto data = make_shared<op::Parameter>(element::f32, PartialShape{1, 20, 224, 224});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    auto filters =
+        make_shared<op::Parameter>(element::f32, PartialShape{Dimension::dynamic(), 5, 2, 3, 3});
+    auto strides = Strides{2, 2};
+    auto dilations = Strides{1, 1};
+    auto padding_begin = CoordinateDiff{1, 1};
+    auto padding_end = CoordinateDiff{1, 1};
+
+    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, strides, padding_begin, padding_end, dilations);
+
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(PartialShape{1, 8, 447, 447}));
+}
+
+TEST(type_prop, group_conv_backprop_data_dyn_static_ranks_shape_inference_3)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    auto data =
+        make_shared<op::Parameter>(element::f32, PartialShape{Dimension::dynamic(), 20, 224, 224});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    auto filters = make_shared<op::Parameter>(
+        element::f32, PartialShape{Dimension::dynamic(), Dimension::dynamic(), 2, 3, 3});
+    auto strides = Strides{2, 2};
+    auto dilations = Strides{1, 1};
+    auto padding_begin = CoordinateDiff{1, 1};
+    auto padding_end = CoordinateDiff{1, 1};
+
+    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, strides, padding_begin, padding_end, dilations);
+
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{Dimension::dynamic(), Dimension::dynamic(), 447, 447}));
+}
+
+TEST(type_prop, group_conv_backprop_data_dyn_static_ranks_shape_inference_4)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    auto data =
+        make_shared<op::Parameter>(element::f32, PartialShape{1, Dimension::dynamic(), 224, 224});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    auto filters =
+        make_shared<op::Parameter>(element::f32, PartialShape{Dimension::dynamic(), 5, 2, 3, 3});
+    auto strides = Strides{2, 2};
+    auto dilations = Strides{1, 1};
+    auto padding_begin = CoordinateDiff{1, 1};
+    auto padding_end = CoordinateDiff{1, 1};
+
+    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, strides, padding_begin, padding_end, dilations);
+
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{1, Dimension::dynamic(), 447, 447}));
+}
+
+TEST(type_prop, group_conv_backprop_data_dyn_static_ranks_shape_inference_5)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    auto data = make_shared<op::Parameter>(element::f32, PartialShape{1, 20, 224, 224});
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    auto filters = make_shared<op::Parameter>(
+        element::f32, PartialShape{Dimension::dynamic(), Dimension::dynamic(), 2, 3, 3});
+    auto strides = Strides{2, 2};
+    auto dilations = Strides{1, 1};
+    auto padding_begin = CoordinateDiff{1, 1};
+    auto padding_end = CoordinateDiff{1, 1};
+
+    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, strides, padding_begin, padding_end, dilations);
+
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{1, Dimension::dynamic(), 447, 447}));
+}
+
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_data_batch)
+{
+    const auto data = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    const auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 16, 2, 3, 3});
+    const auto output_shape = op::Constant::create(element::i64, Shape{2}, {3, 3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(
+        gcbd->get_output_partial_shape(0).same_scheme(PartialShape{Dimension::dynamic(), 2, 3, 3}));
+}
+
+TEST(type_prop, group_conv_backprop_data_shape_dyn_data_batch)
+{
+    auto data = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+    auto filters = make_shared<op::Parameter>(element::f32, PartialShape{4, 5, 2, 3, 3});
+    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
+
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{Dimension::dynamic(), 8, Dimension::dynamic(), Dimension::dynamic()}));
+}
+
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_filters)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    const auto data = make_shared<op::Parameter>(
+        element::f32, PartialShape{1, 16, Dimension::dynamic(), Dimension::dynamic()});
+    const auto filters = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    const auto output_shape = op::Constant::create(element::i64, Shape{2}, {3, 3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(
+        gcbd->get_output_partial_shape(0).same_scheme(PartialShape{1, Dimension::dynamic(), 3, 3}));
+}
+
+TEST(type_prop, group_conv_backprop_data_shape_dyn_filters)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    auto data = make_shared<op::Parameter>(element::f32, PartialShape{1, 8, 224, 224});
+    auto filters = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
+
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{4}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{1, Dimension::dynamic(), Dimension::dynamic(), Dimension::dynamic()}));
+}
+
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_data_and_filters_1)
+{
+    // data batch shape: [N, C_IN * GROUPS, H, W]
+    const auto data = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    const auto filters = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    const auto output_shape = op::Constant::create(element::i64, Shape{3}, {3, 3, 3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_static());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().same_scheme(Rank{5}));
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(
+        PartialShape{Dimension::dynamic(), Dimension::dynamic(), 3, 3, 3}));
+}
+
+TEST(type_prop, group_conv_backprop_data_with_output_shape_dyn_data_and_filters_2)
+{
+    const auto data = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    const auto filters = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    const auto output_shape = make_shared<op::Parameter>(element::i64, Shape{3});
+    const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(PartialShape::dynamic()));
+}
+
+TEST(type_prop, group_conv_backprop_data_dyn_data_and_filters)
+{
+    auto data = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    auto filters = make_shared<op::Parameter>(element::f32, PartialShape::dynamic());
+    auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+        data, filters, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
+
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).rank().is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).is_dynamic());
+    ASSERT_TRUE(gcbd->get_output_partial_shape(0).same_scheme(PartialShape::dynamic()));
+}
+
+TEST(type_prop, group_conv_backprop_data_invalid_element_types)
+{
+    try
+    {
+        // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{2, 8, 2, 3, 3});
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f16, Shape{1, 16, 6, 6});
+        const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+            data, filters, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
+        // data and filters should be of same element type
+        FAIL() << "Incompatible element types not detected";
+    }
+    catch (const NodeValidationFailure& error)
+    {
+        EXPECT_HAS_SUBSTRING(error.what(),
+                             std::string("Element types for data batch and filters do not match"));
+    }
+    catch (...)
+    {
+        FAIL() << "Element types validation check of inputs failed for unexpected reason";
+    }
+
+    try
+    {
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
+        // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 16, 2, 3, 3});
+        const auto output_shape = op::Constant::create(element::f16, Shape{2}, {3, 3});
+        const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+            data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+        // output shape input element type must be of integer type
+        FAIL() << "Incompatible element types not detected";
+    }
+    catch (const NodeValidationFailure& error)
+    {
+        EXPECT_HAS_SUBSTRING(error.what(),
+                             "Element type for output shape should be of integer type");
+    }
+    catch (...)
+    {
+        FAIL() << "Element types validation check of inputs failed for unexpected reason";
+    }
+}
+
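The first failing case above is a plain precision mismatch between the two inputs. If mixed precisions are intended rather than accidental, the inputs can be aligned explicitly before the node is built; a hypothetical fix-up using the standard `op::Convert` op:

```
const auto filters = make_shared<op::Parameter>(element::f32, Shape{2, 8, 2, 3, 3});
const auto data_f16 = make_shared<op::Parameter>(element::f16, Shape{1, 16, 6, 6});
// Convert the f16 data up to f32 so both inputs agree before construction.
const auto data_f32 = make_shared<op::Convert>(data_f16, element::f32);
const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
    data_f32, filters, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
```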
+TEST(type_prop, group_conv_backprop_data_invalid_input_ranks)
+{
+    // data partial shape provided is rank 4 (Conv2D)
+    // filter partial shape provided is rank 6 (Conv3D)
+    try
+    {
+        const auto filters = make_shared<op::Parameter>(
+            element::f32, PartialShape{2, 8, 2, 3, 3, Dimension::dynamic()});
+        const auto data = make_shared<op::Parameter>(element::f32, PartialShape{1, 16, 6, 6});
+        const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+            data, filters, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
+        // data and weight have incompatible ranks
+        FAIL() << "Incompatible input ranks not detected";
+    }
+    catch (const NodeValidationFailure& error)
+    {
+        EXPECT_HAS_SUBSTRING(error.what(),
+                             std::string("Shapes for data batch and filters do not match."));
+    }
+    catch (...)
+    {
+        FAIL() << "Rank validation check of inputs failed for unexpected reason";
+    }
+
+    // data partial shape provided is rank 5 (Conv3D)
+    // filter partial shape provided is rank 5 (Conv2D)
+    try
+    {
+        const auto filters = make_shared<op::Parameter>(element::f32, PartialShape{2, 8, 2, 3, 3});
+        const auto data = make_shared<op::Parameter>(
+            element::f32, PartialShape{1, Dimension::dynamic(), 16, 6, 6});
+        const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+            data, filters, Strides{}, CoordinateDiff{}, CoordinateDiff{}, Strides{});
+        // data and weight have incompatible ranks
+        FAIL() << "Incompatible input ranks not detected";
+    }
+    catch (const NodeValidationFailure& error)
+    {
+        EXPECT_HAS_SUBSTRING(error.what(),
+                             std::string("Shapes for data batch and filters do not match."));
+    }
+    catch (...)
+    {
+        FAIL() << "Rank validation check of inputs failed for unexpected reason";
+    }
+
+    try
+    {
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 16, 2, 3, 3});
+        const auto output_shape = op::Constant::create(element::i64, Shape{2, 1}, {3, 3});
+        const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+            data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+        // Output shape optional input must be of rank 1
+        FAIL() << "Incompatible output shape input rank not detected.";
+    }
+    catch (const NodeValidationFailure& error)
+    {
+        EXPECT_HAS_SUBSTRING(error.what(),
+                             std::string("Spatial shape of output input must be of rank 1"));
+    }
+    catch (...)
+    {
+        FAIL() << "Rank validation check of inputs failed for unexpected reason";
+    }
+}
+
 TEST(type_prop, group_conv_backprop_data_invalid_params)
 {
-    // GROUPS x C_IN x C_OUT x kH x kW
-    auto weights = make_shared<op::Parameter>(element::f32, Shape{21, 16, 20, 3, 3});
-    // N x C_IN * GROUPS x H x W
-    const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
-
     try
     {
+        // filter shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{21, 16, 20, 3, 3});
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
         const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(data,
-                                                                            weights,
+                                                                            filters,
                                                                             Strides{1, 1},
                                                                             CoordinateDiff{2, 2},
                                                                             CoordinateDiff{2, 2},
                                                                             Strides{1, 1});
-        EXPECT_FALSE(gcbd.get()) << "GroupConvolutionBackpropData:v1 validation did not work. "
-                                    "Node was created with incorrect params.";
+        // data batch shape does not have correct dimension C_IN * GROUPS
+        FAIL() << "Incompatible input shapes not detected.";
     }
     catch (const NodeValidationFailure& error)
     {
         EXPECT_HAS_SUBSTRING(error.what(),
                              std::string("Number of data channels not a multiple of group size."));
     }
-
-    // GROUPS x C_IN x C_OUT x kH x kW
-    weights = make_shared<op::Parameter>(element::f32, Shape{4, 16, 20, 3, 3});
+    catch (...)
+    {
+        FAIL() << "Input shapes validation check failed for unexpected reason.";
+    }
 
     try
     {
+        // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{4, 16, 20, 3, 3});
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
         const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(data,
-                                                                            weights,
+                                                                            filters,
                                                                             Strides{1, 1},
                                                                             CoordinateDiff{2, 2},
                                                                             CoordinateDiff{2, 2},
                                                                             Strides{1, 1});
-        EXPECT_FALSE(gcbd.get()) << "GroupConvolutionBackpropData:v1 validation did not work. "
-                                    "Node was created with incorrect params.";
+        // filter shape specifies GROUPS = 4 and C_IN = 16, while data batch shape specifies
+        // dimension C_IN * GROUPS = 16
+        FAIL() << "Incompatible input shapes not detected.";
     }
     catch (const NodeValidationFailure& error)
     {
@@ -124,16 +535,42 @@ TEST(type_prop, group_conv_backprop_data_invalid_params)
                              std::string("Data second dimension has incompatible value "
                                          "with number of input channels."));
     }
-
-    // GROUPS x C_IN x C_OUT x kH x kW
-    weights = make_shared<op::Parameter>(element::f32, Shape{4, 4, 20, 3, 3});
+    catch (...)
+    {
+        FAIL() << "Input shapes validation check failed for unexpected reason.";
+    }
 
     try
     {
+        // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{2, 8, 2, 3, 3});
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 6, 6});
         const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
-            data, weights, Strides{1}, CoordinateDiff{2, 2}, CoordinateDiff{2, 2}, Strides{1, 1});
-        EXPECT_FALSE(gcbd.get()) << "GroupConvolutionBackpropData:v1 validation did not work. "
-                                    "Node was created with incorrect params.";
+            data, filters, Strides{}, CoordinateDiff{1}, CoordinateDiff{1, 1}, Strides{});
+        // pads_begin and pads_end do not match spatial dimensions
+        FAIL() << "Incompatible pads number of spatial dimensions not detected.";
     }
     catch (const NodeValidationFailure& error)
     {
         EXPECT_HAS_SUBSTRING(error.what(),
                              "Pads should be defined for all and only spatial features.");
     }
+    catch (...)
+    {
+        FAIL() << "Pads validation check failed for unexpected reason.";
+    }
 
     try
     {
+        // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{4, 4, 20, 3, 3});
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
+        const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+            data, filters, Strides{1}, CoordinateDiff{2, 2}, CoordinateDiff{2, 2}, Strides{1, 1});
+        // Strides have incompatible number of spatial dimensions
+        FAIL() << "Incompatible stride number of spatial dimensions not detected.";
+    }
     catch (const NodeValidationFailure& error)
     {
@@ -141,17 +578,25 @@ TEST(type_prop, group_conv_backprop_data_invalid_params)
             error.what(),
             std::string("Strides should be defined for all and only spatial features."));
     }
+    catch (...)
+    {
+        FAIL() << "Strides validation check failed for unexpected reason.";
+    }
 
     try
     {
+        // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{4, 4, 20, 3, 3});
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
         const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(data,
-                                                                            weights,
+                                                                            filters,
                                                                             Strides{1, 1},
                                                                             CoordinateDiff{2, 2},
                                                                             CoordinateDiff{2, 2},
                                                                             Strides{1, 1, 1});
-        EXPECT_FALSE(gcbd.get()) << "GroupConvolutionBackpropData:v1 validation did not work. "
-                                    "Node was created with incorrect params.";
+        // Dilations have incompatible number of spatial dimensions
+        FAIL() << "Incompatible dilations number of spatial dimensions not detected.";
     }
     catch (const NodeValidationFailure& error)
     {
@@ -159,19 +604,27 @@ TEST(type_prop, group_conv_backprop_data_invalid_params)
             error.what(),
             std::string("Dilations should be defined for all and only spatial features."));
     }
+    catch (...)
+    {
+        FAIL() << "Dilations validation check failed for unexpected reason.";
+    }
 
     try
    {
+        // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{4, 4, 20, 3, 3});
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
         const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(data,
-                                                                            weights,
+                                                                            filters,
                                                                             Strides{1, 1},
                                                                             CoordinateDiff{2, 2},
                                                                             CoordinateDiff{2, 2},
                                                                             Strides{1, 1},
                                                                             op::PadType::EXPLICIT,
                                                                             CoordinateDiff{0});
-        EXPECT_FALSE(gcbd.get()) << "GroupConvolutionBackpropData:v1 validation did not work. "
-                                    "Node was created with incorrect params.";
+        // Output padding has incompatible number of spatial dimensions
+        FAIL() << "Incompatible output padding number of spatial dimensions not detected.";
     }
     catch (const NodeValidationFailure& error)
     {
@@ -179,4 +632,30 @@ TEST(type_prop, group_conv_backprop_data_invalid_params)
             error.what(),
             std::string("Output padding should be defined for all and only spatial features."));
     }
+    catch (...)
+    {
+        FAIL() << "Output padding validation check failed for unexpected reason.";
+    }
+
+    try
+    {
+        // filters shape: [GROUPS, C_IN, C_OUT, kH, kW]
+        const auto filters = make_shared<op::Parameter>(element::f32, Shape{1, 16, 2, 3, 3});
+        // data batch shape: [N, C_IN * GROUPS, H, W]
+        const auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 5, 5});
+        const auto output_shape = op::Constant::create(element::i64, Shape{3}, {3, 3, 3});
+        const auto gcbd = make_shared<op::v1::GroupConvolutionBackpropData>(
+            data, filters, output_shape, Strides{}, Strides{}, op::PadType::SAME_UPPER);
+        FAIL() << "Incompatible output shape optional input not detected";
+    }
+    catch (const NodeValidationFailure& error)
+    {
+        EXPECT_HAS_SUBSTRING(
+            error.what(),
+            std::string("Output shape should be specified only and for all spatial dimensions."));
+    }
+    catch (...)
+    {
+        FAIL() << "Output shape validation check failed for unexpected reason.";
+    }
 }