Avgpool revise (#3674)

* Update the spec

* add unit-tests

* add avgPool unit-tests to CMakeLists.txt

* Remove second constructor and change the first one to take default values for rounding_type and pad_type

* add type_prop test for default values

* add 5d input single layer test instances

* add type_prop tests

* Require input to be 4D or 5D

* add validation check for pads size

* Update a few tests to take 5D input instead of 6D

* Update validate_and_infer_types method

* Update infer_batched_pooling_forward and try_apply_auto_padding methods

* Update auto_padding_spatial_dims_dynamic type_prop test for binary_conv, conv, deformable_conv, group_conv and max_pool

* style-apply

* add validation check for kernel size

* add xfail for avgpool python backend test

* style-apply

* remove avgpool backend test from xfail list

* Update spec

* Allow the 3D input

* Update type_prop test with 3D input

* style-apply

* Remove xfail_issue_38709

* fix typo

* Update spec

* Update outputs section in spec

* Update spec

* fix typo

* clean file

* Update detailed description and fix xml examples

* fix exclude-type typo

* fix typo in outputs section
Piotr Szmelczynski 2021-01-13 22:23:26 +01:00 committed by GitHub
parent b9447dfedf
commit 9ddbfac6b1
13 changed files with 545 additions and 202 deletions

View File

@@ -6,7 +6,11 @@
**Short description**: [Reference](http://caffe.berkeleyvision.org/tutorial/layers/pooling.html)
**Detailed description**: [Reference](http://cs231n.github.io/convolutional-networks/#pool)
**Detailed description**: [Reference](http://cs231n.github.io/convolutional-networks/#pool). Average Pool is a pooling operation that performs down-sampling by dividing the input into pooling regions of the size specified by the *kernel* attribute and computing the average value of each region. The output shape is calculated as follows:
`H_out = (H + pads_begin[0] + pads_end[0] - kernel[0]) / strides[0] + 1`
`W_out = (W + pads_begin[1] + pads_end[1] - kernel[1]) / strides[1] + 1`
`D_out = (D + pads_begin[2] + pads_end[2] - kernel[2]) / strides[2] + 1`
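As a quick sanity check, here is a minimal C++ sketch of how these formulas evaluate for one spatial axis. It is our illustration, not part of the spec or this commit; the name `pooled_dim` and the `ceil_mode` flag (mirroring *rounding_type*) are assumptions.

```cpp
#include <cmath>
#include <cstdint>

// One spatial output dimension of AvgPool, per the formulas above.
// ceil_mode = false corresponds to rounding_type "floor", true to "ceil".
int64_t pooled_dim(int64_t in, int64_t pad_begin, int64_t pad_end,
                   int64_t kernel, int64_t stride, bool ceil_mode)
{
    const double q =
        static_cast<double>(in + pad_begin + pad_end - kernel) / stride;
    return static_cast<int64_t>(ceil_mode ? std::ceil(q) : std::floor(q)) + 1;
}

// pooled_dim(32, 1, 1, 5, 3, false) == 10 and pooled_dim(32, 1, 1, 5, 2, false) == 15,
// matching the "explicit" XML examples below.
```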
**Attributes**: *Pooling* attributes are specified in the `data` node, which is a child of the layer node.
@@ -44,9 +48,9 @@
* **Default value**: None
* **Required**: *yes*
* *exclude-pad*
* *exclude_pad*
* **Description**: *exclude-pad* is a type of pooling strategy for values in the padding area. For example, if *exclude-pad* is "true", zero-values in the padding are not used.
* **Description**: *exclude_pad* is a type of pooling strategy for values in the padding area. For example, if *exclude_pad* is "true", zero-values that came from padding are not included in the averaging calculation.
* **Range of values**: true or false
* **Type**: boolean
* **Default value**: None
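A sketch of the averaging rule this attribute controls; `window_average` and its mask argument are our illustrative names, not the operation's API.

```cpp
#include <cstddef>
#include <vector>

// Average one pooling window. 'vals' holds the window taps (padding taps are 0),
// 'is_pad' marks the taps that fall in the padding area.
float window_average(const std::vector<float>& vals,
                     const std::vector<bool>& is_pad,
                     bool exclude_pad)
{
    float sum = 0.0f;
    int count = 0;
    for (size_t i = 0; i < vals.size(); ++i)
    {
        if (exclude_pad && is_pad[i])
            continue;   // skip padding taps entirely
        sum += vals[i]; // a padding tap contributes 0 but still raises the count
        ++count;
    }
    return count != 0 ? sum / count : 0.0f;
}

// For taps {1, 0, 0, 0} where the three zeros come from padding:
// exclude_pad == true  -> 1.0
// exclude_pad == false -> 0.25 (cf. the avg_pool_2d_same_lower backend test below)
```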
@@ -60,6 +64,7 @@
* *floor*
* **Type**: string
* **Default value**: *floor*
* **Required**: *no*
* *auto_pad*
@@ -68,13 +73,16 @@
* *same_upper (same_lower)* - the input is padded to match the output size. If the total padding value is odd, the extra padding is added at the end (at the beginning).
* *valid* - do not use padding.
* **Type**: string
* **Default value**: None
* **Default value**: *explicit*
* **Required**: *no*
* **Note**: *pads_begin* and *pads_end* attributes are ignored when *auto_pad* is specified.
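A sketch of how the *same_upper*/*same_lower* padding can be derived for one spatial axis. This is our simplification of the `try_apply_auto_padding` helper updated later in this commit, assuming a dilation of 1.

```cpp
#include <algorithm>
#include <cstdint>

// Derive begin/end padding for one spatial axis under same_upper/same_lower.
void same_padding(int64_t in, int64_t kernel, int64_t stride, bool same_upper,
                  int64_t& pad_begin, int64_t& pad_end)
{
    const int64_t out = (in + stride - 1) / stride; // ceil(in / stride)
    const int64_t needed =
        std::max<int64_t>(0, (out - 1) * stride + kernel - in);
    const int64_t lhs = needed / 2;   // the smaller half...
    const int64_t rhs = needed - lhs; // ...and the half holding the odd extra
    pad_begin = same_upper ? lhs : rhs;
    pad_end = same_upper ? rhs : lhs;
}

// With int64_t b = 0, e = 0: same_padding(32, 2, 1, /*same_upper=*/false, b, e)
// leaves b == 1, e == 0, matching the avg_pool_auto_padding type_prop test
// updated in this commit.
```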
**Inputs**:
* **1**: 4D or 5D input tensor. Required.
* **1**: 3D, 4D or 5D input tensor. Required.
**Outputs**:
* **1**: The input shape can be `[N,C,H]`, `[N,C,H,W]` or `[N,C,H,W,D]`; the corresponding output shape is `[N,C,H_out]`, `[N,C,H_out,W_out]` or `[N,C,H_out,W_out,D_out]`.
**Mathematical Formulation**
@@ -82,12 +90,106 @@
output_{j} = \frac{\sum_{i = 1}^{n} x_{i}}{n}
\f]
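For instance, a `2,2` kernel window covering the values 1, 2, 4 and 5 gives \f[ output = \frac{1 + 2 + 4 + 5}{4} = 3 \f] which is the first element produced by the `avg_pool_2d_floor` backend test added in this commit.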
**Example**
**Examples**
```xml
<layer ... type="AvgPool" ... >
<data auto_pad="same_upper" exclude-pad="true" kernel="3,3" pads_begin="0,0" pads_end="1,1" strides="2,2"/>
<input> ... </input>
<output> ... </output>
<data auto_pad="same_upper" exclude_pad="true" kernel="2,2" pads_begin="0,0" pads_end="1,1" strides="2,2"/>
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>32</dim>
<dim>32</dim>
</port>
</input>
<output>
<port id="1">
<dim>1</dim>
<dim>3</dim>
<dim>16</dim>
<dim>16</dim>
</port>
</output>
</layer>
<layer ... type="AvgPool" ... >
<data auto_pad="same_upper" exclude_pad="false" kernel="5,5" pads_begin="0,0" pads_end="1,1" strides="2,2"/>
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>32</dim>
<dim>32</dim>
</port>
</input>
<output>
<port id="1">
<dim>1</dim>
<dim>3</dim>
<dim>16</dim>
<dim>16</dim>
</port>
</output>
</layer>
<layer ... type="AvgPool" ... >
<data auto_pad="explicit" exclude_pad="true" kernel="5,5" pads_begin="1,1" pads_end="1,1" strides="3,3"/>
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>32</dim>
<dim>32</dim>
</port>
</input>
<output>
<port id="1">
<dim>1</dim>
<dim>3</dim>
<dim>10</dim>
<dim>10</dim>
</port>
</output>
</layer>
<layer ... type="AvgPool" ... >
<data auto_pad="explicit" exclude_pad="false" kernel="5,5" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>32</dim>
<dim>32</dim>
</port>
</input>
<output>
<port id="1">
<dim>1</dim>
<dim>3</dim>
<dim>15</dim>
<dim>15</dim>
</port>
</output>
</layer>
<layer ... type="AvgPool" ... >
<data auto_pad="valid" exclude_pad="true" kernel="5,5" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>32</dim>
<dim>32</dim>
</port>
</input>
<output>
<port id="1">
<dim>1</dim>
<dim>3</dim>
<dim>14</dim>
<dim>14</dim>
</port>
</output>
</layer>
```

View File

@@ -31,6 +31,7 @@ const std::vector<std::vector<size_t >> strides = {{1, 1},
{1, 2}};
const std::vector<std::vector<size_t >> strides3D = {{1, 1, 1},
{2, 2, 2}};
const std::vector<std::vector<size_t >> stridess3D = {{2, 2, 2}};
const std::vector<std::vector<size_t >> padBegins = {{0, 0},
{0, 2}};
const std::vector<std::vector<size_t >> padBegins3D = {{0, 0, 0}};
@@ -277,6 +278,78 @@ INSTANTIATE_TEST_CASE_P(smoke_AvgPool_ExplicitPad_FloorRounding, PoolingLayerTes
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);
/* ========== Explicit Pad Floor Rounding 5D input ========== */
const auto avgPool_ExplicitPad_FloorRounding_5Dinput_Params = ::testing::Combine(
::testing::Values(ngraph::helpers::PoolingTypes::AVG),
::testing::ValuesIn(kernel3D),
::testing::ValuesIn(strides3D),
::testing::ValuesIn(padBegins3D),
::testing::ValuesIn(padEnds3D),
::testing::Values(ngraph::op::RoundingType::FLOOR),
::testing::Values(ngraph::op::PadType::EXPLICIT),
::testing::Values(true, false)
);
INSTANTIATE_TEST_CASE_P(smoke_AvgPool_ExplicitPad_FloorRounding_5Dinput, PoolingLayerTest,
::testing::Combine(
avgPool_ExplicitPad_FloorRounding_5Dinput_Params,
::testing::ValuesIn(netPrecisions),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(std::vector<size_t >({32, 32, 2, 2, 4})),
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);
/* ========== Same Upper Pad Floor Rounding 5D input ========== */
const auto avgPool_SameUpperPad_FloorRounding_5Dinput_Params = ::testing::Combine(
::testing::Values(ngraph::helpers::PoolingTypes::AVG),
::testing::ValuesIn(kernel3D),
::testing::ValuesIn(strides3D),
::testing::ValuesIn(padBegins3D),
::testing::ValuesIn(padEnds3D),
::testing::Values(ngraph::op::RoundingType::FLOOR),
::testing::Values(ngraph::op::PadType::SAME_UPPER),
::testing::Values(true)
);
INSTANTIATE_TEST_CASE_P(smoke_AvgPool_SameUpperPad_FloorRounding_5Dinput, PoolingLayerTest,
::testing::Combine(
avgPool_SameUpperPad_FloorRounding_5Dinput_Params,
::testing::ValuesIn(netPrecisions),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(std::vector<size_t >({32, 32, 2, 2, 4})),
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);
/* ========== Same Lower Pad Ceil Rounding 5D input ========== */
const auto avgPool_SameLowerPad_CeilRounding_5Dinput_Params = ::testing::Combine(
::testing::Values(ngraph::helpers::PoolingTypes::AVG),
::testing::ValuesIn(kernel3D),
::testing::ValuesIn(strides3D),
::testing::ValuesIn(padBegins3D),
::testing::ValuesIn(padEnds3D),
::testing::Values(ngraph::op::RoundingType::CEIL),
::testing::Values(ngraph::op::PadType::SAME_LOWER),
::testing::Values(true)
);
INSTANTIATE_TEST_CASE_P(smoke_AvgPool_SameLowerPad_CeilRounding_5Dinput, PoolingLayerTest,
::testing::Combine(
avgPool_SameLowerPad_CeilRounding_5Dinput_Params,
::testing::ValuesIn(netPrecisions),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(std::vector<size_t >({32, 32, 2, 2, 2})),
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);
////* ========== Avg and Max Pooling Cases ========== */
/* ========== Valid Pad Rounding Not Applicable ========== */
const auto allPools_ValidPad_Params = ::testing::Combine(
@@ -302,8 +375,4 @@ INSTANTIATE_TEST_CASE_P(smoke_MAX_and_AVGPool_ValidPad, PoolingLayerTest,
::testing::Values(std::vector<size_t >({1, 3, 30, 30})),
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);
} // namespace

View File

@@ -58,32 +58,8 @@ namespace ngraph
const Shape& pads_end,
const Shape& kernel,
bool exclude_pad,
op::RoundingType rounding_type,
const PadType& auto_pad);
///
/// \brief Constructs a batched average pooling operation.
///
/// \param arg The output producing the input data batch tensor.<br>
/// `[d1, dn]`
/// \param strides The strides.<br> `[n]`
/// \param pads_begin The beginning of padding shape.<br> `[n]`
/// \param pads_end The end of padding shape.<br> `[n]`
/// \param kernel The kernel shape.<br> `[n]`
/// \param exclude_pad If false then averages include padding elements, each
/// treated as the number zero. If true, padding
/// elements
/// are entirely ignored when computing averages.
/// \param rounding_type Whether to use ceiling or floor rounding type while
/// computing output shape.
///
AvgPool(const Output<Node>& arg,
const Strides& strides,
const Shape& pads_begin,
const Shape& pads_end,
const Shape& kernel,
bool exclude_pad,
op::RoundingType rounding_type);
op::RoundingType rounding_type = op::RoundingType::FLOOR,
const PadType& auto_pad = op::PadType::EXPLICIT);
size_t get_version() const override { return 1; }
void validate_and_infer_types() override;

View File

@@ -46,24 +46,6 @@ op::v1::AvgPool::AvgPool(const Output<Node>& arg,
constructor_validate_and_infer_types();
}
op::v1::AvgPool::AvgPool(const Output<Node>& arg,
const Strides& strides,
const Shape& pads_begin,
const Shape& pads_end,
const Shape& kernel,
bool exclude_pad,
op::RoundingType rounding_type)
: AvgPool(arg,
strides,
pads_begin,
pads_end,
kernel,
exclude_pad,
rounding_type,
op::PadType::EXPLICIT)
{
}
bool op::v1::AvgPool::visit_attributes(AttributeVisitor& visitor)
{
NGRAPH_OP_SCOPE(v1_AvgPool_visit_attributes);
@@ -96,24 +78,53 @@ void op::v1::AvgPool::validate_and_infer_types()
}
const PartialShape& arg_shape = get_input_partial_shape(0);
NODE_VALIDATION_CHECK(this,
arg_shape.rank().compatible(3) || arg_shape.rank().compatible(4) ||
arg_shape.rank().compatible(5),
"Expected a 3D, 4D or 5D tensor for the input. Got: ",
arg_shape);
if (arg_shape.rank().is_static())
{
NODE_VALIDATION_CHECK(this,
m_pads_end.size() == arg_shape.rank().get_max_length() - 2,
"Expected pads_end size to be equal to input size - 2. Got: ",
m_pads_end.size());
NODE_VALIDATION_CHECK(this,
m_pads_begin.size() == arg_shape.rank().get_max_length() - 2,
"Expected pads_begin size to be equal to input size - 2. Got: ",
m_pads_begin.size());
NODE_VALIDATION_CHECK(this,
m_kernel.size() == arg_shape.rank().get_max_length() - 2,
"Expected kernel size to be equal to input size - 2. Got: ",
m_kernel.size());
NODE_VALIDATION_CHECK(this,
m_strides.size() == arg_shape.rank().get_max_length() - 2,
"Expected strides size to be equal to input size - 2. Got: ",
m_strides.size());
}
auto output_shape = PartialShape::dynamic();
if (arg_shape.rank().is_static())
{
output_shape = std::vector<Dimension>(arg_shape.rank().get_length(), Dimension::dynamic());
if (arg_shape.rank().get_length() > 1)
output_shape =
std::vector<Dimension>(arg_shape.rank().get_max_length(), Dimension::dynamic());
if (arg_shape[0].is_static())
{
output_shape[0] = arg_shape[0]; // batch size
}
if (arg_shape.rank().get_length() > 2)
if (arg_shape[1].is_static())
{
output_shape[1] = arg_shape[1]; // channel size
}
}
bool update_auto_padding_succeed = true;
if (m_auto_pad == PadType::SAME_UPPER || m_auto_pad == PadType::SAME_LOWER)
{
CoordinateDiff pads_end, pads_begin;
CoordinateDiff pads_end;
CoordinateDiff pads_begin;
update_auto_padding_succeed =
try_apply_auto_padding(arg_shape,
m_kernel,
@@ -125,12 +136,15 @@ void op::v1::AvgPool::validate_and_infer_types()
m_pads_end = Shape(pads_end.begin(), pads_end.end());
m_pads_begin = Shape(pads_begin.begin(), pads_begin.end());
}
if (m_auto_pad == PadType::VALID)
{
m_pads_end = Shape(m_pads_end.size(), 0);
m_pads_begin = Shape(m_pads_begin.size(), 0);
}
// infer_batched_forward_pooling wants CoordinateDiffs for these, while the pooling ops for
// now still take Shape (no negative padding).
CoordinateDiff pads_begin(m_pads_begin.begin(), m_pads_begin.end());
CoordinateDiff pads_end(m_pads_end.begin(), m_pads_end.end());
set_output_type(0,
get_input_element_type(0),
update_auto_padding_succeed

View File

@@ -126,7 +126,6 @@ PartialShape ngraph::infer_windowed_reduction_output_shape(const Node* node,
") do not match.");
PartialShape output_shape = PartialShape::dynamic(data_shape_merged.rank());
if (output_shape.rank().is_static())
{
for (size_t i = 0; i < output_shape.rank().get_length(); i++)
@@ -389,9 +388,10 @@ PartialShape ngraph::infer_batched_pooling_forward(const Node* node,
{
NODE_VALIDATION_CHECK(node,
data_batch_shape.rank().is_dynamic() ||
data_batch_shape.rank().get_length() >= 3,
"Data batch must have rank of at least 3 (one batch axis, ",
"one input-channel axis, and at least one spatial dimension) ",
data_batch_shape.rank().get_length() >= 3 &&
data_batch_shape.rank().get_length() <= 5,
"Data batch must have rank of at least 4 or 5 (one batch axis, ",
"one input-channel axis, and two or three spatial dimension) ",
"(data batch shape: ",
data_batch_shape,
").");
@@ -442,7 +442,6 @@ PartialShape ngraph::infer_batched_pooling_forward(const Node* node,
// For pooling ops we don't need dilation, so we fill in the identity value (all 1).
Strides data_dilation(data_spatial_shape.rank().get_length(), 1);
Strides window_dilation(data_spatial_shape.rank().get_length(), 1);
data_output_spatial_shape =
infer_windowed_reduction_output_shape(node,
data_spatial_shape,
@@ -640,28 +639,30 @@ bool ngraph::try_apply_auto_padding(const PartialShape& image_shape,
return false;
}
const auto image_dims = static_cast<std::vector<Dimension>>(image_shape);
const bool are_spatial_dims_static =
std::all_of(std::begin(image_dims) + 2, std::end(image_dims), [](const Dimension& dim) {
return dim.is_static();
});
if (!are_spatial_dims_static)
{
return false;
}
for (size_t i = 0; i < static_cast<size_t>(filter_shape.size()); i++)
{
int64_t image_size = static_cast<int64_t>(image_dims[i + 2].get_length());
int64_t filter_size = (static_cast<int64_t>(filter_shape[i]) - 1) * filter_dilations[i] + 1;
int64_t filter_stride = static_cast<int64_t>(filter_strides[i]);
auto output_size = (image_size + filter_stride - 1) / filter_stride;
if (image_dims[i + 2].is_static())
{
int64_t image_size = static_cast<int64_t>(image_dims[i + 2].get_length());
int64_t filter_size =
(static_cast<int64_t>(filter_shape[i]) - 1) * filter_dilations[i] + 1;
int64_t filter_stride = static_cast<int64_t>(filter_strides[i]);
auto output_size = (image_size + filter_stride - 1) / filter_stride;
auto padding_needed =
std::max(int64_t(0), (output_size - 1) * filter_stride + filter_size - image_size);
auto padding_lhs = padding_needed / 2;
auto padding_rhs = padding_needed - padding_lhs;
padding_below.push_back(pad_type == op::PadType::SAME_UPPER ? padding_lhs : padding_rhs);
padding_above.push_back(pad_type == op::PadType::SAME_UPPER ? padding_rhs : padding_lhs);
auto padding_needed =
std::max(int64_t(0), (output_size - 1) * filter_stride + filter_size - image_size);
auto padding_lhs = padding_needed / 2;
auto padding_rhs = padding_needed - padding_lhs;
padding_below.push_back(pad_type == op::PadType::SAME_UPPER ? padding_lhs
: padding_rhs);
padding_above.push_back(pad_type == op::PadType::SAME_UPPER ? padding_rhs
: padding_lhs);
}
else
{
padding_below.push_back(0);
padding_above.push_back(0);
}
}
return true;
}

View File

@@ -251,6 +251,7 @@ set(MULTI_TEST_SRC
backend/atan.in.cpp
backend/atanh.in.cpp
backend/auto_broadcast.in.cpp
backend/avg_pool.in.cpp
backend/batch_norm.in.cpp
backend/broadcast.in.cpp
backend/builder_reduce_ops_opset1.in.cpp

View File

@@ -0,0 +1,193 @@
//*****************************************************************************
// Copyright 2017-2020 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************
// clang-format off
#ifdef ${BACKEND_NAME}_FLOAT_TOLERANCE_BITS
#define DEFAULT_FLOAT_TOLERANCE_BITS ${BACKEND_NAME}_FLOAT_TOLERANCE_BITS
#endif
#ifdef ${BACKEND_NAME}_DOUBLE_TOLERANCE_BITS
#define DEFAULT_DOUBLE_TOLERANCE_BITS ${BACKEND_NAME}_DOUBLE_TOLERANCE_BITS
#endif
// clang-format on
#include "gtest/gtest.h"
#include "ngraph/ngraph.hpp"
#include "util/engine/test_engines.hpp"
#include "util/test_case.hpp"
#include "util/test_control.hpp"
using namespace std;
using namespace ngraph;
static string s_manifest = "${MANIFEST}";
using TestEngine = test::ENGINE_CLASS_NAME(${BACKEND_NAME});
NGRAPH_TEST(${BACKEND_NAME}, avg_pool_2d_floor)
{
Shape in_shape{1, 1, 3, 3};
Shape out_shape{1, 1, 2, 2};
const Strides& strides{1, 1};
const Shape& pads_begin{0, 0};
const Shape& pads_end{0, 0};
const Shape& kernel{2, 2};
const bool exclude_pad = true;
const op::RoundingType rounding_type = op::RoundingType::FLOOR;
const op::PadType pad_type = op::PadType::NOTSET;
auto A = make_shared<op::Parameter>(element::f32, in_shape);
auto avgPool = make_shared<op::v1::AvgPool>(
A, strides, pads_begin, pads_end, kernel, exclude_pad, rounding_type, pad_type);
auto f = make_shared<Function>(avgPool, ParameterVector{A});
std::vector<float> a{1, 2, 3, 4, 5, 6, 7, 8, 9};
std::vector<float> result{3, 4, 6, 7};
auto test_case = test::TestCase<TestEngine>(f);
test_case.add_input<float>({a});
test_case.add_expected_output<float>(out_shape, result);
test_case.run();
}
NGRAPH_TEST(${BACKEND_NAME}, avg_pool_2d_ceil)
{
Shape in_shape{1, 1, 4, 4};
Shape out_shape{1, 1, 2, 2};
const Strides& strides{1, 1};
const Shape& pads_begin{0, 0};
const Shape& pads_end{0, 0};
const Shape& kernel{3, 3};
const bool exclude_pad = true;
const op::RoundingType rounding_type = op::RoundingType::CEIL;
const op::PadType pad_type = op::PadType::NOTSET;
auto A = make_shared<op::Parameter>(element::f32, in_shape);
auto avgPool = make_shared<op::v1::AvgPool>(
A, strides, pads_begin, pads_end, kernel, exclude_pad, rounding_type, pad_type);
auto f = make_shared<Function>(avgPool, ParameterVector{A});
std::vector<float> a{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
std::vector<float> result{6, 7, 10, 11};
auto test_case = test::TestCase<TestEngine>(f);
test_case.add_input<float>({a});
test_case.add_expected_output<float>(out_shape, result);
test_case.run();
}
NGRAPH_TEST(${BACKEND_NAME}, avg_pool_2d_pad)
{
Shape in_shape{1, 1, 2, 2};
Shape out_shape{1, 1, 3, 3};
const Strides& strides{1, 1};
const Shape& pads_begin{1, 1};
const Shape& pads_end{1, 1};
const Shape& kernel{2, 2};
const bool exclude_pad = true;
const op::RoundingType rounding_type = op::RoundingType::CEIL;
const op::PadType pad_type = op::PadType::NOTSET;
auto A = make_shared<op::Parameter>(element::f32, in_shape);
auto avgPool = make_shared<op::v1::AvgPool>(
A, strides, pads_begin, pads_end, kernel, exclude_pad, rounding_type, pad_type);
auto f = make_shared<Function>(avgPool, ParameterVector{A});
std::vector<float> a{1, 2, 3, 4};
std::vector<float> result{1, 1.5, 2, 2, 2.5, 3, 3, 3.5, 4};
auto test_case = test::TestCase<TestEngine>(f);
test_case.add_input<float>({a});
test_case.add_expected_output<float>(out_shape, result);
test_case.run();
}
NGRAPH_TEST(${BACKEND_NAME}, avg_pool_2d_same_upper)
{
Shape in_shape{1, 1, 3, 3};
Shape out_shape{1, 1, 3, 3};
const Strides& strides{1, 1};
const Shape& pads_begin{0, 0};
const Shape& pads_end{0, 0};
const Shape& kernel{2, 2};
const bool exclude_pad = false;
const op::RoundingType rounding_type = op::RoundingType::CEIL;
const op::PadType pad_type = op::PadType::SAME_UPPER;
auto A = make_shared<op::Parameter>(element::f32, in_shape);
auto avgPool = make_shared<op::v1::AvgPool>(
A, strides, pads_begin, pads_end, kernel, exclude_pad, rounding_type, pad_type);
auto f = make_shared<Function>(avgPool, ParameterVector{A});
std::vector<float> a{1, 2, 3, 4, 5, 6, 7, 8, 9};
std::vector<float> result{3, 4, 2.25, 6, 7, 3.75, 3.75, 4.25, 2.25};
auto test_case = test::TestCase<TestEngine>(f);
test_case.add_input<float>({a});
test_case.add_expected_output<float>(out_shape, result);
test_case.run();
}
NGRAPH_TEST(${BACKEND_NAME}, avg_pool_3d)
{
Shape in_shape{1, 1, 2, 2, 2};
Shape out_shape{1, 1, 2, 2, 1};
const Strides& strides{1, 1, 1};
const Shape& pads_begin{0, 0, 0};
const Shape& pads_end{0, 0, 0};
const Shape& kernel{1, 1, 2};
const bool exclude_pad = true;
const op::RoundingType rounding_type = op::RoundingType::CEIL;
const op::PadType pad_type = op::PadType::VALID;
auto A = make_shared<op::Parameter>(element::f32, in_shape);
auto avgPool = make_shared<op::v1::AvgPool>(
A, strides, pads_begin, pads_end, kernel, exclude_pad, rounding_type, pad_type);
auto f = make_shared<Function>(avgPool, ParameterVector{A});
std::vector<float> a{1, 2, 3, 4, 5, 6, 7, 8};
std::vector<float> result{1.5, 3.5, 5.5, 7.5};
auto test_case = test::TestCase<TestEngine>(f);
test_case.add_input<float>({a});
test_case.add_expected_output<float>(out_shape, result);
test_case.run();
}
NGRAPH_TEST(${BACKEND_NAME}, avg_pool_2d_same_lower)
{
Shape in_shape{1, 1, 3, 3};
Shape out_shape{1, 1, 3, 3};
const Strides& strides{1, 1};
const Shape& pads_begin{0, 0};
const Shape& pads_end{0, 0};
const Shape& kernel{2, 2};
const bool exclude_pad = false;
const op::RoundingType rounding_type = op::RoundingType::CEIL;
const op::PadType pad_type = op::PadType::SAME_LOWER;
auto A = make_shared<op::Parameter>(element::f32, in_shape);
auto avgPool = make_shared<op::v1::AvgPool>(
A, strides, pads_begin, pads_end, kernel, exclude_pad, rounding_type, pad_type);
auto f = make_shared<Function>(avgPool, ParameterVector{A});
std::vector<float> a{1, 2, 3, 4, 5, 6, 7, 8, 9};
std::vector<float> result{0.25, 0.75, 1.25, 1.25, 3, 4, 2.75, 6, 7};
auto test_case = test::TestCase<TestEngine>(f);
test_case.add_input<float>({a});
test_case.add_expected_output<float>(out_shape, result);
test_case.run();
}

View File

@@ -23,11 +23,11 @@ using namespace ngraph;
TEST(type_prop, avg_pool_auto_padding)
{
const PartialShape arg_shape{1, 3, 32, 32};
const Strides strides{1, 1};
const Shape pads_begin{0, 0};
const Shape pads_end{0, 0};
const Shape kernel_shape{2, 2};
const PartialShape arg_shape{1, 3, 32};
const Strides strides{1};
const Shape pads_begin{0};
const Shape pads_end{0};
const Shape kernel_shape{2};
const bool exclude_pad = false;
const auto rounding_mode = op::RoundingType::FLOOR;
const auto auto_pad = op::PadType::SAME_LOWER;
@@ -36,12 +36,32 @@ TEST(type_prop, avg_pool_auto_padding)
auto mp = make_shared<op::v1::AvgPool>(
arg, strides, pads_begin, pads_end, kernel_shape, exclude_pad, rounding_mode, auto_pad);
ASSERT_TRUE(mp->get_output_partial_shape(0).same_scheme({1, 3, 32, 32}));
ASSERT_EQ(mp->get_pads_begin(), (Shape{1, 1}));
ASSERT_EQ(mp->get_pads_end(), (Shape{0, 0}));
ASSERT_TRUE(mp->get_output_partial_shape(0).same_scheme({1, 3, 32}));
ASSERT_EQ(mp->get_pads_begin(), (Shape{1}));
ASSERT_EQ(mp->get_pads_end(), (Shape{0}));
}
TEST(type_prop, avg_pool_auto_padding_nc_dims_dynamic_same_lower)
TEST(type_prop, avg_pool_auto_padding_3D_nc_dims_dynamic_same_lower)
{
const PartialShape arg_shape{Dimension::dynamic(), 32, 32};
const Strides strides{1};
const Shape pads_begin{0};
const Shape pads_end{0};
const Shape kernel_shape{2};
const bool exclude_pad = true;
const auto rounding_mode = op::RoundingType::FLOOR;
const auto auto_pad = op::PadType::SAME_LOWER;
auto arg = make_shared<op::Parameter>(element::f32, arg_shape);
auto mp = make_shared<op::v1::AvgPool>(
arg, strides, pads_begin, pads_end, kernel_shape, exclude_pad, rounding_mode, auto_pad);
ASSERT_TRUE(mp->get_output_partial_shape(0).same_scheme({Dimension::dynamic(), 32, 32}));
ASSERT_EQ(mp->get_pads_begin(), (Shape{1}));
ASSERT_EQ(mp->get_pads_end(), (Shape{0}));
}
TEST(type_prop, avg_pool_auto_padding_4D_nc_dims_dynamic_same_lower)
{
const PartialShape arg_shape{Dimension::dynamic(), Dimension::dynamic(), 32, 32};
const Strides strides{1, 1};
@@ -87,7 +107,7 @@ TEST(type_prop, avg_pool_auto_padding_spatial_dims_dynamic)
{
const PartialShape arg_shape{1, 3, 32, Dimension::dynamic()};
const Strides strides{1, 1};
const Shape pads_begin{0, 0};
const Shape pads_begin{1, 1};
const Shape pads_end{0, 0};
const Shape kernel_shape{2, 2};
const bool exclude_pad = true;
@@ -98,77 +118,48 @@
auto mp = make_shared<op::v1::AvgPool>(
arg, strides, pads_begin, pads_end, kernel_shape, exclude_pad, rounding_mode, auto_pad);
ASSERT_TRUE(mp->get_output_partial_shape(0).same_scheme(
{1, 3, Dimension::dynamic(), Dimension::dynamic()}));
ASSERT_EQ(mp->get_pads_begin(), (Shape{}));
ASSERT_EQ(mp->get_pads_end(), (Shape{}));
ASSERT_TRUE(mp->get_output_partial_shape(0).same_scheme({1, 3, 32, Dimension::dynamic()}));
ASSERT_EQ(mp->get_pads_begin(), (Shape{1, 0}));
ASSERT_EQ(mp->get_pads_end(), (Shape{0, 0}));
}
TEST(type_prop, avg_pool_1d_deduce)
{
const auto param = make_shared<op::Parameter>(element::f32, Shape{64, 3, 100});
const auto param = make_shared<op::Parameter>(element::f32, Shape{64, 3});
const Shape kernel{10};
const auto avg_pool = make_shared<op::v1::AvgPool>(
param, Strides{1}, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR);
EXPECT_EQ(avg_pool->get_output_element_type(0), element::f32);
EXPECT_EQ(avg_pool->get_output_shape(0), (Shape{64, 3, 91}));
EXPECT_EQ(avg_pool->get_strides(), Strides{1});
EXPECT_EQ(avg_pool->get_kernel(), Shape{10});
EXPECT_EQ(avg_pool->get_pads_begin(), Shape{0});
EXPECT_EQ(avg_pool->get_pads_end(), Shape{0});
EXPECT_THROW(make_shared<op::v1::AvgPool>(
param, Strides{1}, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR),
NodeValidationFailure);
}
TEST(type_prop, avg_pool_1d_deduce_strided)
{
const auto param = make_shared<op::Parameter>(element::f32, Shape{64, 3, 100});
const auto param = make_shared<op::Parameter>(element::f32, Shape{64, 3});
const Shape kernel{10};
const auto move_strides = Strides{2};
const auto avg_pool = make_shared<op::v1::AvgPool>(
param, move_strides, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR);
EXPECT_EQ(avg_pool->get_output_element_type(0), element::f32);
EXPECT_EQ(avg_pool->get_output_shape(0), (Shape{64, 3, 46}));
EXPECT_EQ(avg_pool->get_strides(), Strides{2});
EXPECT_EQ(avg_pool->get_kernel(), Shape{10});
EXPECT_EQ(avg_pool->get_pads_begin(), Shape{0});
EXPECT_EQ(avg_pool->get_pads_end(), Shape{0});
EXPECT_THROW(make_shared<op::v1::AvgPool>(
param, move_strides, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR),
NodeValidationFailure);
}
TEST(type_prop, avg_pool_1d_deduce_strided_small_uneven)
{
const auto param = make_shared<op::Parameter>(element::f32, Shape{64, 3, 5});
const auto param = make_shared<op::Parameter>(element::f32, Shape{64, 3});
const Shape kernel{2};
const auto move_strides = Strides{2};
const auto avg_pool = make_shared<op::v1::AvgPool>(
param, move_strides, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR);
EXPECT_EQ(avg_pool->get_output_element_type(0), element::f32);
EXPECT_EQ(avg_pool->get_output_shape(0), (Shape{64, 3, 2}));
EXPECT_EQ(avg_pool->get_strides(), Strides{2});
EXPECT_EQ(avg_pool->get_kernel(), Shape{2});
EXPECT_EQ(avg_pool->get_pads_begin(), Shape{0});
EXPECT_EQ(avg_pool->get_pads_end(), Shape{0});
EXPECT_THROW(make_shared<op::v1::AvgPool>(
param, move_strides, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR),
NodeValidationFailure);
}
TEST(type_prop, avg_pool_1d_deduce_strided_small_even)
{
const auto param = make_shared<op::Parameter>(element::f32, Shape{64, 3, 6});
const auto param = make_shared<op::Parameter>(element::f32, Shape{64, 3});
const Shape kernel{2};
const auto move_strides = Strides{2};
const auto avg_pool = make_shared<op::v1::AvgPool>(
param, move_strides, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR);
EXPECT_EQ(avg_pool->get_output_element_type(0), element::f32);
EXPECT_EQ(avg_pool->get_output_shape(0), (Shape{64, 3, 3}));
EXPECT_EQ(avg_pool->get_strides(), Strides{2});
EXPECT_EQ(avg_pool->get_kernel(), Shape{2});
EXPECT_EQ(avg_pool->get_pads_begin(), Shape{0});
EXPECT_EQ(avg_pool->get_pads_end(), Shape{0});
EXPECT_THROW(make_shared<op::v1::AvgPool>(
param, move_strides, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR),
NodeValidationFailure);
}
TEST(type_prop, avg_pool_2d_deduce)
@@ -269,7 +260,7 @@ TEST(type_prop, avg_pool_invalid_2d_input)
TEST(type_prop, avg_pool_invalid_0_batch_size)
{
const auto param = make_shared<op::Parameter>(element::f32, Shape{0, 6, 1});
const auto param = make_shared<op::Parameter>(element::f32, Shape{0, 6});
const Shape kernel{1};
EXPECT_THROW(make_shared<op::v1::AvgPool>(
param, Strides{1}, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR),
@@ -278,7 +269,7 @@ TEST(type_prop, avg_pool_invalid_0_batch_size)
TEST(type_prop, avg_pool_invalid_0_channels)
{
const auto param = make_shared<op::Parameter>(element::f32, Shape{6, 0, 1});
const auto param = make_shared<op::Parameter>(element::f32, Shape{6, 0});
const Shape kernel{1};
EXPECT_THROW(make_shared<op::v1::AvgPool>(
param, Strides{1}, Shape{}, Shape{}, kernel, true, op::RoundingType::FLOOR),
@@ -433,11 +424,11 @@ TEST(type_prop, avg_pool_partial_rank_dynamic_attrib_rank_mismatch)
TEST(type_prop, avg_pool_partial_rank_static_dynamic_ok)
{
const PartialShape arg_shape{PartialShape::dynamic(6)};
const Shape kernel{2, 3, 4, 5};
const Strides window_movement_strides{1, 1, 1, 1};
const Shape pads_begin{0, 0, 0, 0};
const Shape pads_end{0, 0, 0, 0};
const PartialShape arg_shape{PartialShape::dynamic(5)};
const Shape kernel{2, 3, 4};
const Strides window_movement_strides{1, 1, 1};
const Shape pads_begin{0, 0, 0};
const Shape pads_end{0, 0, 0};
const auto param = make_shared<op::Parameter>(element::f32, arg_shape);
auto ap = make_shared<op::v1::AvgPool>(param,
@@ -449,16 +440,16 @@
op::RoundingType::FLOOR);
ASSERT_EQ(ap->get_output_element_type(0), element::f32);
ASSERT_TRUE(ap->get_output_partial_shape(0).same_scheme(PartialShape::dynamic(6)));
ASSERT_TRUE(ap->get_output_partial_shape(0).same_scheme(PartialShape::dynamic(5)));
}
TEST(type_prop, avg_pool_partial_rank_static_dynamic_some_dims_known_ok)
{
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4, 7};
const Shape kernel{2, 3, 4, 5};
const Strides window_movement_strides{1, 1, 1, 1};
const Shape pads_begin{0, 0, 0, 0};
const Shape pads_end{0, 0, 0, 0};
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4};
const Shape kernel{2, 3, 4};
const Strides window_movement_strides{1, 1, 1};
const Shape pads_begin{0, 0, 0};
const Shape pads_end{0, 0, 0};
const auto param = make_shared<op::Parameter>(element::f32, arg_shape);
auto ap = make_shared<op::v1::AvgPool>(param,
@@ -471,16 +462,16 @@
ASSERT_EQ(ap->get_output_element_type(0), element::f32);
ASSERT_TRUE(ap->get_output_partial_shape(0).same_scheme(
PartialShape{5, Dimension::dynamic(), 7, Dimension::dynamic(), 1, 3}));
PartialShape{5, Dimension::dynamic(), 7, Dimension::dynamic(), 1}));
}
TEST(type_prop, avg_pool_partial_rank_static_dynamic_attrib_rank_mismatch)
{
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4, 7};
const Shape kernel{2, 3, 4, 5, 6};
const Strides window_movement_strides{1, 1, 1, 1};
const Shape pads_begin{0, 0, 0, 0};
const Shape pads_end{0, 0, 0, 0};
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4};
const Shape kernel{2, 3, 4, 5};
const Strides window_movement_strides{1, 1, 1};
const Shape pads_begin{0, 0, 0};
const Shape pads_end{0, 0, 0};
const auto param = make_shared<op::Parameter>(element::f32, arg_shape);
@@ -496,11 +487,11 @@ TEST(type_prop, avg_pool_partial_rank_static_dynamic_attrib_rank_mismatch)
TEST(type_prop, avg_pool_partial_rank_static_dynamic_window_not_too_big)
{
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4, 7};
const Shape kernel{9, 3, 4, 5};
const Strides window_movement_strides{1, 1, 1, 1};
const Shape pads_begin{0, 0, 0, 0};
const Shape pads_end{0, 0, 0, 0};
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4};
const Shape kernel{9, 3, 4};
const Strides window_movement_strides{1, 1, 1};
const Shape pads_begin{0, 0, 0};
const Shape pads_end{0, 0, 0};
const auto param = make_shared<op::Parameter>(element::f32, arg_shape);
@@ -516,11 +507,11 @@ TEST(type_prop, avg_pool_partial_rank_static_dynamic_window_not_too_big)
TEST(type_prop, avg_pool_partial_rank_static_dynamic_padded_window_not_too_big)
{
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4, 7};
const Shape kernel{9, 3, 4, 5};
const Strides window_movement_strides{1, 1, 1, 1};
const Shape pads_begin{0, 0, 0, 0};
const Shape pads_end{1, 0, 0, 0};
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4};
const Shape kernel{9, 3, 4};
const Strides window_movement_strides{1, 1, 1};
const Shape pads_begin{0, 0, 0};
const Shape pads_end{1, 0, 0};
const auto param = make_shared<op::Parameter>(element::f32, arg_shape);
auto ap = make_shared<op::v1::AvgPool>(param,
@@ -533,16 +524,16 @@ TEST(type_prop, avg_pool_partial_rank_static_dynamic_padded_window_not_too_big)
ASSERT_EQ(ap->get_output_element_type(0), element::f32);
ASSERT_TRUE(ap->get_output_partial_shape(0).same_scheme(
PartialShape{5, Dimension::dynamic(), 1, Dimension::dynamic(), 1, 3}));
PartialShape{5, Dimension::dynamic(), 1, Dimension::dynamic(), 1}));
}
TEST(type_prop, avg_pool_partial_rank_static_dynamic_window_in_padding)
{
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4, 7};
const Shape kernel{9, 3, 4, 3};
const Strides window_movement_strides{1, 1, 1, 1};
const Shape pads_begin{0, 0, 0, 4};
const Shape pads_end{0, 0, 0, 0};
const PartialShape arg_shape{5, Dimension::dynamic(), 8, Dimension::dynamic(), 4};
const Shape kernel{9, 3, 4};
const Strides window_movement_strides{1, 1, 1};
const Shape pads_begin{0, 0, 0};
const Shape pads_end{0, 0, 0};
const auto param = make_shared<op::Parameter>(element::f32, arg_shape);

View File

@@ -108,8 +108,7 @@ TEST(type_prop, binary_conv_v1_partial_auto_padding_same_spatial_dims_dynamic)
auto conv = make_shared<op::v1::BinaryConvolution>(
data_batch, filters, strides, pads_begin, pads_end, dilations, mode, pad_value, auto_pad);
ASSERT_TRUE(conv->get_output_partial_shape(0).same_scheme(
{1, 1, Dimension::dynamic(), Dimension::dynamic()}));
ASSERT_EQ(conv->get_pads_begin(), (CoordinateDiff{}));
ASSERT_EQ(conv->get_pads_end(), (CoordinateDiff{}));
ASSERT_TRUE(conv->get_output_partial_shape(0).same_scheme({1, 1, Dimension::dynamic(), 5}));
ASSERT_EQ(conv->get_pads_begin(), (CoordinateDiff{0, 1}));
ASSERT_EQ(conv->get_pads_end(), (CoordinateDiff{0, 1}));
}

View File

@@ -2638,10 +2638,9 @@ TEST(type_prop, conv_v1_partial_auto_padding_same_spatial_dims_dynamic)
auto conv = make_shared<op::v1::Convolution>(
data_batch, filters, strides, pads_begin, pads_end, dilations, auto_pad);
ASSERT_TRUE(conv->get_output_partial_shape(0).same_scheme(
{1, 1, Dimension::dynamic(), Dimension::dynamic()}));
ASSERT_EQ(conv->get_pads_begin(), (CoordinateDiff{}));
ASSERT_EQ(conv->get_pads_end(), (CoordinateDiff{}));
ASSERT_TRUE(conv->get_output_partial_shape(0).same_scheme({1, 1, Dimension::dynamic(), 5}));
ASSERT_EQ(conv->get_pads_begin(), (CoordinateDiff{0, 1}));
ASSERT_EQ(conv->get_pads_end(), (CoordinateDiff{0, 1}));
}
TEST(type_prop, conv_v1_partial_data_shape_dynamic)

View File

@@ -150,8 +150,8 @@ TEST(type_prop, deformable_conv_v1_partial_auto_padding_same_spatial_dims_dynami
group,
deformable_group);
ASSERT_TRUE(deformable_conv->get_output_partial_shape(0).same_scheme(
{1, 4, Dimension::dynamic(), Dimension::dynamic()}));
ASSERT_EQ(deformable_conv->get_pads_begin(), (CoordinateDiff{}));
ASSERT_EQ(deformable_conv->get_pads_end(), (CoordinateDiff{}));
ASSERT_TRUE(
deformable_conv->get_output_partial_shape(0).same_scheme({1, 4, Dimension::dynamic(), 5}));
ASSERT_EQ(deformable_conv->get_pads_begin(), (CoordinateDiff{0, 1}));
ASSERT_EQ(deformable_conv->get_pads_end(), (CoordinateDiff{0, 1}));
}

View File

@@ -121,8 +121,7 @@ TEST(type_prop, group_conv_v1_partial_auto_padding_same_spatial_dims_dynamic)
auto conv = make_shared<op::v1::GroupConvolution>(
data_batch, filters, strides, pads_begin, pads_end, dilations, auto_pad);
ASSERT_TRUE(conv->get_output_partial_shape(0).same_scheme(
{1, 2, Dimension::dynamic(), Dimension::dynamic()}));
ASSERT_EQ(conv->get_pads_begin(), (CoordinateDiff{}));
ASSERT_EQ(conv->get_pads_end(), (CoordinateDiff{}));
ASSERT_TRUE(conv->get_output_partial_shape(0).same_scheme({1, 2, Dimension::dynamic(), 5}));
ASSERT_EQ(conv->get_pads_begin(), (CoordinateDiff{0, 1}));
ASSERT_EQ(conv->get_pads_end(), (CoordinateDiff{0, 1}));
}

View File

@@ -94,10 +94,9 @@ TEST(type_prop, max_pool_auto_padding_spatial_dims_dynamic)
auto mp = make_shared<op::v1::MaxPool>(
arg, strides, pads_begin, pads_end, kernel_shape, rounding_mode, auto_pad);
ASSERT_TRUE(mp->get_output_partial_shape(0).same_scheme(
{1, 3, Dimension::dynamic(), Dimension::dynamic()}));
ASSERT_EQ(mp->get_pads_begin(), (Shape{}));
ASSERT_EQ(mp->get_pads_end(), (Shape{}));
ASSERT_TRUE(mp->get_output_partial_shape(0).same_scheme({1, 3, 32, Dimension::dynamic()}));
ASSERT_EQ(mp->get_pads_begin(), (Shape{1, 0}));
ASSERT_EQ(mp->get_pads_end(), (Shape{0, 0}));
}
TEST(type_prop, max_pool_default_values)