* WIP
* update input validation
* upsample_nearest2d and upsample_bilinear2d support
* support leaky_relu, add test for inplace relu
* update tests, add handler for ListConstruct
* Do not create extra outputs in main body
* add positive case with non-default value
* update testing
* update test, handle non-constant size and scale
* remove ie_device
* add aten::group_norm support
* refactoring
* Enable aten::reshape_as operator and add layer test
* more tests
* Fix typo in test
* Resolve conflicts
* fix code style
* expand init version
* expand_as and tests
* add transposed convolutions support
* add tests
* initial support pad
* add circular
* update for differences in range
* cleanup
* refactor
* more tests
* apply review comments
* Add split+listunpack transformation
* Add split+getitem transformation
* Add test cases
* fix typo
* Minor fixes
* Apply suggestions from code review (Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>)
* Apply suggestions from code review
* Small fix
* Support converting models without freezing
* support BoolTensor and masked_fill
* add support aten::rsqrt and test for sqrt
* add cumsum and type_as
* support clamp
* support more matrix operations
* add tests
* Add aten::adaptive_avg_pool3d and layer test
* Change to rank
* fix code style in utils.hpp
* Update src/frontends/pytorch/src/op_table.cpp (Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>)
* fix code style
* add tests
* add xfail
* remove unnecessary broadcast
* Changes required by style formatter
* aten::_convolution_mode
* Changes requested by a reviewer
* remove code duplication
* add aten::unbind transformation
* full, zeros and ones
* Support getattr list and unrolling nested ifs
* Remove line change
* Enable back freezing in layer tests
* Add aten::norm operator and layer test
* Small fix in layer test
* add aten::roll
* add empty line
* Typo fix
* fix style
* fix style v2
* add pytorch frontend to wheel
* Support all types of numeric norms
* add check for dynamic shapes
* remove random change
* merge statements
* add min and max ops support
* aten::max and aten::min
* move axes range creation to utils
* add transformation for tuple results, update tests
* fix copyright
* aten::var
* add test and translation for numel
* ignore aten::clone
* Add layer test for aten::add operator
* Fix typo
* Remove redundant import
* Add parameter name in forward method
* fix code style
* apply review comments
* Add size+slice+listunpack transform
* Add append listunpack transformation
* Register transformation
* aten::where
* update realization
* Fix issue with getitem
* Fix getitem
* Add layer test for aten::view operator
* Add tests for listunpack
* add test for aten::div
* fix style
* update aten::adaptive_max_pool2d
* fix style
* add aten::floor_divide
* aten::addmm support alpha and beta with different dtype
* nonzero
* Change test name
* update test cases to include other dtypes
* aten::arange
* prim::max transformation for ListConstruct
* rename op
* generalize conv2d implementation for conv1d and conv3d
* aten::unsqueeze_ and tests for aten::unsqueeze (#70)
* add aten::le, aten::ge and tests for other tensor comparison ops (#74)
* add support trigonometry ops (#73)
* support aten::upsample_bicubic2d, aten::ceil, aten::floor (#72) (Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>)
* extend and add tests for avg_pool and max_pool
* extend tests and constant filling ops
* fix as_tensor and full ops
* aten::repeat
* fix code style
* aten::im2col (#61)
* aten::im2col
* remove debug prints, add number of elements check
* fix failed tests
* move helper function
* use split
* Update src/frontends/pytorch/src/op/im2col.cpp (Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>)
* fix code style (Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>)
* Update src/frontends/pytorch/src/utils.cpp (Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>)
* fix code style
* revert removing floordiv, add floor_divide file
* Fix merge issue
* reduce code duplication
* refactor
* Add len operator with layer test
* update clamp to support mixed precision and add support torch.long for constants
* aten::selu
* add trunc mode to div
* add else statement
* Add test case to layer test
* Fix submodules (#88)
* update test file
* fix namings
* execute in fp64 and convert back to initial precision
* Revert set_output_size to master. Small fix in If validate
* Fix build and code style
* fix failed tests
* Add torchvision::nms operator and layer test
* Change requested by a reviewer
* Remove div test
* convert constants to input type
* Mark some cases in div tests as xfail (#93)
* Small refactoring (#94)
* Small refactoring
* Fix type
* Fix python codestyle
* Incremental fix code style (#95)
* Fix style (#96)
* Fix copyright
* Fix code style
* Branch clean up (#97)
* Optimize includes and force opset10 (#98)
* Optimize includes
* Force opset10 in pt fe
* Fix codestyle (#99)
* Fix style
* Fix clang codestyle
* Fix cerr with debug log
* Update src/bindings/python/src/pyopenvino/frontend/pytorch/decoder.cpp
* Add pytorch dependency only if pytorch frontend is enabled
* Update src/bindings/python/src/pyopenvino/CMakeLists.txt
* Add layer tests to precommit (#100)
* Add layer tests to precommit
* Remove accidentally added files
* Apply code style on layer tests
* batch norm tests and fixes
* move default weight and bias to else block
* reduce code duplication
* Changes requested by a reviewer
* Changes requested by a reviewer
* Remove dependency from pytorch in pyopenvino (#102)
* Remove dependency from pytorch when fe is disabled
* Change docstring
* Remove pytorch FE dependency from pyopenvino
* Apply codestyle (#107)
* Apply codestyle
* Remove commented line
* Apply suggestions from code review (Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>)
* Fix mock FE test (#108)
* Fix mock FE test (#111)
* Revert changes in StridedSlice (#114)
* Small refactoring (#116)
* Small refactoring
* Fix codestyle
* Apply suggestions from code review (Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>)
* Apply suggestions from code review
* Update src/frontends/pytorch/src/op/group_norm.cpp
* Fix cmake copyright define (#117)
* Update src/frontends/pytorch/src/op/arange.cpp
* Apply suggestions from code review
* Update build configs (#120)
* Fix build configs
* Update type cast in full.cpp
* Apply review feedback (#121)
* Apply suggestions from code review
* Apply suggestions from code review (Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>)
* Fix issue after master merge (#122)
* Fix issue after master merge
* Fix build

Co-authored-by: eaidova <ekaterina.aidova@intel.com>
Co-authored-by: bszmelcz <bartosz.szmelczynski@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
Co-authored-by: sikorsl1 <leonard.sikorski@intel.com>
Co-authored-by: Leonard Sikorski <l.sikorski123@gmail.com>
Co-authored-by: Mateusz <mateusz.mikolajczyk@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import pytest

from pytorch_layer_test_class import PytorchLayerTest

d2_avg_params = [{'kernel_size': [3, 3], 'stride': 1, 'padding': 0},
                 {'kernel_size': [3, 3], 'stride': [1, 1], 'padding': 1},
                 {'kernel_size': [3, 3], 'stride': [1, 1], 'padding': [0, 1]},
                 {'kernel_size': [3, 3], 'stride': [1, 1], 'padding': [1, 0]},
                 {'kernel_size': [3, 3], 'stride': [2, 1], 'padding': 0},
                 {'kernel_size': [2, 1], 'stride': [2, 1], 'padding': 0},
                 ]

d1_avg_params = [{'kernel_size': 3, 'stride': 1, 'padding': 0},
                 {'kernel_size': (4,), 'stride': 1, 'padding': 1},
                 {'kernel_size': 4, 'stride': (5,), 'padding': 2},
                 ]

d3_avg_params = [{'kernel_size': [3, 3, 3], 'stride': 1, 'padding': 0},
                 {'kernel_size': [3, 3, 3], 'stride': [1, 1, 1], 'padding': 1},
                 {'kernel_size': [3, 3, 3], 'stride': [3, 3, 3], 'padding': [0, 0, 0]},
                 {'kernel_size': [3, 2, 1], 'stride': [3, 1, 1], 'padding': [0, 0, 0]},
                 ]
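The kernel/stride/padding combinations above determine the spatial size of the pooled output. As a rough standalone sketch (not part of the test suite; `pool_output_size` is a hypothetical helper name), the expected output length per dimension follows the standard pooling formula, including the ceil-mode correction PyTorch applies so a window never starts entirely inside the padding:

```python
import math


def pool_output_size(in_size, kernel, stride, padding, dilation=1, ceil_mode=False):
    # Effective kernel size accounts for dilation (relevant for max pooling only).
    eff_kernel = dilation * (kernel - 1) + 1
    rounding = math.ceil if ceil_mode else math.floor
    out = rounding((in_size + 2 * padding - eff_kernel) / stride) + 1
    # In ceil mode, drop the last window if it would start past the input
    # plus the left padding (mirrors PyTorch's pooling shape logic).
    if ceil_mode and (out - 1) * stride >= in_size + padding:
        out -= 1
    return out
```

For the 15-element spatial dims used by `_prepare_input` below, a 3-wide kernel with stride 1 and no padding yields 13 output elements, and ceil mode only changes the result when the stride does not evenly cover the input.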

class TestPooling(PytorchLayerTest):
    def _prepare_input(self, ndim=4):
        import numpy as np
        shape = (1, 3, 15, 15, 15)
        return (np.random.randn(*shape[:ndim]).astype(np.float32),)

    def create_model(self, op_type, kernel_size, stride, padding, dilation=1, ceil_mode=True, count_include_pad=True):
        import torch

        class aten_avg_pooling_base(torch.nn.Module):
            def __init__(self):
                super(aten_avg_pooling_base, self).__init__()
                self.kernel_size = kernel_size
                self.stride = stride
                self.padding = padding
                self.ceil_mode = ceil_mode
                self.count_include_pad = count_include_pad

            def forward(self, x):
                pass

        class aten_max_pooling_base(torch.nn.Module):
            def __init__(self):
                super(aten_max_pooling_base, self).__init__()
                self.kernel_size = kernel_size
                self.stride = stride
                self.padding = padding
                self.dilation = dilation
                self.ceil_mode = ceil_mode

            def forward(self, x):
                pass

        class aten_avg_pool2d(aten_avg_pooling_base):
            def forward(self, x):
                return torch.nn.functional.avg_pool2d(x, self.kernel_size, self.stride, self.padding, self.ceil_mode,
                                                      self.count_include_pad)

        class aten_avg_pool3d(aten_avg_pooling_base):
            def forward(self, x):
                return torch.nn.functional.avg_pool3d(x, self.kernel_size, self.stride, self.padding, self.ceil_mode,
                                                      self.count_include_pad)

        class aten_avg_pool1d(aten_avg_pooling_base):
            def forward(self, x):
                return torch.nn.functional.avg_pool1d(x, self.kernel_size, self.stride, self.padding, self.ceil_mode,
                                                      self.count_include_pad)

        class aten_max_pool2d(aten_max_pooling_base):
            def forward(self, x):
                return torch.nn.functional.max_pool2d(x, self.kernel_size, self.stride, self.padding, self.dilation,
                                                      self.ceil_mode)

        class aten_max_pool3d(aten_max_pooling_base):
            def forward(self, x):
                return torch.nn.functional.max_pool3d(x, self.kernel_size, self.stride, self.padding, self.dilation,
                                                      self.ceil_mode)

        class aten_max_pool1d(aten_max_pooling_base):
            def forward(self, x):
                return torch.nn.functional.max_pool1d(x, self.kernel_size, self.stride, self.padding, self.dilation,
                                                      self.ceil_mode)

        ops = {
            "max_pool1d": aten_max_pool1d,
            "max_pool2d": aten_max_pool2d,
            "max_pool3d": aten_max_pool3d,
            "avg_pool1d": aten_avg_pool1d,
            "avg_pool2d": aten_avg_pool2d,
            "avg_pool3d": aten_avg_pool3d
        }

        ref_net = None
        aten_pooling = ops[op_type]

        return aten_pooling(), ref_net, f"aten::{op_type}"

    @pytest.mark.parametrize("params", d1_avg_params)
    @pytest.mark.parametrize("ceil_mode", [True, False])
    @pytest.mark.parametrize("count_include_pad", [True, False])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_avg_pool1d(self, params, ceil_mode, count_include_pad, ie_device, precision, ir_version):
        self._test(*self.create_model("avg_pool1d", **params, ceil_mode=ceil_mode, count_include_pad=count_include_pad),
                   ie_device, precision, ir_version, kwargs_to_prepare_input={'ndim': 3}, trace_model=True,
                   dynamic_shapes=False)

    @pytest.mark.parametrize("params", d2_avg_params)
    @pytest.mark.parametrize("ceil_mode", [True, False])
    @pytest.mark.parametrize("count_include_pad", [True, False])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_avg_pool2d(self, params, ceil_mode, count_include_pad, ie_device, precision, ir_version):
        self._test(*self.create_model("avg_pool2d", **params, ceil_mode=ceil_mode, count_include_pad=count_include_pad),
                   ie_device, precision, ir_version, trace_model=True, dynamic_shapes=False)

    @pytest.mark.parametrize("params", d3_avg_params)
    @pytest.mark.parametrize("ceil_mode", [True, False])
    @pytest.mark.parametrize("count_include_pad", [True, False])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_avg_pool3d(self, params, ceil_mode, count_include_pad, ie_device, precision, ir_version):
        self._test(*self.create_model("avg_pool3d", **params, ceil_mode=ceil_mode, count_include_pad=count_include_pad),
                   ie_device, precision, ir_version, kwargs_to_prepare_input={'ndim': 5}, trace_model=True,
                   dynamic_shapes=False)

    @pytest.mark.parametrize("params", d1_avg_params)
    @pytest.mark.parametrize("ceil_mode", [True, False])
    @pytest.mark.parametrize("dilation", [1, 2])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_max_pool1d(self, params, ceil_mode, dilation, ie_device, precision, ir_version):
        self._test(*self.create_model("max_pool1d", **params, ceil_mode=ceil_mode, dilation=dilation),
                   ie_device, precision, ir_version, kwargs_to_prepare_input={'ndim': 3}, dynamic_shapes=False)

    @pytest.mark.parametrize("params", d2_avg_params)
    @pytest.mark.parametrize("ceil_mode", [True, False])
    @pytest.mark.parametrize("dilation", [1, 2])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_max_pool2d(self, params, ceil_mode, dilation, ie_device, precision, ir_version):
        self._test(*self.create_model("max_pool2d", **params, ceil_mode=ceil_mode, dilation=dilation),
                   ie_device, precision, ir_version, dynamic_shapes=False)

    @pytest.mark.parametrize("params", d3_avg_params)
    @pytest.mark.parametrize("ceil_mode", [True, False])
    @pytest.mark.parametrize("dilation", [1, 2])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_max_pool3d(self, params, ceil_mode, dilation, ie_device, precision, ir_version):
        self._test(*self.create_model("max_pool3d", **params, ceil_mode=ceil_mode, dilation=dilation),
                   ie_device, precision, ir_version, kwargs_to_prepare_input={'ndim': 5}, dynamic_shapes=False)
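The `_prepare_input` helper above reuses one 5-D base shape and slices it down to the rank each pooling variant needs (`ndim=3` for 1-D, `4` for 2-D, `5` for 3-D pooling), which is why each test passes `kwargs_to_prepare_input` with the matching `ndim`. A minimal standalone sketch of that trick, assuming only NumPy:

```python
import numpy as np

base_shape = (1, 3, 15, 15, 15)  # N, C, then up to three spatial dims

# Slicing the base shape yields the right input rank per pooling variant:
# ndim=3 -> (1, 3, 15) for *_pool1d, ndim=4 -> (1, 3, 15, 15) for *_pool2d,
# ndim=5 -> the full 5-D shape for *_pool3d.
inputs = {ndim: np.random.randn(*base_shape[:ndim]).astype(np.float32)
          for ndim in (3, 4, 5)}
```

One base shape keeps the spatial extent identical across ranks, so the same kernel/stride/padding parameter dicts exercise comparable geometry in every test.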