openvino/tests/layer_tests/pytorch_tests/test_convolution_mode.py
Maxim Vafin 53e699eaba Add PyTorch Frontend (#15069)
* WIP

* update input validation

* upsample_nearest2d and upsample_bilinear2d support

* support leaky_relu add test for inplace relu

* update tests, add handler for ListConstruct

* Do not create extra outputs in main body

* add positive case with non-default value

* update testing

* update test, handle non constant size and scale

* remove ie_device

* add aten::group_norm support

* refactoring

* Enable aten::reshape_as operator and add layer test

* more tests

* Fix typo in test

* Resolve conflicts

* fix code style

* expand init version

* expand_as and tests

* add transposed convolutions support

* add tests

* initial support pad

* add circular

* update for differences in range

* cleanup

* refactor

* more tests

* apply review comments

* Add split+listunpack transformation

* Add split+getitem transformation

* Add test cases

* fix typo

* Minor fixes

* Apply suggestions from code review

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Apply suggestions from code review

* Small fix

* Support converting models without freezing

* support BoolTensor and masked_fill

* add support for aten::rsqrt and test for sqrt

* add cumsum and type_as

* support clamp

* support more matrix operations

* add tests

* Add aten::adaptive_avg_pool3d and layer test

* Change to rank

* fix code style in utils.hpp

* Update src/frontends/pytorch/src/op_table.cpp

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* fix code style

* add tests

* add xfail

* remove unnecessary broadcast

* Changes required by style formatter

* aten::_convolution_mode

* Changes requested by a reviewer

* remove code duplication

* add aten::unbind transformation

* full, zeros and ones

* Support getattr list and unrolling nested ifs

* Remove line change

* Enable back freezing in layer tests

* Add aten::norm operator and layer test

* Small fix in layer test

* add aten::roll

* add empty line

* Typo fix

* fix style

* fix style v2

* add pytorch frontend to wheel

* Support all types of numeric norms

* add check for dynamic shapes

* remove random change

* merge statements

* add min and max ops support

* aten::max and aten::min

* move axes range creation to utils

* add transformation for tuple results, update tests

* fix copyright

* aten::var

* add test and translation for numel

* ignore aten::clone

* Add layer test for aten::add operator

* Fix typo

* Remove redundant import

* Add parameter name in forward method

* fix code style

* apply review comments

* Add size+slice+listunpack transform

* Add append listunpack transformation

* Register transformation

* aten::where

* update implementation

* Fix issue with getitem

* Fix getitem

* Add layer test for aten::view operator

* Add tests for listunpack

* add test for aten::div

* fix style

* update aten::adaptive_max_pool2d

* fix style

* add aten::floor_divide

* aten::addmm support alpha and beta with different dtype

* nonzero

* Change test name

* update test cases to include other dtypes

* aten::arange

* prim::max transformation for ListConstruct

* rename op

* generalize conv2d implementation for conv1d and conv3d

* aten::unsqueeze_ and tests for aten::unsqueeze (#70)

* add aten::le, aten::ge and tests for other tensor comparison ops (#74)

* add support trigonometry ops (#73)

* support aten::upsample_bicubic2d, aten::ceil, aten::floor (#72)

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* extend and add tests for avg_pool and max_pool

* extend tests and constant filling ops

* fix as_tensor and full ops

* aten::repeat

* fix code style

* aten::im2col (#61)

* aten::im2col

* remove debug prints, add number of elements check

* fix failed tests

* move helper function

* use split

* Update src/frontends/pytorch/src/op/im2col.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* fix code style

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Update src/frontends/pytorch/src/utils.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* fix code style

* revert removing floordiv, add floor_divide file

* Fix merge issue

* reduce code duplication

* refactor

* Add len operator with layer test

* update clamp to support mixed precision and add support torch.long for constants

* aten::selu

* add trunc mode to div

* add else statement

* Add test case to layer test

* Fix submodules (#88)

* update test file

* fix namings

* execute in fp64 and convert back to initial precision

* Revert set_output_size to master. Small fix in If validate

* Fix build and code style

* fix failed tests

* Add torchvision::nms operator and layer test

* Change requested by a reviewer

* Remove div test

* convert constants to input type

* Mark some cases in div tests as xfail (#93)

* Small refactoring (#94)

* Small refactoring

* Fix type

* Fix python codestyle

* Incremental fix code style (#95)

* Fix style (#96)

* Fix copyright

* Fix code style

* Branch clean up (#97)

* Optimize includes and force opset10 (#98)

* Optimize includes

* Force opset10 in pt fe

* Fix codestyle (#99)

* Fix style

* Fix clang codestyle

* Fix cerr with debug log

* Update src/bindings/python/src/pyopenvino/frontend/pytorch/decoder.cpp

* Add pytorch dependency only if pytorch frontend is enabled

* Update src/bindings/python/src/pyopenvino/CMakeLists.txt

* Add layer tests to precommit (#100)

* Add layer tests to precommit

* Remove accidentally added files

* Apply code style on layer tests

* batch norm tests and fixes

* move default weight and bias to else block

* reduce code duplication

* Changes requested by a reviewer

* Changes requested by a reviewer

* Remove dependency from pytorch in pyopenvino (#102)

* Remove dependency from pytorch when fe is disabled

* Change docstring

* Remove pytorch FE dependency from pyopenvino

* Apply codestyle (#107)

* Apply codestyle

* Remove commented line

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix mock FE test (#108)

* Fix mock FE test (#111)

* Revert changes in StridedSlice (#114)

* Small refactoring (#116)

* Small refactoring

* Fix codestyle

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

* Update src/frontends/pytorch/src/op/group_norm.cpp

* Fix cmake copyright define (#117)

* Update src/frontends/pytorch/src/op/arange.cpp

* Apply suggestions from code review

* Update build configs (#120)

* Fix build configs

* Update type cast in full.cpp

* Apply review feedback (#121)

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue after master merge (#122)

* Fix issue after master merge

* Fix build

Co-authored-by: eaidova <ekaterina.aidova@intel.com>
Co-authored-by: bszmelcz <bartosz.szmelczynski@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
Co-authored-by: sikorsl1 <leonard.sikorski@intel.com>
Co-authored-by: Leonard Sikorski <l.sikorski123@gmail.com>
Co-authored-by: Mateusz <mateusz.mikolajczyk@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-01-18 18:16:57 +04:00


# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import pytest

from pytorch_layer_test_class import PytorchLayerTest
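

# Layer tests for torch._convolution_mode (aten::_convolution_mode), i.e. convolution
# invoked with a string padding mode ('same' or 'valid') instead of explicit padding
# values. The cases below cover 1d, 2d and 3d inputs, regular and depthwise (grouped)
# weights, non-unit strides and dilations, and optional bias.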
class TestConv2D(PytorchLayerTest):
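    # Random input of up to rank 5; the fixed shape is sliced down to the requested
    # rank, so the same helper serves the 1d (ndim=3), 2d (ndim=4, default) and
    # 3d (ndim=5) tests below.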
    def _prepare_input(self, ndim=4):
        import numpy as np

        input_shape = (1, 3, 10, 10, 10)
        return (np.random.randn(*input_shape[:ndim]).astype(np.float32),)
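
    # Builds a small torch.nn.Module whose forward calls torch._convolution_mode
    # directly with the parametrized weights, strides, padding mode, dilations and
    # groups. The returned "aten::_convolution_mode" string identifies the operation
    # under test for the PytorchLayerTest harness.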
    def create_model(self, weights_shape, strides, pads, dilations, groups, bias):
        import torch

        class aten_convolution_mode(torch.nn.Module):
            def __init__(self):
                super(aten_convolution_mode, self).__init__()
                self.weight = torch.randn(weights_shape)
                self.bias = None
                if bias:
                    self.bias = torch.randn(weights_shape[0])
                self.strides = strides
                self.pads = pads
                self.dilations = dilations
                self.groups = groups

            def forward(self, x):
                return torch._convolution_mode(x, self.weight, self.bias, self.strides, self.pads, self.dilations,
                                               self.groups)

        ref_net = None

        return aten_convolution_mode(), ref_net, "aten::_convolution_mode"
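
    # For reference only (not executed here): in user code this operator is normally
    # reached by calling F.conv1d/conv2d/conv3d with a string padding mode, which
    # dispatches to torch._convolution_mode under the hood (assuming a torch build
    # that supports string padding, roughly 1.9 and newer), e.g.:
    #
    #   import torch.nn.functional as F
    #   x = torch.randn(1, 3, 10, 10)
    #   w = torch.randn(1, 3, 3, 3)
    #   y = F.conv2d(x, w, bias=None, stride=1, padding="same", dilation=1, groups=1)
    #
    # 1d cases: 'same'/'valid' padding with strides, dilations and depthwise
    # (groups == 3) weights. Dynamic shapes are only exercised when groups == 1,
    # mirroring the dynamic_shapes argument passed to _test below.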
@pytest.mark.parametrize("params",
[
{'weights_shape': [1, 3, 3], 'strides': [1], 'pads': 'same', 'dilations': [1],
'groups': 1},
{'weights_shape': [1, 3, 3], 'strides': [1], 'pads': 'valid', 'dilations': [1],
'groups': 1},
{'weights_shape': [1, 3, 3], 'strides': [1], 'pads': 'same', 'dilations': [2],
'groups': 1},
{'weights_shape': [1, 3, 3], 'strides': [1], 'pads': 'valid', 'dilations': [2],
'groups': 1},
{'weights_shape': [3, 1, 1], 'strides': [1], 'pads': 'same', 'dilations': [1],
'groups': 3},
{'weights_shape': [3, 1, 1], 'strides': [1], 'pads': 'valid', 'dilations': [1],
'groups': 3},
{'weights_shape': [1, 3, 3], 'strides': [2], 'pads': 'valid', 'dilations': [1],
'groups': 1},
{'weights_shape': [1, 3, 3], 'strides': [2], 'pads': 'valid', 'dilations': [2],
'groups': 1},
{'weights_shape': [3, 1, 1], 'strides': [1], 'pads': 'same', 'dilations': [2],
'groups': 3},
{'weights_shape': [3, 1, 1], 'strides': [1], 'pads': 'valid', 'dilations': [2],
'groups': 3},
])
@pytest.mark.parametrize("bias", [True, False])
@pytest.mark.nightly
@pytest.mark.precommit
def test_convolution_mode_1d(self, params, bias, ie_device, precision, ir_version):
self._test(*self.create_model(**params, bias=bias),
ie_device, precision, ir_version, dynamic_shapes=params['groups'] == 1,
kwargs_to_prepare_input={'ndim': 3})
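
    # 2d cases: 'same'/'valid' padding combined with symmetric and asymmetric strides
    # and dilations, for both regular (groups == 1) and depthwise (groups == 3) weights.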
@pytest.mark.parametrize("params",
[
{'weights_shape': [1, 3, 3, 3], 'strides': [1, 1], 'pads': 'same', 'dilations': [1, 1],
'groups': 1},
{'weights_shape': [1, 3, 3, 3], 'strides': [1, 1], 'pads': 'valid',
'dilations': [1, 1], 'groups': 1},
{'weights_shape': [1, 3, 3, 3], 'strides': [1, 1], 'pads': 'same', 'dilations': [2, 2],
'groups': 1},
{'weights_shape': [1, 3, 3, 3], 'strides': [1, 1], 'pads': 'valid',
'dilations': [2, 2], 'groups': 1},
{'weights_shape': [3, 1, 1, 1], 'strides': [1, 1], 'pads': 'same', 'dilations': [1, 1],
'groups': 3},
{'weights_shape': [3, 1, 1, 1], 'strides': [1, 1], 'pads': 'valid',
'dilations': [1, 1], 'groups': 3},
{'weights_shape': [1, 3, 3, 3], 'strides': [2, 2], 'pads': 'valid',
'dilations': [1, 1], 'groups': 1},
{'weights_shape': [1, 3, 3, 3], 'strides': [2, 2], 'pads': 'valid',
'dilations': [2, 2], 'groups': 1},
{'weights_shape': [1, 3, 3, 3], 'strides': [2, 1], 'pads': 'valid',
'dilations': [1, 1], 'groups': 1},
{'weights_shape': [3, 1, 1, 1], 'strides': [2, 2], 'pads': 'valid',
'dilations': [1, 1], 'groups': 3},
{'weights_shape': [3, 1, 1, 1], 'strides': [2, 2], 'pads': 'valid',
'dilations': [2, 2], 'groups': 3},
{'weights_shape': [3, 1, 1, 1], 'strides': [2, 1], 'pads': 'valid',
'dilations': [1, 1], 'groups': 3},
{'weights_shape': [3, 1, 1, 1], 'strides': [1, 1], 'pads': 'same', 'dilations': [2, 1],
'groups': 3},
{'weights_shape': [3, 1, 1, 1], 'strides': [1, 1], 'pads': 'valid',
'dilations': [2, 1], 'groups': 3},
{'weights_shape': [3, 1, 1, 1], 'strides': [1, 1], 'pads': 'same', 'dilations': [2, 2],
'groups': 3},
{'weights_shape': [3, 1, 1, 1], 'strides': [1, 1], 'pads': 'valid',
'dilations': [2, 2], 'groups': 3},
])
@pytest.mark.parametrize("bias", [True, False])
@pytest.mark.nightly
@pytest.mark.precommit
def test_convolution_mode_2d(self, params, bias, ie_device, precision, ir_version):
self._test(*self.create_model(**params, bias=bias),
ie_device, precision, ir_version, dynamic_shapes=params['groups'] == 1)
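
    # 3d cases: same coverage pattern on rank-5 inputs (kwargs_to_prepare_input ndim=5),
    # including per-axis strides and dilations.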
@pytest.mark.parametrize("params",
[
{'weights_shape': [1, 3, 3, 3, 3], 'strides': [1, 1, 1], 'pads': 'same',
'dilations': [1, 1, 1], 'groups': 1},
{'weights_shape': [1, 3, 3, 3, 3], 'strides': [1, 1, 1], 'pads': 'valid',
'dilations': [1, 1, 1], 'groups': 1},
{'weights_shape': [3, 1, 1, 1, 1], 'strides': [1, 1, 1], 'pads': 'same',
'dilations': [1, 1, 1], 'groups': 3},
{'weights_shape': [3, 1, 1, 1, 1], 'strides': [1, 1, 1], 'pads': 'valid',
'dilations': [1, 1, 1], 'groups': 3},
{'weights_shape': [1, 3, 3, 3, 3], 'strides': [2, 2, 1], 'pads': 'valid',
'dilations': [1, 1, 1], 'groups': 1},
{'weights_shape': [1, 3, 3, 3, 3], 'strides': [2, 2, 2], 'pads': 'valid',
'dilations': [1, 1, 1], 'groups': 1},
{'weights_shape': [1, 3, 3, 3, 3], 'strides': [2, 2, 2], 'pads': 'valid',
'dilations': [2, 2, 2], 'groups': 1},
{'weights_shape': [3, 1, 1, 1, 1], 'strides': [1, 1, 1], 'pads': 'same',
'dilations': [2, 1, 2], 'groups': 3},
{'weights_shape': [3, 1, 1, 1, 1], 'strides': [1, 1, 1], 'pads': 'valid',
'dilations': [2, 1, 2], 'groups': 3},
])
@pytest.mark.parametrize("bias", [True, False])
@pytest.mark.nightly
@pytest.mark.precommit
def test_convolution_mode_3d(self, params, bias, ie_device, precision, ir_version):
self._test(*self.create_model(**params, bias=bias),
ie_device, precision, ir_version, dynamic_shapes=params['groups'] == 1,
kwargs_to_prepare_input={'ndim': 5})