openvino/tests/layer_tests/pytorch_tests/test_min_max.py
Maxim Vafin · 53e699eaba · Add PyTorch Frontend (#15069)
* WIP

* update input validation

* upsample_nearest2d and upsample_bilinear2d support

* support leaky_relu, add test for inplace relu

* update tests, add handler for ListConstruct

* Do not create extra outputs in main body

* add positive case with non-default value

* update testing

* update test, handle non constant size and scale

* remove ie_device

* add aten::group_norm support

* refactoring

* Enable aten::reshape_as operator and add layer test

* more tests

* Fix typo in test

* Resolve conflicts

* fix code style

* expand init version

* expand_as and tests

* add transposed convolutions support

* add tests

* initial support pad

* add circular

* update for differences in range

* cleanup

* refactor

* more tests

* apply review comments

* Add split+listunpack transformation

* Add split+getitem transformation

* Add test cases

* fix typo

* Minor fixes

* Apply suggestions from code review

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Apply suggestions from code review

* Small fix

* Support converting models without freezing

* support BoolTensor and masked_fill

* add support aten::rsqrt and test for sqrt

* add cumsum and type_as

* support clamp

* support more matrix operations

* add tests

* Add aten::adaptive_avg_pool3d and layer test

* Change to rank

* fix code style in utils.hpp

* Update src/frontends/pytorch/src/op_table.cpp

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* fix code style

* add tests

* add xfail

* remove unnecessary broadcast

* Changes required by style formatter

* aten::_convolution_mode

* Changes requested by a reviewer

* remove code duplication

* add aten::unbind transformation

* full, zeros and ones

* Support getattr list and unrolling nested ifs

* Remove line change

* Enable back freezing in layer tests

* Add aten::norm operator and layer test

* Small fix in layer test

* add aten::roll

* add empty line

* Typo fix

* fix style

* fix style v2

* add pytorch frontend to wheel

* Support all types of numeric norms

* add check for dynamic shapes

* remove random change

* merge statements

* add min and max ops support

* aten::max and aten::min

* move axes range creation to utils

* add transformation for tuple results, update tests

* fix copyright

* aten::var

* add test and translation for numel

* ignore aten::clone

* Add layer test for aten::add operator

* Fix typo

* Remove redundant import

* Add parameter name in forward method

* fix code style

* apply review comments

* Add size+slice+listunpack transform

* Add append listunpack transformation

* Register transformation

* aten::where

* update realization

* Fix issue with getitem

* Fix getitem

* Add layer test for aten::view operator

* Add tests for listunpack

* add test for aten::div

* fix style

* update aten::adaptive_max_pool2d

* fix style

* add aten::floor_divide

* aten::addmm support alpha and beta with different dtype

* nonzero

* Change test name

* update test cases to include other dtypes

* aten::arange

* prim::max transformation for ListConstruct

* rename op

* generalize conv2d implementation for conv1d and conv3d

* aten::unsqueeze_ and tests for aten::unsqueeze (#70)

* add aten::le, aten::ge and tests for other tensor comparison ops (#74)

* add support trigonometry ops (#73)

* support aten::upsample_bicubic2d, aten::ceil, aten::floor (#72)

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* extend and add tests for avg_pool and max_pool

* extend tests and constant filling ops

* fix as_tensor and full ops

* aten::repeat

* fix code style

* aten::im2col (#61)

* aten::im2col

* remove debug prints, add number of elements check

* fix failed tests

* move helper function

* use split

* Update src/frontends/pytorch/src/op/im2col.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* fix code style

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Update src/frontends/pytorch/src/utils.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* fix code style

* revert removing floordiv, add floor_divide file

* Fix merge issue

* reduce code duplication

* refactor

* Add len operator with layer test

* update clamp to support mixed precision and add support torch.long for constants

* aten::selu

* add trunc mode to div

* add else statement

* Add test case to layer test

* Fix submodules (#88)

* update test file

* fix namings

* execute in fp64 and convert back to initial precision

* Revert set_output_size to master. Small fix in If validate

* Fix build and code style

* fix failed tests

* Add torchvision::nms operator and layer test

* Change requested by a reviewer

* Remove div test

* convert constants to input type

* Mark some cases in div tests as xfail (#93)

* Small refactoring (#94)

* Small refactoring

* Fix type

* Fix python codestyle

* Incremental fix code style (#95)

* Fix style (#96)

* Fix copyright

* Fix code style

* Branch clean up (#97)

* Optimize includes and force opset10 (#98)

* Optimize includes

* Force opset10 in pt fe

* Fix codestyle (#99)

* Fix style

* Fix clang codestyle

* Fix cerr with debug log

* Update src/bindings/python/src/pyopenvino/frontend/pytorch/decoder.cpp

* Add pytorch dependency only if pytorch frontend is enabled

* Update src/bindings/python/src/pyopenvino/CMakeLists.txt

* Add layer tests to precommit (#100)

* Add layer tests to precommit

* Remove accidentally added files

* Apply code style on layer tests

* batch norm tests and fixes

* move default weight and bias to else block

* reduce code duplication

* Changes requested by a reviewer

* Changes requested by a reviewer

* Remove dependency from pytorch in pyopenvino (#102)

* Remove dependency from pytorch when fe is disabled

* Change docstring

* Remove pytorch FE dependency from pyopenvino

* Apply codestyle (#107)

* Apply codestyle

* Remove commented line

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix mock FE test (#108)

* Fix mock FE test (#111)

* Revert changes in StridedSlice (#114)

* Small refactoring (#116)

* Small refactoring

* Fix codestyle

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

* Update src/frontends/pytorch/src/op/group_norm.cpp

* Fix cmake copyright define (#117)

* Update src/frontends/pytorch/src/op/arange.cpp

* Apply suggestions from code review

* Update build configs (#120)

* Fix build configs

* Update type cast in full.cpp

* Apply review feedback (#121)

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue after master merge (#122)

* Fix issue after master merge

* Fix build

Co-authored-by: eaidova <ekaterina.aidova@intel.com>
Co-authored-by: bszmelcz <bartosz.szmelczynski@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
Co-authored-by: sikorsl1 <leonard.sikorski@intel.com>
Co-authored-by: Leonard Sikorski <l.sikorski123@gmail.com>
Co-authored-by: Mateusz <mateusz.mikolajczyk@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-01-18 18:16:57 +04:00

139 lines · 5.2 KiB · Python

# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import pytest

from pytorch_layer_test_class import PytorchLayerTest


class TestMinMax(PytorchLayerTest):
    def _prepare_input(self, second_input=False):
        import numpy as np
        if not second_input:
            return (np.random.randn(1, 3, 10, 10).astype(np.float32),)
        return (np.random.randn(1, 3, 10, 10).astype(np.float32),
                np.random.randn(1, 3, 10, 10).astype(np.float32))

    def create_model(self, op_type, axes, keep_dims, single_input=True):
        import torch
        op_types = {
            'max': torch.max,
            'min': torch.min
        }

        op = op_types[op_type]

        class aten_min_max(torch.nn.Module):
            def __init__(self, op):
                super(aten_min_max, self).__init__()
                self.op = op

            def forward(self, x):
                return self.op(x)

        class aten_min_max_3args(torch.nn.Module):
            def __init__(self, op, axes=None, keep_dims=None):
                super(aten_min_max_3args, self).__init__()
                self.op = op
                self.axes = axes
                self.keep_dims = keep_dims

            def forward(self, x):
                return self.op(x, self.axes, self.keep_dims)

        class aten_min_max_2args(torch.nn.Module):
            def __init__(self, op):
                super(aten_min_max_2args, self).__init__()
                self.op = op

            def forward(self, x, y):
                return self.op(x, y)

        ref_net = None
        if axes is None and keep_dims is None:
            model_cls = aten_min_max(op) if single_input else aten_min_max_2args(op)
        else:
            model_cls = aten_min_max_3args(op, axes, keep_dims)
        return model_cls, ref_net, f"aten::{op_type}"

    @pytest.mark.parametrize("axes,keep_dims", [(None, None), (1, False), (1, True), (-1, False), (-1, True)])
    @pytest.mark.parametrize("op_type", ['min', 'max'])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_reduce_min_max(self, axes, keep_dims, op_type, ie_device, precision, ir_version):
        self._test(*self.create_model(op_type, axes, keep_dims, single_input=True),
                   ie_device, precision, ir_version)

    @pytest.mark.parametrize("op_type", ['min', 'max'])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_min_max(self, op_type, ie_device, precision, ir_version):
        self._test(*self.create_model(op_type, None, None, single_input=False),
                   ie_device, precision, ir_version, kwargs_to_prepare_input={"second_input": True})
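
# For reference, the aten::min / aten::max patterns exercised by TestMinMax map onto the
# standard torch overloads (shown for max; torch.min mirrors them):
#   torch.max(x)                      -> one value reduced over all elements
#   torch.max(x, dim, keepdim=False)  -> namedtuple (values, indices) reduced along dim
#   torch.max(x, y)                   -> elementwise maximum of two tensors
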
class TestPrimMax(PytorchLayerTest):
    def _prepare_input(self, first_input, second_input, dtype="float"):
        import numpy as np
        first_array = np.array(first_input).astype(dtype)
        if not second_input:
            return (first_array,)
        second_array = np.array(second_input).astype(dtype)
        return (first_array, second_array)

    def create_model(self, case):
        import torch

        class prim_max_2_values(torch.nn.Module):
            def forward(self, x: float, y: float):
                return max(x, y)

        class prim_max_2_list_values(torch.nn.Module):
            def forward(self, x: float, y: float):
                return max([x, x + y], [y, y - x])

        class prim_max_1list_several_values(torch.nn.Module):
            def forward(self, x: float, y: float):
                return max([x, y, x + y])

        class prim_max_one_value(torch.nn.Module):
            def forward(self, x: float, y: float):
                return max(x)

        cases = {
            "2_values": prim_max_2_values,
            "2_list_values": prim_max_2_list_values,
            "list_several_values": prim_max_1list_several_values,
            "one_value": prim_max_one_value
        }
        model_cls = cases[case]()
        ref_net = None
        return model_cls, ref_net, "prim::max"

    @pytest.mark.parametrize("case", ["2_values", "2_list_values", "list_several_values", "one_value"])
    @pytest.mark.parametrize("kwargs_to_prepare_input", [
        {"first_input": 0, "second_input": 1, "dtype": "float"},
        {"first_input": 1, "second_input": 1, "dtype": "float"},
        {"first_input": 2, "second_input": 1, "dtype": "float"},
        {"first_input": 0, "second_input": 1, "dtype": "int"},
        {"first_input": 1, "second_input": 1, "dtype": "int"},
        {"first_input": 2, "second_input": 1, "dtype": "int"},
        # bool is not supported by OV
        pytest.param({"first_input": 0, "second_input": 1, "dtype": "bool"}, marks=pytest.mark.xfail),
        pytest.param({"first_input": 1, "second_input": 1, "dtype": "bool"}, marks=pytest.mark.xfail),
        pytest.param({"first_input": 2, "second_input": 1, "dtype": "bool"}, marks=pytest.mark.xfail),
    ])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_min_max(self, case, kwargs_to_prepare_input, ie_device, precision, ir_version):
        self._test(*self.create_model(case),
                   ie_device, precision, ir_version, kwargs_to_prepare_input=kwargs_to_prepare_input)
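
For a quick local run, the precommit-marked cases above can be selected by marker; a minimal sketch, assuming the shared layer-test conftest (which supplies the ie_device, precision, and ir_version fixtures) is picked up when invoked from openvino/tests/layer_tests:

# Illustrative runner sketch; the relative path and marker selection assume the layout
# shown above (openvino/tests/layer_tests/pytorch_tests/test_min_max.py).
import pytest

if __name__ == "__main__":
    raise SystemExit(pytest.main(["pytorch_tests/test_min_max.py", "-m", "precommit", "-v"]))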