Add PyTorch Frontend (#15069)

* WIP

* update input validation

* upsample_nearest2d and upsample_bilinear2d support

* support leaky_relu, add test for inplace relu

* update tests, add handler for ListConstruct

* Do not create extra outputs in main body

* add positive case with non-default value

* update testing

* update test, handle non-constant size and scale

* remove ie_device

* add aten::group_norm support

* refactoring

* Enable aten::reshape_as operator and add layer test

* more tests

* Fix typo in test

* Resolve conflicts

* fix code style

* expand init version

* expand_as and tests

* add transposed convolutions support

* add tests

* initial pad support

* add circular padding mode

* update for differences in range

* cleanup

* refactor

* more tests

* apply review comments

* Add split+listunpack transformation

* Add split+getitem transformation

* Add test cases

* fix typo

* Minor fixes

* Apply suggestions from code review

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Apply suggestions from code review

* Small fix

* Support converting models without freezing

* support BoolTensor and masked_fill

* add support aten::rsqrt and test for sqrt

* add cumsum and type_as

* support clamp

* support more matrix operations

* add tests

* Add aten::adaptive_avg_pool3d and layer test

* Change to rank

* fix code style in utils.hpp

* Update src/frontends/pytorch/src/op_table.cpp

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* fix code style

* add tests

* add xfail

* remove unnecessary broadcast

* Changes required by style formatter

* aten::_convolution_mode

* Changes requested by a reviewer

* remove code duplication

* add aten::unbind transformation

* full, zeros and ones

* Support getattr list and unrolling nested ifs

* Remove line change

* Enable back freezing in layer tests

* Add aten::norm operator and layer test

* Small fix in layer test

* add aten::roll

* add empty line

* Typo fix

* fix style

* fix style v2

* add pytorch frontend to wheel

* Support all types of numeric norms

* add check for dynamic shapes

* remove random change

* merge statements

* add min and max ops support

* aten::max and aten::min

* move axes range creation to utils

* add transformation for tuple results, update tests

* fix copyright

* aten::var

* add test and translation for numel

* ignore aten::clone

* Add layer test for aten::add operator

* Fix typo

* Remove redundant import

* Add parameter name in forward method

* fix code style

* apply review comments

* Add size+slice+listunpack transform

* Add append listunpack transformation

* Register transformation

* aten::where

* update implementation

* Fix issue with getitem

* Fix getitem

* Add layer test for aten::view operator

* Add tests for listunpack

* add test for aten::div

* fix style

* update aten::adaptive_max_pool2d

* fix style

* add aten::floor_divide

* aten::addmm support alpha and beta with different dtype

* nonzero

* Change test name

* update test cases to include other dtypes

* aten::arange

* prim::max transformation for ListConstruct

* rename op

* generalize conv2d implementation for conv1d and conv3d

* aten::unsqueeze_ and tests for aten::unsqueeze (#70)

* add aten::le, aten::ge and tests for other tensor comparison ops (#74)

* add support for trigonometry ops (#73)

* support aten::upsample_bicubic2d, aten::ceil, aten::floor (#72)

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* extend and add tests for avg_pool and max_pool

* extend tests and constant filling ops

* fix as_tensor and full ops

* aten::repeat

* fix code style

* aten::im2col (#61)

* aten::im2col

* remove debug prints, add number of elements check

* fix failed tests

* move helper function

* use split

* Update src/frontends/pytorch/src/op/im2col.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* fix code style

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Update src/frontends/pytorch/src/utils.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* fix code style

* revert removing floordiv, add floor_divide file

* Fix merge issue

* reduce code duplication

* refactor

* Add len operator with layer test

* update clamp to support mixed precision and add support for torch.long constants

* aten::selu

* add trunc mode to div

* add else statement

* Add test case to layer test

* Fix submodules (#88)

* update test file

* fix naming

* execute in fp64 and convert back to initial precision

* Revert set_output_size to master. Small fix in If validate

* Fix build and code style

* fix failed tests

* Add torchvision::nms operator and layer test

* Change requested by a reviewer

* Remove div test

* convert constants to input type

* Mark some cases in div tests as xfail (#93)

* Small refactoring (#94)

* Small refactoring

* Fix type

* Fix python codestyle

* Incremental fix code style (#95)

* Fix style (#96)

* Fix copyright

* Fix code style

* Branch clean up (#97)

* Optimize includes and force opset10 (#98)

* Optimize includes

* Force opset10 in pt fe

* Fix codestyle (#99)

* Fix style

* Fix clang codestyle

* Fix cerr with debug log

* Update src/bindings/python/src/pyopenvino/frontend/pytorch/decoder.cpp

* Add pytorch dependency only if pytorch frontend is enabled

* Update src/bindings/python/src/pyopenvino/CMakeLists.txt

* Add layer tests to precommit (#100)

* Add layer tests to precommit

* Remove accidentally added files

* Apply code style on layer tests

* batch norm tests and fixes

* move default weight and bias to else block

* reduce code duplication

* Changes requested by a reviewer

* Changes requested by a reviewer

* Remove dependency from pytorch in pyopenvino (#102)

* Remove dependency from pytorch when fe is disabled

* Change docstring

* Remove pytorch FE dependency from pyopenvino

* Apply codestyle (#107)

* Apply codestyle

* Remove commented line

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix mock FE test (#108)

* Fix mock FE test (#111)

* Revert changes in StridedSlice (#114)

* Small refactoring (#116)

* Small refactoring

* Fix codestyle

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

* Update src/frontends/pytorch/src/op/group_norm.cpp

* Fix cmake copyright define (#117)

* Update src/frontends/pytorch/src/op/arange.cpp

* Apply suggestions from code review

* Update build configs (#120)

* Fix build configs

* Update type cast in full.cpp

* Apply review feedback (#121)

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue after master merge (#122)

* Fix issue after master merge

* Fix build

Co-authored-by: eaidova <ekaterina.aidova@intel.com>
Co-authored-by: bszmelcz <bartosz.szmelczynski@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
Co-authored-by: sikorsl1 <leonard.sikorski@intel.com>
Co-authored-by: Leonard Sikorski <l.sikorski123@gmail.com>
Co-authored-by: Mateusz <mateusz.mikolajczyk@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Maxim Vafin committed on 2023-01-18 15:16:57 +01:00 (via GitHub)
parent 1794fb40a0
commit 53e699eaba
203 changed files with 10707 additions and 23 deletions

View File

@ -462,6 +462,13 @@ jobs:
WORKSPACE: $(INSTALL_DIR)
displayName: 'Samples Smoke Tests'
- script: |
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
export PYTHONPATH=$(REPO_DIR)/tools/mo/:$(LAYER_TESTS_DIR):$PYTHONPATH
export TEST_DEVICE=CPU
$(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/pytorch_tests/ -m precommit --junitxml=$(INSTALL_TEST_DIR)/TEST-pytorch.xml
displayName: 'PyTorch Layer Tests'
- script: |
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
export PYTHONPATH=$(REPO_DIR)/tools/mo/:$(LAYER_TESTS_DIR):$PYTHONPATH

View File

@ -124,6 +124,7 @@ jobs:
-DENABLE_TEMPLATE=OFF
-DENABLE_OV_ONNX_FRONTEND=OFF
-DENABLE_OV_PADDLE_FRONTEND=OFF
-DENABLE_OV_PYTORCH_FRONTEND=OFF
-DENABLE_OV_TF_FRONTEND=OFF
-S $(REPO_DIR)
-B $(BUILD_DIR)

View File

@ -135,6 +135,7 @@ jobs:
-DENABLE_INTEL_GNA=OFF \
-DENABLE_OV_TF_FRONTEND=OFF \
-DENABLE_OV_PADDLE_FRONTEND=OFF \
-DENABLE_OV_PYTORCH_FRONTEND=OFF \
-DENABLE_OV_ONNX_FRONTEND=OFF \
-DENABLE_PYTHON=OFF \
-DENABLE_TESTS=ON \

View File

@ -115,6 +115,7 @@ jobs:
-DENABLE_COMPILE_TOOL=OFF
-DENABLE_OV_TF_FRONTEND=OFF
-DENABLE_OV_PADDLE_FRONTEND=OFF
-DENABLE_OV_PYTORCH_FRONTEND=OFF
-DENABLE_OPENVINO_DEBUG=OFF
-S $(REPO_DIR)
-B $(BUILD_DIR)

View File

@ -130,6 +130,7 @@ jobs:
-DENABLE_TESTS=OFF ^
-DENABLE_OV_ONNX_FRONTEND=OFF ^
-DENABLE_OV_PADDLE_FRONTEND=OFF ^
-DENABLE_OV_PYTORCH_FRONTEND=OFF ^
-DENABLE_OV_TF_FRONTEND=OFF ^
$(REPO_DIR)
workingDirectory: $(BUILD_DIR)
@ -175,6 +176,7 @@ jobs:
-DENABLE_TESTS=OFF ^
-DENABLE_OV_ONNX_FRONTEND=OFF ^
-DENABLE_OV_PADDLE_FRONTEND=OFF ^
-DENABLE_OV_PYTORCH_FRONTEND=OFF ^
-DENABLE_OV_TF_FRONTEND=OFF ^
$(REPO_DIR)
workingDirectory: $(BUILD_DIR_2)

View File

@ -61,6 +61,7 @@ RUN cmake .. \
-DENABLE_PROFILING_ITT=OFF \
-DENABLE_SAMPLES=OFF \
-DENABLE_OV_PADDLE_FRONTEND=OFF \
-DENABLE_OV_PYTORCH_FRONTEND=OFF \
-DENABLE_OV_TF_FRONTEND=OFF \
-DENABLE_OPENVINO_DEBUG=OFF \
-DCMAKE_INSTALL_PREFIX=/openvino/dist

View File

@ -136,6 +136,13 @@ if(ENABLE_OV_PADDLE_FRONTEND)
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
endif()
if(ENABLE_OV_PYTORCH_FRONTEND)
ov_coverage_extract(INPUT "openvino" OUTPUT "pytorch_frontend"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/frontends/pytorch/*")
ov_coverage_genhtml(INFO_FILE "pytorch_frontend"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
endif()
if(ENABLE_OV_TF_FRONTEND)
ov_coverage_extract(INPUT "openvino" OUTPUT "tf_frontend"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/frontends/tensorflow/*")

View File

@ -151,6 +151,7 @@ ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at run
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_dependent_option(ENABLE_SYSTEM_PROTOBUF "Use system protobuf" OFF
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND;BUILD_SHARED_LIBS" OFF)

View File

@ -261,6 +261,19 @@ macro(ov_cpack_settings)
set(paddle_copyright "generic")
endif()
if(ENABLE_OV_PYTORCH_FRONTEND)
set(CPACK_COMPONENT_PYTORCH_DESCRIPTION "OpenVINO PyTorch Frontend")
set(CPACK_COMPONENT_PYTORCH_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_PYTORCH_PACKAGE_NAME "libopenvino-pytorch-frontend-${cpack_name_ver}")
# since the PyTorch FE is a linkable target, we need to call ldconfig (i.e. `def_triggers`)
set(CPACK_DEBIAN_PYTORCH_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm};${def_triggers}")
ov_debian_add_lintian_suppression(pytorch
# we have different package name strategy; it suggests libopenvino-pytorch-frontend202230
"package-name-doesnt-match-sonames")
list(APPEND frontends pytorch)
set(pytorch_copyright "generic")
endif()
#
# core_dev: depends on core and frontends (since frontends don't want to provide its own dev packages)
#

View File

@ -226,6 +226,15 @@ macro(ov_cpack_settings)
set(paddle_copyright "generic")
endif()
if(ENABLE_OV_PYTORCH_FRONTEND)
set(CPACK_COMPONENT_PYTORCH_DESCRIPTION "OpenVINO PyTorch Frontend")
set(CPACK_RPM_PYTORCH_PACKAGE_NAME "libopenvino-pytorch-frontend-${cpack_name_ver}")
set(CPACK_RPM_PYTORCH_POST_INSTALL_SCRIPT_FILE "${def_triggers}")
set(CPACK_RPM_PYTORCH_POST_UNINSTALL_SCRIPT_FILE "${def_triggers}")
_ov_add_package(frontend_packages pytorch)
set(pytorch_copyright "generic")
endif()
#
# core_dev: depends on core and frontends (since frontends don't want to provide its own dev packages)
#

View File

@ -12,6 +12,7 @@
# * `Runtime`: OpenVINO C++ and C Core & Inference Runtime, frontend common
# * `ONNX`: OpenVINO ONNX frontend
# * `Paddle`: OpenVINO Paddle frontend
# * `PyTorch`: OpenVINO PyTorch frontend
# * `TensorFlow`: OpenVINO TensorFlow frontend
#
# If no components are specified, `Runtime` component is provided:
@ -41,6 +42,9 @@
# `openvino::frontend::paddle`
# Paddle FrontEnd target (optional)
#
# `openvino::frontend::pytorch`
# PyTorch FrontEnd target (optional)
#
# `openvino::frontend::tensorflow`
# TensorFlow FrontEnd target (optional)
#
@ -61,6 +65,9 @@
# `OpenVINO_Frontend_Paddle_FOUND`
# OpenVINO Paddle frontend is available
#
# `OpenVINO_Frontend_PyTorch_FOUND`
# OpenVINO PyTorch frontend is available
#
# `OpenVINO_Frontend_TensorFlow_FOUND`
# OpenVINO TensorFlow frontend is available
#
@ -293,11 +300,13 @@ set(${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND @ENABLE_OV_ONNX_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND @ENABLE_OV_PADDLE_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND @ENABLE_OV_TF_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND @ENABLE_OV_IR_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_PyTorch_FOUND @ENABLE_OV_PYTORCH_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_ONNX_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_Paddle_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_TensorFlow_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_IR_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_PyTorch_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_PyTorch_FOUND})
# if no components specified, only Runtime is provided
if(NOT ${CMAKE_FIND_PACKAGE_NAME}_FIND_COMPONENTS)

View File

@ -0,0 +1,21 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino
Low level wrappers for the FrontEnd C++ API.
"""
# flake8: noqa
from openvino.utils import add_openvino_libs_to_path
add_openvino_libs_to_path()
try:
from openvino.frontend.pytorch.py_pytorch_frontend import _FrontEndPytorchDecoder as Decoder
from openvino.frontend.pytorch.py_pytorch_frontend import _Type as DecoderType
except ImportError as err:
raise ImportError("OpenVINO PyTorch frontend is not available, please make sure the frontend is built. "
"{}".format(err))

View File

@ -0,0 +1,319 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# flake8: noqa
# mypy: ignore-errors
from openvino.frontend.pytorch.py_pytorch_frontend import _FrontEndPytorchDecoder as Decoder
from openvino.frontend.pytorch.py_pytorch_frontend import _Type as DecoderType
from openvino.runtime import op, PartialShape, Type as OVType, OVAny, Shape
import warnings
import torch
def get_type_from_py_type(value):
if isinstance(value, float):
return OVType.f32
if isinstance(value, int):
return OVType.i32
if isinstance(value, bool):
return OVType.boolean
return OVType.dynamic
def ivalue_to_constant(ivalue):
ov_type = get_type_from_py_type(ivalue)
if ov_type.is_static():
return op.Constant(ov_type, Shape([]), [ivalue]).outputs()
if isinstance(ivalue, list):
assert len(ivalue) > 0, "Can't deduce type for empty list"
ov_type = get_type_from_py_type(ivalue[0])
assert ov_type.is_static(), "Can't deduce type for list"
return op.Constant(ov_type, Shape([len(ivalue)]), ivalue).outputs()
if ivalue.type() in pt_to_ov_type_map:
try:
ovshape = PartialShape(ivalue.size())
ovtype = pt_to_ov_type_map[ivalue.type()]
ov_const = op.Constant(ovtype, ovshape.get_shape(), ivalue.data_ptr())
except Exception:
# old variant that makes a slow data copying
warnings.warn("[ WARNING ] Constant wasn't able to convert from data_ptr.")
nvalues = ivalue.numpy()
ovtype = np_to_ov_type_map[str(nvalues.dtype)]
ovshape = PartialShape(nvalues.shape)
ov_const = op.Constant(ovtype, ovshape.get_shape(), nvalues.flatten().tolist())
return ov_const.outputs()
def get_value_from_getattr(getattr_node, self_module):
assert getattr_node.kind() == "prim::GetAttr", "Got node of kind not equal to prim::GetAttr"
# GetAttr nodes can be nested
stack = []
while getattr_node.kind() == "prim::GetAttr":
stack.append(getattr_node)
inputs = list(getattr_node.inputs())
if len(inputs) == 0:
break
getattr_node = inputs[0].node()
module = self_module
while len(stack) > 0:
node = stack.pop()
assert (hasattr(module, node.s("name")))
module = getattr(module, node.s("name"))
return module
pt_to_ov_type_map = {
"float": OVType.f32,
"int": OVType.i32,
"torch.float32": OVType.f32,
"torch.int32": OVType.i32,
"torch.bool": OVType.boolean,
"torch.int64": OVType.i64,
"torch.FloatTensor": OVType.f32,
"torch.IntTensor": OVType.i32,
"torch.LongTensor": OVType.i64,
"torch.BoolTensor": OVType.boolean,
}
pt_to_py_type_map = {
"float": "float",
"int": "int",
"torch.float32": "float",
"torch.int32": "int",
"torch.int64": "int",
"torch.bool": "bool",
}
np_to_ov_type_map = {
"float32": OVType.f32,
"int32": OVType.i32,
}
class TorchScriptPythonDecoder (Decoder):
def __init__(self, pt_module, graph_element=None):
Decoder.__init__(self)
# We store every decoder created by this decoder so that none of them is deleted until the first decoder is deleted
self.m_decoders = []
if graph_element is None:
assert hasattr(pt_module, "inlined_graph"), "graph_element must have inlined_graph"
self.graph_element = pt_module.inlined_graph
else:
self.graph_element = graph_element
self.pt_module = pt_module
def inputs(self):
return [x.unique() for x in self.graph_element.inputs()]
def get_input(self, index):
return self.inputs()[index]
def get_input_shape(self, index):
raw_input = self._raw_input(index)
return self.get_shape_for_value(raw_input)
def get_input_type(self, index):
raw_input = self._raw_input(index)
return self.get_type_for_value(raw_input)
def get_output_shape(self, index):
output = self._raw_output(index)
return self.get_shape_for_value(output)
def get_output_type(self, index):
output = self._raw_output(index)
return self.get_type_for_value(output)
def _get_known_type_for_value(self, pt_type):
"""Returns known/unknown types wrapped as OVAny."""
# Check for simple scalar types first
if pt_type is None:
return OVAny(OVType.dynamic)
# TODO: Don't use str, use native types
if str(pt_type) in pt_to_ov_type_map:
return OVAny(pt_to_ov_type_map[str(pt_type)])
elif pt_type.__class__ is torch.TensorType:
# Tensor type, parse element type
return OVAny(DecoderType.Tensor(self._get_known_type_for_value(pt_type.dtype())))
elif pt_type.__class__ is torch.ListType:
element_type = pt_type.getElementType()
return OVAny(DecoderType.List(self._get_known_type_for_value(element_type)))
else:
# Not yet recognized
return OVAny(OVType.dynamic)
def get_shape_for_value(self, value):
if value.isCompleteTensor():
ps = PartialShape(value.type().sizes())
return ps
else:
# TODO: Recognize types that we can represent as a nested constructs with objects from DecoderType
# If recognized, return scalar instead of dynamic. Scalar means a single value of that custom type.
# See get_type_for_value for reference
pass
return PartialShape.dynamic()
def get_type_for_value(self, value):
full_type = self._get_known_type_for_value(value.type())
return full_type
def get_input_transpose_order(self, index):
raw_input = self._raw_input(index)
if raw_input.type() is not None and raw_input.type().kind() == "TensorType":
strides = raw_input.type().strides()
if strides is not None:
return [s[0] for s in sorted(enumerate(strides), key=lambda x:x[1], reverse=True)]
return []
def get_output_transpose_order(self, index):
output = self._raw_output(index)
if output.type() is not None and output.type().kind() == "TensorType":
strides = output.type().strides()
if strides is not None:
return [s[0] for s in sorted(enumerate(strides), key=lambda x:x[1], reverse=True)]
return []
def get_subgraph_size(self):
return len(self.get_subgraphs()) if hasattr(self.graph_element, "blocks") else 1
def visit_subgraph(self, node_visitor):
# make sure topological order is satisfied
for node in self.graph_element.nodes():
decoder = TorchScriptPythonDecoder(self.pt_module, node)
self.m_decoders.append(decoder)
node_visitor(decoder)
def get_subgraphs(self):
return list(self.graph_element.blocks())
def get_subgraph_decoder(self, index):
decoder = TorchScriptPythonDecoder(self.pt_module, self.get_subgraphs()[index])
self.m_decoders.append(decoder)
return decoder
def get_op_type(self):
return self.graph_element.kind()
def get_schema(self):
return self.graph_element.schema()
def outputs(self):
return [x.unique() for x in self.graph_element.outputs()]
def _raw_outputs(self):
return list(self.graph_element.outputs())
def _raw_output(self, index):
return self._raw_outputs()[index]
def _raw_inputs(self):
return list(self.graph_element.inputs())
def _raw_input(self, index):
return self._raw_inputs()[index]
def num_of_outputs(self):
return len(self.outputs())
def output(self, index):
return self.outputs()[index]
def mark_node(self, node):
return node
def try_decode_get_attr(self):
pt_value = get_value_from_getattr(self.graph_element, self.pt_module)
assert pt_value is not None, "Couldn't retrieve value from prim::GetAttr"
if not isinstance(pt_value, torch.jit.ScriptModule) or isinstance(pt_value, torch.jit.TracedModule):
return ivalue_to_constant(pt_value)
else:
return []
def as_constant(self):
if not self.get_op_type() == "prim::Constant":
return None
pt_value = self._raw_output(0)
pt_type_class = pt_value.type().__class__
if pt_type_class is torch.TensorType:
return self.as_constant_tensor(pt_value)
if pt_type_class is torch.ListType:
return self.as_constant_list(pt_value)
if str(pt_value.type()) in ["torch.int32", "int"]:
return op.Constant(OVType.i32, Shape([]), [pt_value.toIValue()]).outputs()
if str(pt_value.type()) in ["torch.float", "torch.FloatType", "float"]:
return op.Constant(OVType.f32, Shape([]), [pt_value.toIValue()]).outputs()
if str(pt_value.type()) in ["torch.bool", "bool"]:
return op.Constant(OVType.boolean, Shape([]), [pt_value.toIValue()]).outputs()
return None
def as_string(self):
if not self.get_op_type() == "prim::Constant":
return None
pt_value = self._raw_output(0)
if str(pt_value.type()) in ["torch.StringType", "str"]:
return pt_value.toIValue()
return None
def as_constant_tensor(self, pt_value):
ivalue = pt_value.toIValue()
if pt_value.isCompleteTensor():
try:
ivalue = ivalue.to(memory_format=torch.contiguous_format).detach().cpu()
except Exception:
warnings.warn("[ WARNING ] Tensor couldn't detach")
if str(pt_value.type().dtype()) in pt_to_ov_type_map:
# Constant interpretation doesn't respect new-full type of PT
# It recognizes only tensors, and gives lists as 1D tensors, and scalars as Tensor scalars
# So only tensor-type constants are supported
ovshape = PartialShape(pt_value.type().sizes())
ovtype = pt_to_ov_type_map[str(pt_value.type().dtype())]
# TODO: try-except here is a temporary WA for issues with data_ptr that we currently cannot predict; provide better solution
try:
# this is only possible with adding a new ctor for Constant Python binding
# TODO Check strides and pass them somehow
values = ivalue.data_ptr()
ov_const = op.Constant(ovtype, ovshape.get_shape(), values)
except Exception:
# old variant that makes a slow data copying
warnings.warn("[ WARNING ] Constant wasn't able to convert from data_ptr.")
values = ivalue.flatten().tolist()
ov_const = op.Constant(ovtype, ovshape.get_shape(), values)
return ov_const.outputs()
else:
return ivalue_to_constant(ivalue)
return None
def as_constant_list(self, pt_value):
# For now a list is treated as a 1D tensor; this is required by converters to avoid the need to massively
# rewrite them in the part where constant attributes are queried
pt_element_type = str(pt_value.type().getElementType())
ivalue = pt_value.toIValue()
is_known_type = pt_element_type in pt_to_ov_type_map
if is_known_type:
ovtype = pt_to_ov_type_map[pt_element_type]
ovshape = PartialShape([len(ivalue)])
ov_const = op.Constant(ovtype, ovshape.get_shape(), ivalue)
return ov_const.outputs()
def input_is_none(self, index):
if index >= len(self.inputs()) or self._raw_input(index) is None:
return True
else:
r_input = self._raw_input(index)
if str(r_input.type()) in ["torch.NoneType", "NoneType"]:
return True
else:
in_node = r_input.node()
if in_node.kind() == "prim::GetAttr":
pt_value = get_value_from_getattr(in_node, self.pt_module)
return pt_value is None
return False
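A hedged usage sketch, not part of this commit's diff, of how the decoder above is meant to be driven end to end. The import path openvino.frontend.pytorch.decoder and the toy model are assumptions for illustration; the FrontEnd.load binding and plugin-loader changes it relies on appear in later hunks of this commit.

import torch
from openvino.frontend import FrontEndManager
from openvino.frontend.pytorch.decoder import TorchScriptPythonDecoder  # assumed import path

# Any TorchScript module exposing inlined_graph can be wrapped; this toy model is illustrative only.
scripted = torch.jit.script(torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU()))
decoder = TorchScriptPythonDecoder(scripted)   # wraps scripted.inlined_graph

fem = FrontEndManager()
fe = fem.load_by_framework("pytorch")          # name registered in the plugin loader change below
input_model = fe.load(decoder)                 # load() accepts a decoder object, see the binding change below
ov_model = fe.convert(input_model)             # fully converted ov.Model; throws if any op stays unconverted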

View File

@ -61,10 +61,14 @@ if(TARGET openvino::frontend::paddle)
add_subdirectory(frontend/paddle)
endif()
if(TARGET openvino::frontend::pytorch)
add_subdirectory(frontend/pytorch)
endif()
# create target
file(GLOB_RECURSE SOURCES core/*.cpp graph/*.cpp frontend/*.cpp utils/*cpp pyopenvino.cpp)
list(FILTER SOURCES EXCLUDE REGEX frontend/onnx|tensorflow|paddle/* )
file(GLOB_RECURSE SOURCES core/*.cpp graph/*.cpp frontend/*.cpp utils/*.cpp pyopenvino.cpp)
list(FILTER SOURCES EXCLUDE REGEX frontend/onnx|tensorflow|paddle|pytorch/* )
pybind11_add_module(${PROJECT_NAME} MODULE NO_EXTRAS ${SOURCES})

View File

@ -0,0 +1,15 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "decoder.hpp"
#include "openvino/frontend/decoder.hpp"
namespace py = pybind11;
using namespace ov::frontend;
void regclass_frontend_IDecoder(py::module m) {
py::class_<IDecoder, PyIDecoder, std::shared_ptr<IDecoder>>(m, "_IDecoder");
}

View File

@ -0,0 +1,18 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <pybind11/pybind11.h>
#include "openvino/frontend/decoder.hpp"
namespace py = pybind11;
class PyIDecoder : public ov::frontend::IDecoder {
public:
using IDecoder::IDecoder; // Inherit constructors
};
void regclass_frontend_IDecoder(py::module m);

View File

@ -25,16 +25,21 @@ void regclass_frontend_FrontEnd(py::module m) {
fem.def(
"load",
[](FrontEnd& self, const py::object& path) {
std::string model_path = Common::utils::convert_path_to_string(path);
return self.load(model_path);
[](FrontEnd& self, const py::object& py_obj) {
try {
std::string model_path = Common::utils::convert_path_to_string(py_obj);
return self.load(model_path);
} catch (...) {
// Extended for one argument only for now
return self.load({Common::utils::py_object_to_any(py_obj)});
}
},
py::arg("path"),
R"(
Loads an input model by specified model file path.
Loads an input model.
:param path: Main model file path.
:type path: Union[str, pathlib.Path]
:param path: Object describing the model. It can be a path to the model file.
:type path: Any
:return: Loaded input model.
:rtype: openvino.frontend.InputModel
)");

View File

@ -0,0 +1,6 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include(${pyopenvino_SOURCE_DIR}/frontend/frontend_module.cmake)
frontend_module(py_pytorch_frontend pytorch ${OV_CPACK_COMP_PYTHON_OPENVINO}_${pyversion})

View File

@ -0,0 +1,33 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <pybind11/functional.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl_bind.h>
#include "decoder.hpp"
#include "openvino/frontend/decoder.hpp"
namespace py = pybind11;
using namespace ov::frontend;
using ov::Any;
void regclass_frontend_pytorch_decoder(py::module m) {
py::class_<pytorch::TorchDecoder, IDecoder, PyDecoder, std::shared_ptr<pytorch::TorchDecoder>>(m, "_FrontEndPytorchDecoder")
.def(py::init<>());
auto type_module = m.def_submodule("_Type");
// Register classes for TorchScript type system
py::class_<type::Tensor>(type_module, "Tensor").
def(py::init<Any>());
py::class_<type::List>(type_module, "List").
def(py::init<Any>());
py::class_<type::Str>(type_module, "Str").
def(py::init<>());
}

View File

@ -0,0 +1,106 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <pybind11/pybind11.h>
#include "openvino/frontend/pytorch/decoder.hpp"
namespace py = pybind11;
/// Trampoline class to support inheritance from TorchDecoder in Python
class PyDecoder : public ov::frontend::pytorch::TorchDecoder {
using ov::frontend::pytorch::TorchDecoder::TorchDecoder;
ov::Any const_input(size_t index) const override {
PYBIND11_OVERRIDE_PURE(ov::Any, TorchDecoder, const_input, index);
}
size_t input(size_t index) const override {
PYBIND11_OVERRIDE_PURE(size_t, TorchDecoder, get_input, index);
}
const std::vector<size_t>& inputs() const override {
PYBIND11_OVERRIDE_PURE(const std::vector<size_t>&, TorchDecoder, inputs);
}
ov::PartialShape get_input_shape(size_t index) const override {
PYBIND11_OVERRIDE_PURE(ov::PartialShape, TorchDecoder, get_input_shape, index);
}
ov::Any get_input_type(size_t index) const override {
PYBIND11_OVERRIDE_PURE(ov::Any, TorchDecoder, get_input_type, index);
}
const std::vector<size_t>& get_input_transpose_order(size_t index) const override {
PYBIND11_OVERRIDE_PURE(const std::vector<size_t>&, TorchDecoder, get_input_transpose_order, index);
}
const std::vector<size_t>& get_output_transpose_order(size_t index) const override {
PYBIND11_OVERRIDE_PURE(const std::vector<size_t>&, TorchDecoder, get_output_transpose_order, index);
}
ov::PartialShape get_output_shape(size_t index) const override {
PYBIND11_OVERRIDE_PURE(ov::PartialShape, TorchDecoder, get_output_shape, index);
}
ov::Any get_output_type(size_t index) const override {
PYBIND11_OVERRIDE_PURE(ov::Any, TorchDecoder, get_output_type, index);
}
bool input_is_none(size_t index) const override {
PYBIND11_OVERRIDE_PURE(bool, TorchDecoder, input_is_none, index);
}
ov::OutputVector try_decode_get_attr() const override {
PYBIND11_OVERRIDE_PURE(ov::OutputVector, TorchDecoder, try_decode_get_attr);
}
ov::OutputVector as_constant() const override {
PYBIND11_OVERRIDE_PURE(ov::OutputVector, TorchDecoder, as_constant);
}
const std::string& as_string() const override {
PYBIND11_OVERRIDE_PURE(const std::string&, TorchDecoder, as_string);
}
const std::string& get_op_type() const override {
PYBIND11_OVERRIDE_PURE(const std::string&, TorchDecoder, get_op_type);
}
const std::string& get_schema() const override {
PYBIND11_OVERRIDE_PURE(const std::string&, TorchDecoder, get_schema);
}
size_t num_of_outputs() const override {
PYBIND11_OVERRIDE_PURE(size_t, TorchDecoder, num_of_outputs);
}
const std::vector<size_t>& outputs() const override {
PYBIND11_OVERRIDE_PURE(const std::vector<size_t>&, TorchDecoder, outputs);
}
size_t output(size_t index) const override {
PYBIND11_OVERRIDE_PURE(size_t, TorchDecoder, output, index);
}
std::shared_ptr<ov::Node> mark_node(std::shared_ptr<ov::Node> ov_node) const override {
PYBIND11_OVERRIDE_PURE(std::shared_ptr<ov::Node>, TorchDecoder, mark_node, ov_node);
}
size_t get_subgraph_size() const override {
PYBIND11_OVERRIDE_PURE(size_t, TorchDecoder, get_subgraph_size);
}
void visit_subgraph(std::function<void(std::shared_ptr<TorchDecoder>)> node_visitor) const override {
PYBIND11_OVERRIDE_PURE(void, TorchDecoder, visit_subgraph, node_visitor);
}
std::shared_ptr<TorchDecoder> get_subgraph_decoder(size_t index) const override {
PYBIND11_OVERRIDE_PURE(std::shared_ptr<TorchDecoder>, TorchDecoder, get_subgraph_decoder, index);
}
};
void regclass_frontend_pytorch_decoder(py::module m);

View File

@ -0,0 +1,13 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <pybind11/pybind11.h>
#include "decoder.hpp"
namespace py = pybind11;
PYBIND11_MODULE(py_pytorch_frontend, m) {
regclass_frontend_pytorch_decoder(m);
}

View File

@ -80,6 +80,12 @@ void regclass_graph_op_Constant(py::module m) {
constant.def(py::init<const ov::element::Type&, const ov::Shape&, const std::vector<uint16_t>&>());
constant.def(py::init<const ov::element::Type&, const ov::Shape&, const std::vector<uint32_t>&>());
constant.def(py::init<const ov::element::Type&, const ov::Shape&, const std::vector<uint64_t>&>());
constant.def(py::init([](const ov::element::Type& et, const ov::Shape& sh, int64_t p) {
// restore pointer from integer
// TODO: Align on bit width
void* pp = reinterpret_cast<void*>(p);
return std::make_shared<ov::op::v0::Constant>(et, sh, pp);
}));
constant.def("get_value_strings", &ov::op::v0::Constant::get_value_strings);

View File

@ -34,6 +34,7 @@
#include "pyopenvino/core/tensor.hpp"
#include "pyopenvino/core/variable_state.hpp"
#include "pyopenvino/core/version.hpp"
#include "pyopenvino/frontend/decoder.hpp"
#include "pyopenvino/frontend/extension.hpp"
#include "pyopenvino/frontend/frontend.hpp"
#include "pyopenvino/frontend/input_model.hpp"
@ -235,6 +236,7 @@ PYBIND11_MODULE(_pyopenvino, m) {
regclass_frontend_FrontEnd(m);
regclass_frontend_InputModel(m);
regclass_frontend_NodeContext(m);
regclass_frontend_IDecoder(m);
// frontend extensions
regclass_frontend_TelemetryExtension(m);

View File

@ -12,6 +12,7 @@
#include <vector>
#include "Python.h"
#include "openvino/frontend/decoder.hpp"
namespace Common {
namespace utils {
@ -233,6 +234,14 @@ ov::Any py_object_to_any(const py::object& py_obj) {
return py::cast<ov::streams::Num>(py_obj);
} else if (py::isinstance<ov::Affinity>(py_obj)) {
return py::cast<ov::Affinity>(py_obj);
// FrontEnd Decoder
} else if (py::isinstance<ov::frontend::IDecoder>(py_obj)) {
return py::cast<std::shared_ptr<ov::frontend::IDecoder>>(py_obj);
// Custom FrontEnd Types
} else if (py::isinstance<ov::frontend::type::Tensor>(py_obj)) {
return py::cast<ov::frontend::type::Tensor>(py_obj);
} else if (py::isinstance<ov::frontend::type::List>(py_obj)) {
return py::cast<ov::frontend::type::List>(py_obj);
// If there is no match fallback to py::object
} else if (py::isinstance<py::object>(py_obj)) {
return py_obj;

View File

@ -5,15 +5,16 @@
#pragma once
#include <pybind11/pybind11.h>
#include <openvino/core/any.hpp>
#include <openvino/core/type/element_type.hpp>
#include <openvino/runtime/properties.hpp>
#include "openvino/core/any.hpp"
#include "openvino/core/type/element_type.hpp"
#include "openvino/runtime/properties.hpp"
namespace py = pybind11;
namespace Common {
namespace utils {
py::object from_ov_any(const ov::Any &any);
py::object from_ov_any(const ov::Any& any);
std::map<std::string, ov::Any> properties_to_any_map(const std::map<std::string, py::object>& properties);

View File

@ -365,8 +365,12 @@ InputModel::Ptr FrontEndMockPy::load_impl(const std::vector<ov::Any>& params) co
m_telemetry->send_error("load_impl_error");
m_telemetry->send_stack_trace("mock_stack_trace");
}
if (!params.empty() && params[0].is<std::string>()) {
m_stat.m_load_paths.push_back(params[0].as<std::string>());
if (!params.empty()) {
if (params[0].is<std::string>()) {
m_stat.m_load_paths.push_back(params[0].as<std::string>());
} else {
throw ov::Exception("Only path is supported.");
}
}
return std::make_shared<InputModelMockPy>();

View File

@ -100,7 +100,7 @@ def test_load_wrong_path():
assert fe is not None
with pytest.raises(RuntimeError) as e:
fe.load(TestClass())
assert "Path: 'test class' does not exist. Please provide valid model's path either as a string, bytes or pathlib.Path" in str(e.value)
assert "Only path is supported." in str(e.value)
@mock_needed

View File

@ -119,6 +119,13 @@ LIB_INSTALL_CFG = {
"rpath": LIBS_RPATH,
"binary_dir": OPENVINO_BUILD_DIR,
},
"pytorch_libs": {
"name": "pytorch",
"prefix": "libs.pytorch",
"install_dir": OV_RUNTIME_LIBS_DIR,
"rpath": LIBS_RPATH,
"binary_dir": OPENVINO_BUILD_DIR,
},
"onnx_libs": {
"name": "onnx",
"prefix": "libs.onnx",

View File

@ -58,6 +58,10 @@ public:
return m_attrs.at(key);
}
attrs_t::const_iterator find(const std::string& key) const {
return m_attrs.find(key);
}
bool operator==(const FrameworkNodeAttrs& other) const {
return m_type_name == other.m_type_name && m_opset_name == other.m_opset_name && m_attrs == other.m_attrs;
}

View File

@ -446,7 +446,7 @@ std::set<ov::Input<ov::Node>> ov::Node::get_output_target_inputs(size_t i) const
}
ov::descriptor::Tensor& ov::Node::get_output_tensor(size_t i) const {
NGRAPH_CHECK(i < m_outputs.size(), "index '", i, "' out of range in get_output_tensor(size_t i)");
NGRAPH_CHECK(i < m_outputs.size(), "index '", i, "' out of range in get_output_tensor(size_t i) for node ", *this);
return m_outputs[i].get_tensor();
}

View File

@ -152,14 +152,17 @@ void ov::op::v8::If::validate_and_infer_types() {
auto else_node_result =
m_bodies[ELSE_BODY_INDEX]->get_results().at(else_desc->m_body_value_index)->input_value(0);
element::Type merged_type;
NODE_VALIDATION_CHECK(this,
then_node_result.get_element_type() == else_node_result.get_element_type(),
element::Type::merge(merged_type,
then_node_result.get_element_type(),
else_node_result.get_element_type()),
"type of then_body output is not equal type of else_body output");
// shape inference for output and associated with it body outputs
auto partial_shape =
resolve_shape(then_node_result.get_partial_shape(), else_node_result.get_partial_shape());
set_output_type(output_index, then_node_result.get_element_type(), partial_shape);
set_output_type(output_index, merged_type, partial_shape);
}
}
}

View File

@ -194,7 +194,7 @@ void op::v4::Interpolate::validate_and_infer_types() {
NODE_VALIDATION_CHECK(this,
input_et == element::f32 || input_et == element::f16 || input_et == element::i8 ||
input_et == element::bf16 || input_et == element::u8 || input_et == element::i64 ||
input_et == element::i32,
input_et == element::i32 || input_et == element::dynamic,
"Input element type must be f32, f16, bf16, i8, u8, i64, i32");
element::Type sizes_et = get_input_element_type(1);

View File

@ -42,8 +42,9 @@ void op::v3::ScatterElementsUpdate::validate_and_infer_types() {
NODE_VALIDATION_CHECK(this, axis_et.is_integral(), "Axis element type must be integral_number, but is: ", axis_et);
element::Type merged_type;
NODE_VALIDATION_CHECK(this,
data_et == updates_et,
element::Type::merge(merged_type, data_et, updates_et),
"Data type and updates type are required to be the same. ",
"Got: ",
data_et,

View File

@ -36,10 +36,11 @@ void op::v4::Swish::validate_and_infer_types() {
"Swish must have 1 or 2 inputs, but it has: ",
inputs_count);
auto in_type = get_input_element_type(0);
NODE_VALIDATION_CHECK(this,
get_input_element_type(0).is_real(),
in_type.is_dynamic() || in_type.is_real(),
"Swish input tensor must be floating point type(",
get_input_element_type(0),
in_type,
").");
if (inputs_count == 2) {

View File

@ -157,7 +157,8 @@ void ov::op::util::FrameworkNode::validate_and_infer_types() {
reset_output_shape_to_dynamic = true;
} else {
NODE_VALIDATION_CHECK(this,
m_inputs_desc[i] == std::make_tuple(input_pshape, input_type),
std::get<0>(m_inputs_desc[i]).compatible(input_pshape) &&
std::get<1>(m_inputs_desc[i]).compatible(input_type),
get_error_message());
}
}

View File

@ -16,6 +16,10 @@ if(ENABLE_OV_PADDLE_FRONTEND)
add_subdirectory(paddle)
endif()
if(ENABLE_OV_PYTORCH_FRONTEND)
add_subdirectory(pytorch)
endif()
if(ENABLE_OV_IR_FRONTEND)
add_subdirectory(ir)
endif()

View File

@ -0,0 +1,48 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "openvino/core/any.hpp"
namespace ov {
namespace frontend {
// Extendable type system which reflects Framework data types
// Type nestings are built with the help of ov::Any
namespace type {
struct Tensor {
Tensor() = default;
explicit Tensor(const Any& _element_type) : element_type(_element_type) {}
Any element_type;
};
struct Tuple;
struct List {
List() = default;
// Specifies list of elements of element_type type, all elements have the same given type
explicit List(const Any& _element_type) : element_type(_element_type) {}
Any element_type;
};
struct Str {};
struct Optional;
struct Dict;
struct NamedTuple;
struct Union;
} // namespace type
/// Plays a role of node, block and module decoder
class IDecoder {
public:
virtual ~IDecoder() = default;
};
} // namespace frontend
} // namespace ov

View File

@ -16,6 +16,7 @@ namespace frontend {
class FRONTEND_API NodeContext {
public:
// TODO: Why this ctor is explicit when get_op_type is virtual so m_op_type looks to be a custom implementation
explicit NodeContext(const std::string& op_type) : m_op_type(op_type) {}
virtual ~NodeContext() = default;
@ -87,6 +88,18 @@ public:
/// \brief Returns node attribute by name as ov::Any.
virtual ov::Any get_attribute_as_any(const std::string& name) const = 0;
/// \brief Returns the number of sub-graphs that can be enumerated with get_subgraph
virtual size_t get_subgraph_size() const {
FRONT_END_NOT_IMPLEMENTED(get_subgraph_size);
}
/// \brief Returns subgraph converted on demand by the first access
/// If there is no query for specific sub-graph it shouldn't be converted
/// idx should be in range 0..get_subgraph_size()-1
virtual std::shared_ptr<Model> get_subgraph(int idx) const {
FRONT_END_NOT_IMPLEMENTED(get_subgraph);
}
private:
virtual ov::Any apply_additional_conversion_rules(const ov::Any& data, const std::type_info& type_info) const {
return data;

View File

@ -9,6 +9,7 @@
#include "openvino/frontend/exception.hpp"
#include "openvino/util/env_util.hpp"
#include "openvino/util/log.hpp"
#include "plugin_loader.hpp"
#include "utils.hpp"
@ -49,6 +50,7 @@ public:
{"onnx", "onnx"},
{"tf", "tensorflow"},
{"paddle", "paddle"},
{"pytorch", "pytorch"},
};
auto it = predefined_frontends.find(framework);
std::lock_guard<std::mutex> guard(m_loading_mutex);
@ -79,6 +81,7 @@ public:
std::lock_guard<std::mutex> guard(m_loading_mutex);
for (auto& plugin_info : m_plugins) {
if (!plugin_info.load()) {
OPENVINO_DEBUG << "Frontend load failed: " << plugin_info.m_file_path << "\n";
continue;
}
names.push_back(plugin_info.get_creator().m_name);

View File

@ -0,0 +1,5 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
add_subdirectory(src)

View File

@ -0,0 +1,125 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <memory>
#include "openvino/core/any.hpp"
#include "openvino/core/node.hpp"
#include "openvino/core/node_output.hpp"
#include "openvino/core/partial_shape.hpp"
#include "openvino/frontend/decoder.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
/// Plays a role of node, block and module decoder (kind of temporary fat API)
class TorchDecoder : public IDecoder {
public:
// Do not search for input in tensor map; try to access it as a constant of specified type T and return its value
// Using Any here is an easy way to avoid template definition, returned object is supposed to be of one of the
// fundamental types like int, float etc.
virtual Any const_input(size_t index) const = 0;
// Using size_t for input/output unique ids is in sync with torch code, see def in
// torch/include/torch/csrc/jit/ir/ir.h, Value::unique_
// TODO: set of input and output methods are not aligned; also they are not aligned with the rest of FEs
// Input tensor id
virtual size_t input(size_t index) const = 0;
virtual const std::vector<size_t>& inputs() const = 0;
// ------------------------------
// TODO: physically inputs and outputs refer to PT Values so shape/type is not a property of input/output
// Do we need a separate Decoder for Tensor to request properties of it instead of having an impression
// that inputs/outputs have types and shapes?
// Return shape if the input has torch::Tensor type in the original model, otherwise returns the shape [] of a scalar
virtual PartialShape get_input_shape(size_t index) const = 0;
// Return element::Type when the original type can be represented, otherwise returns a PT-specific data type object
// (see custom_type.hpp)
virtual Any get_input_type(size_t index) const = 0;
// TODO: Consider deleting this method, probably it doesn't make sense outside Torch JIT execution
virtual const std::vector<size_t>& get_input_transpose_order(size_t index) const = 0;
// TODO: Consider deleting this method, probably it doesn't make sense outside Torch JIT execution
virtual const std::vector<size_t>& get_output_transpose_order(size_t index) const = 0;
// Return shape if the input has torch::Tensor type in the original model, otherwise returns the shape [] of a scalar
virtual PartialShape get_output_shape(size_t index) const = 0;
// Return element::Type when the original type can be represented, otherwise returns a PT-specific data type object
// (see custom_type.hpp)
virtual Any get_output_type(size_t index) const = 0;
// ------------------------------
// TODO: required? can be implemented in the context of a single node?
virtual bool input_is_none(size_t index) const = 0;
virtual OutputVector try_decode_get_attr() const = 0;
// Works for natural constant nodes, e.g. prim::Constant; don't know other node kinds that fit
// TODO: why OutputVector instead of just single output?
virtual OutputVector as_constant() const = 0;
// Get string from constant. Works for natural constant nodes, e.g. prim::Constant; don't know other node
// kinds that fit
virtual const std::string& as_string() const = 0;
// Returns PT node kind as a string mnemonics for native type uint32_t Symbol in Torch
// Decide whether we need an equivalent member for integer representation (in this case a map is required to
// understand what it means)
virtual const std::string& get_op_type() const = 0;
// Returns PT node schema as a string
virtual const std::string& get_schema() const = 0;
// TODO: use canonical name output_size
virtual size_t num_of_outputs() const = 0;
// Return a vector of output IDs
virtual const std::vector<size_t>& outputs() const = 0;
// Return a vector of output IDs
virtual size_t output(size_t index) const = 0;
// Embed mapping to/from the original node representation from/to node passed as a parameter
// the representation of this mapping is specific to a particular decoder type and may be a NOP
// returns the same node as syntactically convenient way to make nested sentences in code
virtual std::shared_ptr<Node> mark_node(std::shared_ptr<Node> ov_node) const = 0;
// Call mark_node for each node from the vector
void mark_nodes(std::vector<std::shared_ptr<Node>> ov_nodes) const {
for (auto& ov_node : ov_nodes) {
mark_node(ov_node);
}
}
// Syntactic sugar around mark_node -- just calls it for corresponding node for the passed output port
Output<Node> mark_output(Output<Node> ov_output) const {
mark_node(ov_output.get_node_shared_ptr());
return ov_output;
}
/// \brief Returns the number of sub-graphs that can be enumerated with get_subgraph
virtual size_t get_subgraph_size() const = 0;
/// \brief Returns subgraph converted on demand by the first access
/// If there is no query for specific sub-graph it shouldn't be converted
// node_visitor is a function that is called for every node of the graph or subgraph
virtual void visit_subgraph(std::function<void(std::shared_ptr<TorchDecoder>)> node_visitor) const = 0;
/// Probably this together with the immediate nodes visitor is a replacement for visit_subgraphs with an index
virtual std::shared_ptr<TorchDecoder> get_subgraph_decoder(size_t index) const = 0;
};
} // namespace pytorch
} // namespace frontend
} // namespace ov

View File

@ -0,0 +1,65 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "openvino/frontend/frontend.hpp"
#include "openvino/frontend/pytorch/visibility.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
class PYTORCH_API FrontEnd : public ov::frontend::FrontEnd {
public:
using Ptr = std::shared_ptr<FrontEnd>;
/// \brief Completely convert and normalize entire Model, throws if it is not possible
/// \param model Input model
/// \return fully converted OV Model
std::shared_ptr<Model> convert(const ov::frontend::InputModel::Ptr& model) const override;
/// \brief Completely convert the remaining, not converted part of a Model.
/// \param partiallyConverted partially converted OV Model
void convert(const std::shared_ptr<Model>& partiallyConverted) const override;
/// \brief Convert only those parts of the model that can be converted leaving others
/// as-is. Converted parts are not normalized by additional transformations; normalize
/// function or another form of convert function should be called to finalize the
/// conversion process.
/// \param model Input model
/// \return partially converted OV Model
std::shared_ptr<Model> convert_partially(const InputModel::Ptr& model) const override;
/// \brief Convert operations with one-to-one mapping with decoding nodes.
/// Each decoding node is an OV node representing a single FW operation node with
/// all attributes represented in FW-independent way.
/// \param model Input model
/// \return OV Model after decoding
std::shared_ptr<Model> decode(const InputModel::Ptr& model) const override;
/// \brief Runs normalization passes on Model that was loaded with partial conversion
/// \param Model partially converted OV Model
void normalize(const std::shared_ptr<ov::Model>& model) const override;
/// \brief Gets name of this FrontEnd. Can be used by clients
/// if frontend is selected automatically by FrontEndManager::load_by_model
/// \return PyTorch frontend name.
std::string get_name() const override {
return "pytorch";
}
/// \brief Register base extension in the FrontEnd
/// \param extension base extension
void add_extension(const std::shared_ptr<ov::Extension>& extension) override;
protected:
bool supported_impl(const std::vector<ov::Any>& variants) const override;
ov::frontend::InputModel::Ptr load_impl(const std::vector<ov::Any>& variants) const override;
};
} // namespace pytorch
} // namespace frontend
} // namespace ov

View File

@ -0,0 +1,153 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "openvino/frontend/exception.hpp"
#include "openvino/frontend/node_context.hpp"
#include "openvino/frontend/pytorch/decoder.hpp"
#include "openvino/util/log.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
typedef std::unordered_map<size_t, Output<Node>> TensorMap;
class NodeContext : public frontend::NodeContext {
public:
NodeContext(std::shared_ptr<TorchDecoder> decoder,
TensorMap* tensor_map,
ParameterVector* external_parameters,
const TensorMap& ext_tensor_map)
: // TODO: why the following ctor is explicit?
frontend::NodeContext(decoder->get_op_type()),
m_decoder(decoder),
m_tensor_map(tensor_map),
m_ext_tensor_map(ext_tensor_map),
m_external_parameters(external_parameters) {}
// Do not search for input in tensor map; try to access it as a constant of specified type T and return its value
template <typename T>
T const_input(size_t index) const;
size_t get_input_size() const override {
return m_decoder->inputs().size();
};
// Search for input in tensor map and return an output port for already converted op
// TODO: int because the base class uses it, but naturally it should be size_t for PT
Output<Node> get_input(int index) const override {
FRONT_END_GENERAL_CHECK(!m_decoder->input_is_none(index), "Input is none with index: ", index);
auto input = m_decoder->input(index);
FRONT_END_GENERAL_CHECK(m_tensor_map->count(input), "No tensor corresponding input: ", input, " exist.");
return m_tensor_map->at(input);
}
// TODO: upstream to base class
OutputVector inputs() const {
OutputVector res;
for (size_t input : m_decoder->inputs()) {
FRONT_END_GENERAL_CHECK(m_tensor_map->count(input), "No tensor corresponding index: ", input, " exist.");
res.push_back(m_tensor_map->at(input));
}
return res;
}
bool input_is_none(size_t index) const {
return m_decoder->input_is_none(index);
}
// Convert the resulting value of this node to ov Constant; works correctly only for nodes that produce
// constant value, naturally for prim::Constant
OutputVector as_constant() const {
return m_decoder->as_constant();
}
/*
TODO: Should be uncommented when explicit NodeContext ctor won't require passing op_type
const std::string& get_op_type() const override {
return m_decoder->get_op_type();
}
*/
std::string get_schema() const {
return m_decoder->get_schema();
}
size_t num_of_outputs() const {
return m_decoder->num_of_outputs();
}
std::vector<size_t> outputs() const {
return m_decoder->outputs();
}
std::shared_ptr<Node> mark_node(std::shared_ptr<Node> ov_node) const {
return m_decoder->mark_node(ov_node);
}
void mark_nodes(std::vector<std::shared_ptr<Node>> ov_nodes) const {
return m_decoder->mark_nodes(ov_nodes);
}
Output<Node> mark_output(Output<Node> ov_output) const {
return m_decoder->mark_node(ov_output.get_node_shared_ptr());
}
Any get_attribute_as_any(const std::string&) const override {
throw std::runtime_error(
"There is no any named attributes in PyTorch node, query by attribute name is not implemented");
}
void mutate_input(size_t index, Output<Node> ov_output) {
FRONT_END_GENERAL_CHECK(!m_decoder->input_is_none(index), "Input is none with index: ", index);
auto input = m_decoder->input(index);
FRONT_END_GENERAL_CHECK(m_tensor_map->count(input), "No tensor corresponding input: ", input, " exist.");
m_tensor_map->at(input).get_tensor().set_names({std::to_string(input) + "_"});
// TODO: find out why this doesn't work
ov_output.get_tensor().add_names({std::to_string(input)});
(*m_tensor_map)[input] = ov_output;
m_mutated_tensors.insert(input);
}
std::set<size_t> get_mutated_tensors() const {
return m_mutated_tensors;
}
std::shared_ptr<TorchDecoder> get_decoder() const {
return m_decoder;
}
void add_tensor_to_context(size_t index, Output<Node> ov_output) {
if (m_tensor_map->count(index)) {
OPENVINO_DEBUG << "[ WARNING ] Current context has tensor. Rewriting.\n";
}
ov_output.get_tensor().add_names({std::to_string(index)});
(*m_tensor_map)[index] = ov_output;
}
Output<Node> get_tensor_from_model(size_t index) const {
if (m_tensor_map->find(index) != m_tensor_map->end()) {
return m_tensor_map->at(index);
} else {
return Output<Node>();
}
}
Output<Node> get_tensor_from_model_or_create_input(size_t index);
Output<Node> get_input_from_visible_context(size_t index) const;
std::shared_ptr<ov::Model> convert_subgraph(size_t index);
private:
std::shared_ptr<TorchDecoder> m_decoder;
std::set<size_t> m_mutated_tensors;
TensorMap* m_tensor_map;
const TensorMap& m_ext_tensor_map;
ParameterVector* m_external_parameters;
};
} // namespace pytorch
} // namespace frontend
} // namespace ov

View File

@ -0,0 +1,20 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "openvino/frontend/visibility.hpp"
#ifdef OPENVINO_STATIC_LIBRARY
# define PYTORCH_API
# define PYTORCH_C_API
#else
# ifdef openvino_pytorch_frontend_EXPORTS
# define PYTORCH_API OPENVINO_CORE_EXPORTS
# define PYTORCH_C_API OPENVINO_EXTERN_C OPENVINO_CORE_EXPORTS
# else
# define PYTORCH_API OPENVINO_CORE_IMPORTS
# define PYTORCH_C_API OPENVINO_EXTERN_C OPENVINO_CORE_IMPORTS
# endif // openvino_pytorch_frontend_EXPORTS
#endif // OPENVINO_STATIC_LIBRARY

View File

@ -0,0 +1,9 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
ov_add_frontend(NAME pytorch
LINKABLE_FRONTEND
SHUTDOWN_PROTOBUF
FILEDESCRIPTION "FrontEnd to load and convert TorchScript models from PyTorch"
LINK_LIBRARIES openvino::util openvino::runtime::dev)

View File

@ -0,0 +1,135 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/frontend.hpp"
#include "input_model.hpp"
#include "openvino/op/util/multi_subgraph_base.hpp"
#include "openvino/pass/constant_folding.hpp"
#include "openvino/util/log.hpp"
#include "pt_framework_node.hpp"
#include "transformations/control_flow/unroll_if.hpp"
#include "transforms.hpp"
#include "transforms/append_list_unpack_replacer.hpp"
#include "transforms/aten_cat_replacer.hpp"
#include "transforms/aten_getitem_replacer.hpp"
#include "transforms/max_prim_list_construct_replacer.hpp"
#include "transforms/prim_list_unpack_replacer.hpp"
#include "transforms/prim_tuple_construct_replacer.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace {
std::set<std::string> get_unconverted_types_from_model(const std::shared_ptr<Model>& model) {
std::set<std::string> unconverted_ops_types;
for (const auto& node : model->get_ordered_ops()) {
if (const auto& fw_node = ov::as_type_ptr<PtFrameworkNode>(node)) {
auto op_type = fw_node->get_decoder()->get_op_type();
unconverted_ops_types.insert(op_type);
}
if (const auto& fw_node = ov::as_type_ptr<ov::op::util::MultiSubGraphOp>(node)) {
for (int i = 0; i < fw_node->get_internal_subgraphs_size(); i++) {
auto internal_types = get_unconverted_types_from_model(fw_node->get_function(i));
unconverted_ops_types.insert(internal_types.begin(), internal_types.end());
}
}
}
return unconverted_ops_types;
}
} // namespace
std::shared_ptr<Model> FrontEnd::convert(const InputModel::Ptr& model) const {
auto converted_model = convert_partially(model);
normalize(converted_model);
std::set<std::string> unconverted_ops_types = get_unconverted_types_from_model(converted_model);
std::stringstream ops_str;
for (auto&& op_type : unconverted_ops_types) {
ops_str << op_type << '\n';
}
FRONT_END_OP_CONVERSION_CHECK(unconverted_ops_types.size() == 0,
"Model wasn't fully converted. Unconverted operation types:\n" + ops_str.str());
return converted_model;
}
void FrontEnd::convert(const std::shared_ptr<Model>& partiallyConverted) const {
FRONT_END_NOT_IMPLEMENTED(convert);
}
std::shared_ptr<Model> FrontEnd::convert_partially(const ov::frontend::InputModel::Ptr& model) const {
try {
auto pytorch_model = std::dynamic_pointer_cast<pytorch::InputModel>(model);
auto converted_model = convert_pytorch_model(pytorch_model->m_model);
return converted_model;
} catch (const std::runtime_error& e) {
std::cerr << "[ ERROR ] Unexpected error while converting pytorch model: " << e.what() << '\n';
std::cerr << "Rethrowing. Misleading error message from pybind11 may come next. TODO.";
throw;
}
}
std::shared_ptr<Model> FrontEnd::decode(const InputModel::Ptr& model) const {
FRONT_END_NOT_IMPLEMENTED(decode);
}
void FrontEnd::normalize(const std::shared_ptr<ov::Model>& model) const {
ov::pass::Manager manager;
manager.register_pass<ov::pass::ConstantFolding>();
manager.register_pass<ov::pass::UnrollIf>();
// Have to run UnrollIf second time, because conditions are defined outside of nested If (ticket 98155)
manager.register_pass<ov::pass::UnrollIf>();
manager.register_pass<ov::frontend::pytorch::pass::AtenCatToConcat>();
manager.register_pass<ov::frontend::pytorch::pass::AppendListUnpackReplacer>();
manager.register_pass<ov::frontend::pytorch::pass::PrimListUnpackReplacer>();
manager.register_pass<ov::frontend::pytorch::pass::AtenGetItemReplacer>();
manager.register_pass<ov::frontend::pytorch::pass::MaxPrimListConstructReplacer>();
manager.register_pass<ov::frontend::pytorch::pass::DecomposeTupleResults>();
manager.register_pass<ov::pass::ConstantFolding>();
manager.run_passes(model);
apply_pytorch_conversion_transforms(model);
// Usually, when nn.Module.forward is given as a source model for conversion, the first Parameter represents
// the original `self` argument of forward(self, ...). `self` shouldn't play any role in model inference if the
// model is completely frozen and all methods are inlined. So we check that it has no consumers in the finally
// converted model and remove this parameter. This parameter should have index 0.
if (model->get_parameters().size() > 0) {
auto self = model->get_parameters()[0];
if (self->output(0).get_target_inputs().empty()) {
// There are no consumers: safe to remove
OPENVINO_DEBUG << "[ WARNING ] Removing parameter[0] in converted PyTorch model, because it is never used "
"and treated as `self`\n";
model->remove_parameter(self);
} else {
OPENVINO_DEBUG << "[ WARNING ] Couldn't remove parameter[0] in converted PyTorch model\n";
}
}
}
void FrontEnd::add_extension(const std::shared_ptr<ov::Extension>& extension) {
FRONT_END_NOT_IMPLEMENTED(add_extension);
}
bool FrontEnd::supported_impl(const std::vector<ov::Any>& variants) const {
return false;
}
ov::frontend::InputModel::Ptr FrontEnd::load_impl(const std::vector<ov::Any>& variants) const {
FRONT_END_GENERAL_CHECK(variants.size() == 1,
"PyTorch Frontend supports exactly one parameter in model representation, got ",
std::to_string(variants.size()),
" instead.");
auto decoder = variants[0].as<std::shared_ptr<IDecoder>>();
auto tdecoder = std::dynamic_pointer_cast<TorchDecoder>(decoder);
FRONT_END_GENERAL_CHECK(tdecoder, "Couldn't cast ov::Any to TorchDecoder");
return std::make_shared<pytorch::InputModel>(tdecoder);
}
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,25 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "openvino/frontend/pytorch/decoder.hpp"
#include "openvino/frontend/pytorch/frontend.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
class InputModel : public ov::frontend::InputModel {
friend class FrontEnd;
std::shared_ptr<TorchDecoder> m_model;
public:
explicit InputModel(std::shared_ptr<TorchDecoder> model) : m_model(model) {}
// TODO: pass telemetry extension to this ctor
};
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,136 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/frontend/exception.hpp"
#include "openvino/frontend/pytorch/decoder.hpp"
#include "openvino/opsets/opset10.hpp"
#include "openvino/util/log.hpp"
#include "pt_framework_node.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
Output<Node> NodeContext::get_tensor_from_model_or_create_input(size_t index) {
if (m_tensor_map->find(index) != m_tensor_map->end()) {
return m_tensor_map->at(index);
} else {
// nested subgraphs case
auto parameter = std::make_shared<opset10::Parameter>(element::dynamic, PartialShape::dynamic());
parameter->get_output_tensor(0).add_names({std::to_string(index)});
(*m_tensor_map)[index] = parameter;
m_external_parameters->push_back(parameter);
OPENVINO_DEBUG << "Nested case, created: " << parameter << '\n';
return parameter;
}
}
Output<Node> NodeContext::get_input_from_visible_context(size_t index) const {
FRONT_END_GENERAL_CHECK(index < get_input_size(), "Index should be less than the number of inputs.");
auto input_tensor = get_input(static_cast<int>(index));
auto input_node = input_tensor.get_node_shared_ptr();
if (std::dynamic_pointer_cast<opset10::Parameter>(input_node)) {
// We need to look into the external context for inputs that would be fed into this parameter
auto name = input_node->get_output_tensor(0).get_any_name();
size_t tensor_idx = (size_t)std::stoll(name);
if (m_ext_tensor_map.count(tensor_idx)) {
input_tensor = m_ext_tensor_map.at(tensor_idx);
}
}
return input_tensor;
}
std::shared_ptr<ov::Model> NodeContext::convert_subgraph(size_t index) {
auto subgraph_decoder = m_decoder->get_subgraph_decoder(index);
// Extend external context with internal tensors except Parameter nodes, because internal Parameters are created to
// link internal context with external
TensorMap ext_map(m_ext_tensor_map);
// map::insert does not update elements if their key is already in the map; so if we have real tensors in the
// outer scope we will not add the Parameters we created in the inner scope.
ext_map.insert(m_tensor_map->begin(), m_tensor_map->end());
auto model = convert_pytorch_model(subgraph_decoder, ext_map);
// Remove unused parameters; they could have been created as inputs to parts of the graph that weren't
// used for generating the output.
for (auto i = subgraph_decoder->inputs().size(); i < model->get_parameters().size(); i++) {
auto parameter = model->get_parameters()[i];
if (parameter->output(0).get_target_inputs().empty()) {
// There are no consumers: safe to remove
OPENVINO_DEBUG << "Removing parameter " << parameter
<< " in converted PyTorch model, because it is never used\n";
model->remove_parameter(parameter);
}
}
return model;
}
namespace {
std::shared_ptr<opset10::Constant> get_constant_at_input(const NodeContext& ctx, size_t index) {
FRONT_END_GENERAL_CHECK(!ctx.input_is_none(index), "Input with index: ", index, " is none.");
auto input_node = ctx.get_input_from_visible_context(index).get_node_shared_ptr();
auto input = std::dynamic_pointer_cast<opset10::Constant>(input_node);
FRONT_END_GENERAL_CHECK(input, "Input with index ", index, " cannot be interpreted as Constant: ", input_node);
return input;
}
} // namespace
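// Specializations of NodeContext::const_input for the types used by translators; each one fetches the producing
// Constant from the visible context and casts its data to the requested type.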
template <>
std::vector<int64_t> NodeContext::const_input<std::vector<int64_t>>(size_t index) const {
return get_constant_at_input(*this, index)->cast_vector<int64_t>();
}
template <>
ngraph::Strides NodeContext::const_input<ngraph::Strides>(size_t index) const {
return get_constant_at_input(*this, index)->cast_vector<ngraph::Strides::value_type>();
}
template <>
ngraph::CoordinateDiff NodeContext::const_input<ngraph::CoordinateDiff>(size_t index) const {
return get_constant_at_input(*this, index)->cast_vector<ngraph::CoordinateDiff::value_type>();
}
template <>
ngraph::Shape NodeContext::const_input<ngraph::Shape>(size_t index) const {
return get_constant_at_input(*this, index)->cast_vector<ngraph::Shape::value_type>();
}
template <>
int64_t NodeContext::const_input<int64_t>(size_t index) const {
return get_constant_at_input(*this, index)->cast_vector<int64_t>()[0];
}
template <>
bool NodeContext::const_input<bool>(size_t index) const {
return get_constant_at_input(*this, index)->cast_vector<bool>()[0];
}
template <>
double NodeContext::const_input<double>(size_t index) const {
return get_constant_at_input(*this, index)->cast_vector<double>()[0];
}
template <>
float NodeContext::const_input<float>(size_t index) const {
return get_constant_at_input(*this, index)->cast_vector<float>()[0];
}
template <>
std::string NodeContext::const_input<std::string>(size_t index) const {
FRONT_END_GENERAL_CHECK(!input_is_none(index), "Input with index: ", index, " is none.");
auto input_node = get_input_from_visible_context(index).get_node_shared_ptr();
auto input = std::dynamic_pointer_cast<PtFrameworkNode>(input_node);
FRONT_END_GENERAL_CHECK(input,
"Input node with index ",
index,
" cannot be interpreted as FrameworkNode with string constant: ",
input_node);
return input->get_decoder()->as_string();
}
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,38 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
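// aten::adaptive_avg_pool3d(Tensor self, int[3] output_size). The Tile with a rank-5 repeats vector of ones
// promotes an unbatched 4D input to 5D so AdaptiveAvgPool sees N,C,D,H,W; the final Reshape restores the original
// leading dims followed by the requested output size.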
OutputVector translate_adaptive_avg_pool3d(NodeContext& context) {
auto const_tile_params = context.mark_node(opset10::Constant::create(element::i32, Shape{5}, {1, 1, 1, 1, 1}));
auto const_0 = context.mark_node(opset10::Constant::create(element::i32, Shape{1}, {0}));
auto const_1 = context.mark_node(opset10::Constant::create(element::i32, Shape{1}, {1}));
auto const_neg_3 = context.mark_node(opset10::Constant::create(element::i32, Shape{1}, {-3}));
auto input_tensor = context.get_input(0);
auto given_shape = context.get_input(1);
auto input_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(input_tensor, element::i32));
auto shape_begin =
context.mark_node(std::make_shared<opset10::Slice>(input_shape, const_0, const_neg_3, const_1, const_0));
auto output_shape = context.mark_node(std::make_shared<opset10::Concat>(OutputVector{shape_begin, given_shape}, 0));
auto tile = context.mark_node(std::make_shared<opset10::Tile>(input_tensor, const_tile_params));
auto adaptive_avg_pool = context.mark_node(std::make_shared<opset10::AdaptiveAvgPool>(tile, given_shape));
auto reshape = context.mark_node(std::make_shared<opset10::Reshape>(adaptive_avg_pool, output_shape, false));
return {reshape};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,24 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_adaptive_max_pool2d(NodeContext& context) {
auto x = context.get_input(0);
auto y = context.get_input(1);
auto adaptive_max_pool = context.mark_node(std::make_shared<opset10::AdaptiveMaxPool>(x, y, ov::element::i32));
return {adaptive_max_pool->output(0), adaptive_max_pool->output(1)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,26 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
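// aten::add(Tensor self, Tensor other, Scalar alpha=1) computes self + alpha * other; when alpha is provided it is
// converted to the dtype of `other` before the multiplication.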
OutputVector translate_add(NodeContext& context) {
auto rhs = context.get_input(1);
if (!context.input_is_none(2)) {
auto converted_alpha = std::make_shared<opset10::ConvertLike>(context.get_input(2), rhs);
rhs = std::make_shared<opset10::Multiply>(converted_alpha, rhs);
}
return {context.mark_node(std::make_shared<opset10::Add>(context.get_input(0), rhs))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,28 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <climits>
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
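// aten::addcmul(Tensor self, Tensor tensor1, Tensor tensor2, Scalar value=1) computes
// self + value * tensor1 * tensor2; value is converted to the input dtype before scaling.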
OutputVector translate_addcmul(NodeContext& context) {
const auto eltwise_mult = std::make_shared<opset10::Multiply>(context.get_input(1), context.get_input(2));
const auto value = context.get_input(3);
const auto converted_value = std::make_shared<opset10::ConvertLike>(value, context.get_input(1));
const auto scalar_mult = std::make_shared<opset10::Multiply>(eltwise_mult, converted_value);
context.mark_nodes({eltwise_mult, converted_value, scalar_mult});
return {context.mark_node(std::make_shared<opset10::Add>(context.get_input(0), scalar_mult))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,31 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
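// aten::addmm(Tensor self, Tensor mat1, Tensor mat2, Scalar beta=1, Scalar alpha=1) computes
// beta * self + alpha * (mat1 @ mat2); beta and alpha may have a dtype different from the inputs, hence ConvertLike.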
OutputVector translate_addmm(NodeContext& context) {
auto input = context.get_input(0);
auto m1 = context.get_input(1);
auto m2 = context.get_input(2);
auto beta = context.get_input(3);
auto alpha = context.get_input(4);
auto beta_converted = context.mark_node(std::make_shared<opset10::ConvertLike>(beta, input));
auto mm = context.mark_node(std::make_shared<opset10::MatMul>(m1, m2));
auto alpha_converted = context.mark_node(std::make_shared<opset10::ConvertLike>(alpha, mm));
auto input_beta = context.mark_node(std::make_shared<opset10::Multiply>(input, beta_converted));
auto mm_alpha = context.mark_node(std::make_shared<opset10::Multiply>(mm, alpha_converted));
return {context.mark_node(std::make_shared<opset10::Add>(input_beta, mm_alpha))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,80 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_arange(NodeContext& context) {
auto zero = context.mark_node(opset10::Constant::create(element::i32, Shape{}, {0}));
auto one = context.mark_node(opset10::Constant::create(element::i32, Shape{}, {1}));
auto dtype = element::f32;
bool dtype_applied = false;
auto num_inputs = context.get_input_size();
ov::Output<Node> end;
ov::Output<Node> out_tensor;
ov::Output<Node> start = zero;
ov::Output<Node> step = one;
// aten::arange(Scalar end, Tensor out)
if (num_inputs == 2) {
end = context.get_input(0);
out_tensor = context.input_is_none(1) ? end : context.get_input(1);
}
// aten::arange(Scalar start, Scalar end, Scalar step, Tensor out)
if (num_inputs == 4) {
start = context.get_input(0);
end = context.get_input(1);
step = context.get_input(2);
out_tensor = context.input_is_none(3) ? end : context.get_input(3);
}
// aten::arange(Scalar end, ScalarType dtype, Layout, Device, bool pin_memory)
if (num_inputs == 5) {
end = context.get_input(0);
out_tensor = end;
if (!context.input_is_none(1)) {
dtype = convert_dtype(context.const_input<int64_t>(1));
dtype_applied = true;
}
}
// aten::arange(Scalar start, Scalar end, ScalarType dtype, Layout, Device, bool pin_memory)
if (num_inputs == 6) {
start = context.get_input(0);
end = context.get_input(1);
out_tensor = end;
if (!context.input_is_none(2)) {
dtype = convert_dtype(context.const_input<int64_t>(2));
dtype_applied = true;
}
}
// aten::arange(Scalar start, Scalar end, Scalar step, ScalarType dtype, Layout, Device, bool pin_memory)
if (num_inputs == 7) {
start = context.get_input(0);
end = context.get_input(1);
step = context.get_input(2);
out_tensor = end;
if (!context.input_is_none(3)) {
dtype = convert_dtype(context.const_input<int64_t>(3));
dtype_applied = true;
}
}
auto r_end = context.mark_node(std::make_shared<opset10::Convert>(end, dtype));
auto r_start = context.mark_node(std::make_shared<opset10::Convert>(start, dtype));
auto r_step = context.mark_node(std::make_shared<opset10::Convert>(step, dtype));
auto range = context.mark_node(std::make_shared<opset10::Range>(r_start, r_end, r_step, dtype));
if (!dtype_applied) {
range = context.mark_node(std::make_shared<opset10::ConvertLike>(range, out_tensor));
}
return {range};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,39 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "pt_framework_node.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
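// aten::as_tensor(data, dtype, device): the optional dtype (input 1) can come either from a prim::dtype node
// (then ConvertLike to that tensor's type) or from a constant torch dtype id (then Convert to the mapped type).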
OutputVector translate_as_tensor(NodeContext& context) {
auto dtype = element::f32;
Output<Node> cast;
if (!context.input_is_none(1)) {
auto dtype_ext_node = context.get_input_from_visible_context(1).get_node_shared_ptr();
auto dtype_fw_node = std::dynamic_pointer_cast<PtFrameworkNode>(dtype_ext_node);
if (dtype_fw_node && dtype_fw_node->get_op_type() == "prim::dtype") {
auto type_input = dtype_fw_node->input_value(0);
return {context.mark_node(std::make_shared<opset10::ConvertLike>(context.get_input(0), type_input))};
}
if (auto dtype_const = std::dynamic_pointer_cast<opset10::Constant>(dtype_ext_node)) {
auto pt_type = dtype_const->cast_vector<int64_t>()[0];
dtype = convert_dtype(pt_type);
}
}
cast = context.mark_node(std::make_shared<opset10::Convert>(context.get_input(0), dtype));
// Input with index 2 is the device; we skip this input
return {cast};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,50 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_avg_poolnd(NodeContext& context) {
auto input = context.get_input(0);
auto kernel = context.const_input<Shape>(1);
auto strides = context.const_input<Strides>(2);
auto pads = context.const_input<Shape>(3);  // PyTorch supports only symmetric padding
auto rounding_type = context.const_input<bool>(4) ? ov::op::RoundingType::CEIL : ov::op::RoundingType::FLOOR;
auto count_include_pad = context.const_input<bool>(5);
FRONT_END_OP_CONVERSION_CHECK(context.input_is_none(6),
"Translation for aten::avg_pool2d does not support the divisor_override input.");
// Although ov::AvgPool provides exclude_pad=false, PyTorch's average pooling with ceil_mode allows the sliding
// window to go off-bound in a corner case, which leads to this accommodation.
// More detail: https://github.com/pytorch/pytorch/issues/57178
if (count_include_pad) {
auto zero = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {0}));
auto zero_i32 = context.mark_node(opset10::Constant::create(element::i32, Shape{}, {0}));
auto shape = context.mark_node(std::make_shared<opset10::ShapeOf>(input, element::i32));
auto rank = context.mark_node(std::make_shared<opset10::ShapeOf>(shape, element::i32));
auto pad_values = context.get_input(3);
auto pads_len = context.mark_node(opset10::Constant::create(element::i32, Shape{}, {pads.size()}));
auto pads_diff = context.mark_node(std::make_shared<opset10::Subtract>(rank, pads_len));
auto pads_remaining = context.mark_node(std::make_shared<opset10::Broadcast>(zero_i32, pads_diff));
auto padding = context.mark_node(
std::make_shared<opset10::Concat>(NodeVector{pads_remaining, pad_values.get_node_shared_ptr()}, 0));
input =
context.mark_node(std::make_shared<opset10::Pad>(input, padding, padding, zero, ov::op::PadMode::CONSTANT));
pads = Shape(pads.size(), 0);
}
return {context.mark_node(
std::make_shared<opset10::AvgPool>(input, strides, pads, pads, kernel, !count_include_pad, rounding_type))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,57 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
namespace {
Output<Node> broadcast_const_to_channel_dim(NodeContext& context, Output<Node> input, Output<Node> value) {
auto input_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(input));
auto zero_i = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {0}));
auto one_i = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {1}));
auto channel_dim = context.mark_node(std::make_shared<opset10::Gather>(input_shape, one_i, zero_i));
auto channel_dim_exp = context.mark_node(std::make_shared<opset10::Unsqueeze>(channel_dim, zero_i));
return context.mark_node(std::make_shared<opset10::Broadcast>(value, channel_dim_exp));
}
} // namespace
OutputVector translate_batch_norm(NodeContext& context) {
// Schema: aten::batch_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var,
// bool training, float momentum, float eps, bool cudnn_enabled) -> Tensor
auto input = context.get_input(0);
Output<Node> weight;
Output<Node> bias;
if (!context.input_is_none(1)) {
weight = context.get_input(1);
} else {
auto one_f = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {1}));
weight = broadcast_const_to_channel_dim(context, input, one_f);
}
if (!context.input_is_none(2)) {
bias = context.get_input(2);
} else {
auto zero_f = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {0}));
bias = broadcast_const_to_channel_dim(context, input, zero_f);
}
// Inputs 3 (running_mean) and 4 (running_var) can be None only in the training case, so check that we are not in
// training mode before reading them
auto training = context.const_input<bool>(5);
FRONT_END_OP_CONVERSION_CHECK(!training, "Translation for aten::batch_norm does not support training mode.");
auto running_mean = context.get_input(3);
auto running_var = context.get_input(4);
// Input with index 6 is momentum; it is used only in training mode
auto epsilon = context.const_input<float>(7);
return {context.mark_node(
std::make_shared<opset10::BatchNormInference>(input, weight, bias, running_mean, running_var, epsilon))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,32 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
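// aten::clamp(Tensor self, Scalar? min, Scalar? max): both bounds are optional, so each one is applied only when the
// corresponding input is present.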
OutputVector translate_clamp(NodeContext& context) {
auto x = context.get_input(0);
if (!context.input_is_none(1)) {
auto min_clip = context.get_input(1);
min_clip = context.mark_node(std::make_shared<opset10::ConvertLike>(min_clip, x));
x = context.mark_node(std::make_shared<opset10::Maximum>(x, min_clip));
}
if (!context.input_is_none(2)) {
auto max_clip = context.get_input(2);
max_clip = context.mark_node(std::make_shared<opset10::ConvertLike>(max_clip, x));
x = context.mark_node(std::make_shared<opset10::Minimum>(x, max_clip));
}
return {x};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,21 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_constant(NodeContext& context) {
return context.as_constant();
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,62 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_convnd(NodeContext& context) {
auto strides = context.const_input<Strides>(3);
// In torch, pads at the beginning are the same as at the end
auto pads = CoordinateDiff(strides.size(), 0);
auto pad_type = ov::op::PadType::EXPLICIT;
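// Input 4 (padding) can be either a string mode (e.g. "same"/"valid") or explicit integer pads; try the string
// form first and fall back to explicit pads.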
try {
auto pad_mode = context.const_input<std::string>(4);
pad_type = convert_pad(pad_mode);
} catch (const ov::frontend::GeneralFailure&) {
pads = context.const_input<CoordinateDiff>(4);
}
auto dilations = context.const_input<Strides>(5);
auto groups = context.const_input<int64_t>(6);
std::shared_ptr<ov::Node> conv;
if (groups == 1) {
conv = std::make_shared<opset10::Convolution>(context.get_input(0),
context.get_input(1),
strides,
pads,
pads,
dilations,
pad_type);
} else {
conv = std::make_shared<opset10::GroupConvolution>(
context.get_input(0),
reshape_kernel_for_group(context, context.get_input(0), context.get_input(1), groups),
strides,
pads,
pads,
dilations,
pad_type);
}
if (!context.input_is_none(2)) {
auto bias = context.get_input(2);
auto bias_rank = bias.get_partial_shape().rank();
if (bias_rank == 1) {
bias = reshape_conv_bias(context, bias, conv);
}
conv = context.mark_node(std::make_shared<opset10::Add>(conv, bias));
}
return {conv};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,84 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_convolution(NodeContext& context) {
// Schema: aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[]
// dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool
// cudnn_enabled, bool allow_tf32) -> Tensor
auto strides = context.const_input<Strides>(3);
auto pads = context.const_input<CoordinateDiff>(4);
auto dilations = context.const_input<Strides>(5);
bool transposed = context.const_input<bool>(6);
auto output_padding = context.const_input<CoordinateDiff>(7);
auto groups = context.const_input<int64_t>(8);
std::shared_ptr<ov::Node> conv;
if (groups == 1) {
if (!transposed) {
conv = context.mark_node(std::make_shared<opset10::Convolution>(context.get_input(0),
context.get_input(1),
strides,
pads,
pads,
dilations));
} else {
conv = context.mark_node(std::make_shared<opset10::ConvolutionBackpropData>(context.get_input(0),
context.get_input(1),
strides,
pads,
pads,
dilations,
ov::op::PadType::EXPLICIT,
output_padding));
}
} else {
if (!transposed) {
conv = context.mark_node(std::make_shared<opset10::GroupConvolution>(
context.get_input(0),
context.mark_output(
reshape_kernel_for_group(context, context.get_input(0), context.get_input(1), groups)),
strides,
pads,
pads,
dilations));
} else {
conv = context.mark_node(std::make_shared<opset10::GroupConvolutionBackpropData>(
context.get_input(0),
context.mark_output(
reshape_kernel_for_group(context, context.get_input(0), context.get_input(1), groups)),
strides,
pads,
pads,
dilations,
ov::op::PadType::EXPLICIT,
output_padding));
}
}
if (!context.input_is_none(2)) {
auto bias = context.get_input(2);
auto bias_rank = bias.get_partial_shape().rank();
if (bias_rank == 1) {
bias = reshape_conv_bias(context, bias, conv);
}
conv = context.mark_node(std::make_shared<opset10::Add>(conv, bias));
}
return {context.mark_output(conv)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,60 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_convolution_mode(NodeContext& context) {
// Schema: aten::_convolution_mode(Tensor input, Tensor weight, Tensor? bias, int[] stride, str padding, int[]
// dilation, int groups) -> Tensor
auto strides = context.const_input<Strides>(3);
auto pad_mode = context.const_input<std::string>(4);
auto dilations = context.const_input<Strides>(5);
auto groups = context.const_input<int64_t>(6);
auto pad_const = CoordinateDiff(strides.size(), 0);
auto auto_pad_mode = convert_pad(pad_mode);
std::shared_ptr<ov::Node> conv;
if (groups == 1) {
conv = context.mark_node(std::make_shared<opset10::Convolution>(context.get_input(0),
context.get_input(1),
strides,
pad_const,
pad_const,
dilations,
auto_pad_mode));
} else {
conv = context.mark_node(std::make_shared<opset10::GroupConvolution>(
context.get_input(0),
context.mark_output(reshape_kernel_for_group(context, context.get_input(0), context.get_input(1), groups)),
strides,
pad_const,
pad_const,
dilations,
auto_pad_mode));
}
if (!context.input_is_none(2)) {
auto bias = context.get_input(2);
auto bias_rank = bias.get_partial_shape().rank();
if (bias_rank == 1) {
bias = reshape_conv_bias(context, bias, conv);
}
conv = context.mark_node(std::make_shared<opset10::Add>(conv, bias));
}
return {context.mark_output(conv)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,25 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
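// aten::dim returns the tensor rank as a scalar: ShapeOf(ShapeOf(x)) yields the rank as a 1D tensor and Squeeze
// turns it into a scalar.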
OutputVector translate_dim(NodeContext& context) {
auto shape = std::make_shared<opset10::ShapeOf>(context.get_input(0), element::i32);
auto rank = std::make_shared<opset10::ShapeOf>(shape, element::i32);
auto squeeze = std::make_shared<opset10::Squeeze>(rank);
context.mark_nodes({shape, rank, squeeze});
return squeeze->outputs();
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,38 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
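// aten::div(Tensor self, Tensor other, str? rounding_mode): rounding_mode may be None, "floor" or "trunc";
// "trunc" is emulated by converting the true-division result to i64 and back to the input type.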
OutputVector translate_div(NodeContext& context) {
auto x = context.get_input(0);
auto y = context.get_input(1);
auto res = context.mark_node(std::make_shared<opset10::Divide>(x, y, true));
if (!context.input_is_none(2)) {
auto rounding_mode = context.const_input<std::string>(2);
if (rounding_mode == "floor") {
res = context.mark_node(std::make_shared<opset10::Floor>(res));
} else if (rounding_mode == "trunc") {
const auto convert = context.mark_node(std::make_shared<opset10::Convert>(res, element::i64));
res = context.mark_node(std::make_shared<opset10::ConvertLike>(convert, x));
} else {
FRONT_END_OP_CONVERSION_CHECK(false,
"Openvino Pytorch Frontend doesn't support rounding mode ",
rounding_mode,
" for aten::div");
}
}
return {res};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,23 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_elu(NodeContext& context) {
auto x = context.get_input(0);
auto alpha = context.const_input<float>(1);
return {context.mark_node(std::make_shared<opset10::Elu>(x, alpha))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,28 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_embedding(NodeContext& context) {
auto data = context.get_input(0);
auto indices = context.get_input(1);
// TODO: find out the meaning of input idx 2
FRONT_END_OP_CONVERSION_CHECK(
!context.const_input<bool>(3) && !context.const_input<bool>(4),
"Only False is supported on inputs with indices 3 and 4 for aten::embedding translation");
auto axis_0 = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {0}));
return {context.mark_node(std::make_shared<opset10::Gather>(data, indices, axis_0))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,43 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
namespace {
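// In aten::expand sizes, -1 means "keep the dimension of the input"; replace -1 entries with 1 so the bidirectional
// Broadcast preserves those dimensions.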
OutputVector base_expand(NodeContext& context, ov::Output<ov::Node> x, ov::Output<ov::Node> sizes) {
auto one = context.mark_node(opset10::Constant::create(element::i32, Shape{}, {1}));
auto sizes_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(sizes, element::i32));
auto neg_one = context.mark_node(opset10::Constant::create(element::i32, Shape{}, {-1}));
auto neg_ones = context.mark_node(std::make_shared<opset10::Broadcast>(neg_one, sizes_shape));
auto ones = context.mark_node(std::make_shared<opset10::Broadcast>(one, sizes_shape));
auto neg_sizes = context.mark_node(std::make_shared<opset10::Equal>(sizes, neg_ones));
auto shape = context.mark_node(std::make_shared<opset10::Select>(neg_sizes, ones, sizes));
return {std::make_shared<opset10::Broadcast>(x, shape, ov::op::BroadcastType::BIDIRECTIONAL)};
};
} // namespace
OutputVector translate_expand(NodeContext& context) {
auto x = context.get_input(0);
auto sizes = context.get_input(1);
return base_expand(context, x, sizes);
};
OutputVector translate_expand_as(NodeContext& context) {
auto x = context.get_input(0);
auto y = context.get_input(1);
auto sizes = context.mark_node(std::make_shared<opset10::ShapeOf>(y, element::i32));
return base_expand(context, x, sizes);
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,64 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_flatten(NodeContext& context) {
auto start_dim = context.const_input<int64_t>(1);
auto end_dim = context.const_input<int64_t>(2);
auto shape = std::make_shared<opset10::ShapeOf>(context.get_input(0), element::i32);
auto rank_ = std::make_shared<opset10::ShapeOf>(shape, element::i32);
auto rank = std::make_shared<opset10::Squeeze>(rank_);
// Negative dims are normalized by adding the rank (TODO: use opset::If for dim normalization)
auto start_dim_node = context.get_input(1);
auto end_dim_node = context.get_input(2);
if (start_dim < 0) {
start_dim_node = std::make_shared<opset10::Add>(rank, start_dim_node);
}
if (end_dim < 0) {
end_dim_node = std::make_shared<opset10::Add>(rank, end_dim_node);
}
auto delta = std::make_shared<opset10::Subtract>(end_dim_node, start_dim_node);
auto rank_delta = std::make_shared<opset10::Subtract>(rank, delta);
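// Build the target shape dynamically: the Loop emits (rank - (end_dim - start_dim)) zeros, ScatterElementsUpdate
// places -1 at start_dim, and Reshape with special_zero=true takes the dims marked 0 from the input while -1
// absorbs the flattened range.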
auto true_const0 = opset10::Constant::create(element::boolean, Shape{}, {1});
auto zeros_loop = std::make_shared<opset10::Loop>(rank_delta, true_const0);
auto true_const = opset10::Constant::create(element::boolean, Shape{}, {1});
auto result_true = std::make_shared<opset10::Result>(true_const);
auto zero_const = opset10::Constant::create(element::i32, Shape{1}, {0});
auto result_zero = std::make_shared<opset10::Result>(zero_const);
auto f = std::make_shared<ov::Model>(ResultVector{result_true, result_zero}, ParameterVector{});
zeros_loop->set_function(f);
zeros_loop->set_special_body_ports({-1, 0});
auto zeros = zeros_loop->get_concatenated_slices(result_zero, 0, 1, 1, -1, 0);
auto neg_1_const = opset10::Constant::create(element::i32, Shape{1}, {-1});
auto axis_0 = opset10::Constant::create(element::i32, Shape{1}, {0});
auto start_dim_node_ = std::make_shared<opset10::Unsqueeze>(start_dim_node, axis_0);
auto new_shape = std::make_shared<opset10::ScatterElementsUpdate>(zeros, start_dim_node_, neg_1_const, axis_0);
context.mark_nodes({shape,
rank_,
rank,
delta,
rank_delta,
true_const0,
zeros_loop,
neg_1_const,
axis_0,
start_dim_node_,
new_shape});
return {context.mark_node(std::make_shared<opset10::Reshape>(context.get_input(0), new_shape, true))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,24 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_floor_divide(NodeContext& context) {
auto x = context.get_input(0);
auto y = context.get_input(1);
auto div = context.mark_node(std::make_shared<opset10::Divide>(x, y, true));
return {context.mark_node(std::make_shared<opset10::Floor>(div))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,23 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_floordiv(NodeContext& context) {
auto x = context.get_input(0);
auto y = context.get_input(1);
return {context.mark_node(std::make_shared<opset10::Divide>(x, y, true))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,154 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
namespace {
ov::Output<Node> base_translate_full(NodeContext& context, ov::Output<Node> sizes, ov::Output<Node> value) {
return context.mark_node(std::make_shared<opset10::Broadcast>(value, sizes));
}
ov::Output<Node> base_translate_full_with_convert(NodeContext& context,
ov::Output<Node> sizes,
ov::Output<Node> value,
size_t dtype_id) {
auto filled_tensor = base_translate_full(context, sizes, value);
if (!context.input_is_none(dtype_id)) {
auto dtype = convert_dtype(context.const_input<int64_t>(dtype_id));
filled_tensor = context.mark_node(std::make_shared<opset10::Convert>(filled_tensor, dtype));
}
return filled_tensor;
}
ov::Output<Node> base_translate_full_with_convertlike(NodeContext& context,
ov::Output<Node> sizes,
ov::Output<Node> value,
ov::Output<Node> out) {
auto filled_tensor = base_translate_full(context, sizes, value);
return context.mark_node(std::make_shared<opset10::ConvertLike>(filled_tensor, out));
}
} // namespace
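// The aten::full/zeros/ones family has several overloads that differ only in arity; the number of inputs is used to
// tell whether an explicit dtype or an out tensor is present, and the filled tensor is converted accordingly.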
OutputVector translate_full(NodeContext& context) {
auto sizes = context.get_input(0);
auto value = context.get_input(1);
auto num_inputs = context.get_input_size();
if (num_inputs < 6) {
int out_id = num_inputs == 3 ? 2 : 3;
if (!context.input_is_none(static_cast<size_t>(out_id))) {
auto out = context.get_input(out_id);
return {base_translate_full_with_convertlike(context, sizes, value, out)};
}
return {base_translate_full(context, sizes, value)};
}
size_t dtype_id = num_inputs == 6 ? 2 : 3;
return {base_translate_full_with_convert(context, sizes, value, dtype_id)};
};
OutputVector translate_full_like(NodeContext& context) {
auto input = context.get_input(0);
auto value = context.get_input(1);
auto sizes = context.mark_node(std::make_shared<opset10::ShapeOf>(input));
if (context.get_input_size() == 7) {
return {base_translate_full_with_convert(context, sizes, value, 2)};
}
auto out = context.input_is_none(3) ? input : context.get_input(3);
return {base_translate_full_with_convertlike(context, sizes, value, out)};
};
OutputVector translate_new_full(NodeContext& context) {
auto input = context.get_input(0);
auto sizes = context.get_input(1);
auto value = context.get_input(2);
if (context.get_input_size() == 7 && !context.input_is_none(3)) {
return {base_translate_full_with_convert(context, sizes, value, 3)};
}
return {base_translate_full_with_convertlike(context, sizes, value, input)};
};
OutputVector translate_zeros(NodeContext& context) {
auto sizes = context.get_input(0);
auto value = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {0}));
auto num_inputs = context.get_input_size();
if (num_inputs < 5) {
int out_id = num_inputs == 2 ? 1 : 2;
if (!context.input_is_none(static_cast<size_t>(out_id))) {
auto out = context.get_input(out_id);
return {base_translate_full_with_convertlike(context, sizes, value, out)};
}
return {base_translate_full(context, sizes, value)};
}
size_t dtype_id = num_inputs == 5 ? 1 : 2;
return {base_translate_full_with_convert(context, sizes, value, dtype_id)};
};
OutputVector translate_zeros_like(NodeContext& context) {
auto input = context.get_input(0);
auto value = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {0}));
auto sizes = context.mark_node(std::make_shared<opset10::ShapeOf>(input));
if (context.get_input_size() == 6) {
return {base_translate_full_with_convert(context, sizes, value, 1)};
}
auto out = context.input_is_none(2) ? input : context.get_input(2);
return {base_translate_full_with_convertlike(context, sizes, value, out)};
};
OutputVector translate_new_zeros(NodeContext& context) {
auto input = context.get_input(0);
auto sizes = context.get_input(1);
auto value = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {0}));
if (context.get_input_size() == 6 && !context.input_is_none(2)) {
return {base_translate_full_with_convert(context, sizes, value, 2)};
}
return {base_translate_full_with_convertlike(context, sizes, value, input)};
};
OutputVector translate_ones(NodeContext& context) {
auto sizes = context.get_input(0);
auto value = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {1}));
auto num_inputs = context.get_input_size();
if (num_inputs < 5) {
int out_id = num_inputs == 2 ? 1 : 2;
if (!context.input_is_none(static_cast<size_t>(out_id))) {
auto out = context.get_input(out_id);
return {base_translate_full_with_convertlike(context, sizes, value, out)};
}
return {base_translate_full(context, sizes, value)};
}
size_t dtype_id = num_inputs == 5 ? 1 : 2;
return {base_translate_full_with_convert(context, sizes, value, dtype_id)};
};
OutputVector translate_ones_like(NodeContext& context) {
auto input = context.get_input(0);
auto value = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {1}));
auto sizes = context.mark_node(std::make_shared<opset10::ShapeOf>(input));
if (context.get_input_size() == 6) {
return {base_translate_full_with_convert(context, sizes, value, 1)};
}
auto out = context.input_is_none(2) ? input : context.get_input(2);
return {base_translate_full_with_convertlike(context, sizes, value, out)};
};
OutputVector translate_new_ones(NodeContext& context) {
auto input = context.get_input(0);
auto sizes = context.get_input(1);
auto value = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {1}));
if (context.get_input_size() == 6 && !context.input_is_none(2)) {
return {base_translate_full_with_convert(context, sizes, value, 2)};
}
return {base_translate_full_with_convertlike(context, sizes, value, input)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,25 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_gelu(NodeContext& context) {
auto x = context.get_input(0);
auto approximate = context.const_input<std::string>(1);
// TODO: Add support for "tanh" approximate
FRONT_END_OP_CONVERSION_CHECK(approximate == "none", "Unsupported approximate for Gelu: ", approximate);
return {context.mark_node(std::make_shared<opset10::Gelu>(x))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,24 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "pt_framework_node.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_get_attr(NodeContext& context) {
auto res = context.get_decoder()->try_decode_get_attr();
FRONT_END_OP_CONVERSION_CHECK(res.size() > 0, "GetAttr must have at least one output.");
return res;
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,49 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
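// aten::group_norm(Tensor input, int num_groups, Tensor? weight, Tensor? bias, float eps, bool cudnn_enabled):
// reshape to [N, num_groups, -1], normalize with MVN over the flattened group, reshape back, then apply the optional
// per-channel weight and bias unsqueezed over the trailing spatial dims.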
OutputVector translate_group_norm(NodeContext& context) {
auto data = context.get_input(0);
auto num_groups = context.const_input<int64_t>(1);
// Input 2 (weight) and input 3 (bias) are optional and have no default value; we handle them later
auto eps = static_cast<float>(context.const_input<double>(4));
auto input_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(data, element::i64));
auto scalar_one = context.mark_node(opset10::Constant::create(element::i64, {}, {1}));
auto shape = context.mark_node(
std::make_shared<opset10::Constant>(element::i64, Shape({3}), std::vector<int64_t>{0, num_groups, -1}));
auto reshaped_input = context.mark_node(std::make_shared<opset10::Reshape>(data, shape, true));
auto reduction_axes =
context.mark_node(opset10::Constant::create(element::i64, Shape({1}), std::vector<int64_t>(1, 2)));
auto reshaped_norm = context.mark_node(
std::make_shared<opset10::MVN>(reshaped_input, reduction_axes, true, eps, ov::op::MVNEpsMode::INSIDE_SQRT));
auto norm = context.mark_node(std::make_shared<opset10::Reshape>(reshaped_norm, input_shape, true));
auto input_rank2d = context.mark_node(std::make_shared<opset10::ShapeOf>(input_shape, element::i64));
auto input_rank = context.mark_node(std::make_shared<opset10::Squeeze>(input_rank2d));
auto skip_last = context.mark_node(std::make_shared<opset10::Subtract>(input_rank, scalar_one));
auto axes = context.mark_node(std::make_shared<opset10::Range>(scalar_one, skip_last, scalar_one, element::i64));
if (!context.input_is_none(2)) {
auto weights = context.get_input(2);
weights = context.mark_node(std::make_shared<opset10::Unsqueeze>(weights, axes));
norm = context.mark_node(std::make_shared<opset10::Multiply>(norm, weights));
}
if (!context.input_is_none(3)) {
auto bias = context.get_input(3);
bias = context.mark_node(std::make_shared<opset10::Unsqueeze>(bias, axes));
norm = context.mark_node(std::make_shared<opset10::Add>(norm, bias));
}
return {norm};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,29 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_hardtanh(NodeContext& context) {
float min = -1;
float max = 1;
if (!context.input_is_none(1)) {
min = context.const_input<float>(1);
}
if (!context.input_is_none(2)) {
max = context.const_input<float>(2);
}
return {context.mark_node(std::make_shared<opset10::Clamp>(context.get_input(0), min, max))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,152 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "openvino/util/log.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_if(NodeContext& context) {
auto if_node = std::make_shared<opset10::If>(context.get_input(0));
context.mark_node(if_node);
auto decoder = context.get_decoder();
FRONT_END_OP_CONVERSION_CHECK(decoder->get_subgraph_size() == 2, "If must have 2 subgraphs.");
auto then_decoder = decoder->get_subgraph_decoder(0);
auto then_body = context.convert_subgraph(0);
if_node->set_then_body(then_body);
auto then_inputs = then_decoder->inputs();
auto else_decoder = decoder->get_subgraph_decoder(1);
auto else_body = context.convert_subgraph(1);
if_node->set_else_body(else_body);
auto else_inputs = else_decoder->inputs();
std::set<size_t> input_idxs;
input_idxs.insert(then_inputs.begin(), then_inputs.end());
input_idxs.insert(else_inputs.begin(), else_inputs.end());
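// Body Parameters are named after the tensor index they represent; collect, for every index, the matching Parameter
// from the then and else bodies so each prim::If input can be wired to both of them.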
std::map<size_t, ParameterVector> inputs_map;
std::map<size_t, ResultVector> outputs_map;
for (const auto& param : then_body->get_parameters()) {
auto name = param->get_output_tensor(0).get_any_name();
size_t input_idx = (size_t)std::stoll(name);
FRONT_END_OP_CONVERSION_CHECK(inputs_map.count(input_idx) == 0,
"More than one then_body input with same tensor name: ",
input_idx,
"; existing: ",
inputs_map.at(input_idx)[0],
" adding: ",
param);
inputs_map[input_idx] = {param, nullptr};
}
for (const auto& param : else_body->get_parameters()) {
auto name = param->get_output_tensor(0).get_any_name();
size_t input_idx = (size_t)std::stoll(name);
if (inputs_map.count(input_idx)) {
inputs_map[input_idx][1] = param;
} else {
inputs_map[input_idx] = {nullptr, param};
}
}
OutputVector res;
const auto num_outs = context.num_of_outputs();
const auto then_results = then_body->get_results();
const auto else_results = else_body->get_results();
FRONT_END_OP_CONVERSION_CHECK(then_results.size() >= num_outs && else_results.size() >= num_outs,
"Then or else body has fewer outputs than prim::If requires.");
for (size_t i = 0; i < num_outs; i++) {
res.push_back(if_node->set_output(then_results[i], else_results[i]));
}
// Each body can have mutated outputs that are not included in the PyTorch node outputs.
std::map<size_t, std::shared_ptr<opset10::Result>> extra_then_body_results;
std::map<size_t, std::shared_ptr<opset10::Result>> extra_else_body_results;
std::set<size_t> extra_output_idxs;
for (size_t i = num_outs; i < then_results.size(); i++) {
const auto result = then_results[i];
const auto name = result->input(0).get_tensor().get_any_name();
size_t output_idx = (size_t)std::stoll(name);
FRONT_END_OP_CONVERSION_CHECK(extra_then_body_results.count(output_idx) == 0,
"More than one then_body output with same tensor name: ",
output_idx,
"; existing: ",
extra_then_body_results.at(output_idx),
" adding: ",
result);
extra_then_body_results[output_idx] = result;
extra_output_idxs.insert(output_idx);
}
for (size_t i = num_outs; i < else_results.size(); i++) {
const auto result = else_results[i];
const auto name = result->input(0).get_tensor().get_any_name();
size_t output_idx = (size_t)std::stoll(name);
FRONT_END_OP_CONVERSION_CHECK(extra_else_body_results.count(output_idx) == 0,
"More than one else_body output with same tensor name: ",
output_idx,
"; existing: ",
extra_else_body_results.at(output_idx),
" adding: ",
result);
extra_else_body_results[output_idx] = result;
extra_output_idxs.insert(output_idx);
}
// An extra output in one body may have no counterpart in the other body, so we need to create a Parameter->Result
// pattern in that body.
for (const auto& output_idx : extra_output_idxs) {
if (!extra_then_body_results.count(output_idx)) {
// Need to add Parameter->Result construction in then body
auto new_parameter = std::make_shared<opset10::Parameter>(element::dynamic, PartialShape::dynamic());
new_parameter->get_output_tensor(0).add_names({std::to_string(output_idx)});
auto new_result = std::make_shared<opset10::Result>(new_parameter);
then_body->add_parameters({new_parameter});
then_body->add_results({new_result});
then_body->validate_nodes_and_infer_types();
FRONT_END_OP_CONVERSION_CHECK(inputs_map.count(output_idx), "Input must exist in else body");
inputs_map[output_idx][0] = new_parameter;
extra_then_body_results[output_idx] = new_result;
OPENVINO_DEBUG << "Modified then body: " << if_node << '\n';
} else if (!extra_else_body_results.count(output_idx)) {
// Need to add Parameter->Result construction in else body
auto new_parameter = std::make_shared<opset10::Parameter>(element::dynamic, PartialShape::dynamic());
new_parameter->get_output_tensor(0).add_names({std::to_string(output_idx)});
auto new_result = std::make_shared<opset10::Result>(new_parameter);
else_body->add_parameters({new_parameter});
else_body->add_results({new_result});
else_body->validate_nodes_and_infer_types();
FRONT_END_OP_CONVERSION_CHECK(inputs_map.count(output_idx), "Input must exist in then body");
inputs_map[output_idx][1] = new_parameter;
extra_else_body_results[output_idx] = new_result;
OPENVINO_DEBUG << "Modified else body: " << if_node << '\n';
}
}
// Create prim::If inputs and outputs
for (const auto& input : inputs_map) {
if (!input_idxs.count(input.first)) {
auto external_output = context.get_tensor_from_model_or_create_input(input.first);
if_node->set_input(external_output, input.second[0], input.second[1]);
} else {
auto external_output = context.get_tensor_from_model(input.first);
if (external_output.get_node()) {
if_node->set_input(external_output, input.second[0], input.second[1]);
}
}
}
for (const auto& output_idx : extra_output_idxs) {
context.add_tensor_to_context(
output_idx,
if_node->set_output(extra_then_body_results.at(output_idx), extra_else_body_results.at(output_idx)));
}
if_node->validate_and_infer_types();
return res;
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov


@ -0,0 +1,96 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
namespace {
std::shared_ptr<Node> get_im2col_indices_along_dim(NodeContext& context,
ov::Output<Node> input_d,
int64_t kernel_size_d,
int64_t dilation_d,
int64_t padding_d,
int64_t stride_d) {
auto zero = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {0}));
auto minus_one = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {-1}));
auto kernel_size = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {kernel_size_d}));
auto padding_2 = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {padding_d * 2}));
auto stride = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {stride_d}));
auto input_d_squeezed = context.mark_node(std::make_shared<opset10::Squeeze>(input_d, zero));
auto blocks_d = context.mark_node(std::make_shared<opset10::Add>(input_d_squeezed, padding_2));
auto subtrahend =
context.mark_node(opset10::Constant::create(element::i64, Shape{}, {dilation_d * (kernel_size_d - 1)}));
blocks_d = context.mark_node(std::make_shared<opset10::Subtract>(blocks_d, subtrahend));
auto blocks_d_indices = context.mark_node(std::make_shared<opset10::Range>(zero, blocks_d, stride, element::i64));
blocks_d_indices = context.mark_node(std::make_shared<opset10::Unsqueeze>(blocks_d_indices, zero));
std::vector<int64_t> rng;
for (int64_t i = 0; i < kernel_size_d * dilation_d; i += dilation_d) {
rng.push_back(i);
}
auto kernel_grid = context.mark_node(opset10::Constant::create(element::i64, Shape{rng.size()}, rng));
auto kernel_mask = context.mark_node(std::make_shared<opset10::Unsqueeze>(kernel_grid, minus_one));
return context.mark_node(std::make_shared<opset10::Add>(blocks_d_indices, kernel_mask));
}
} // namespace
OutputVector translate_im2col(NodeContext& context) {
auto input = context.get_input(0);
auto kernel_size = context.const_input<std::vector<int64_t>>(1);
FRONT_END_OP_CONVERSION_CHECK(kernel_size.size() == 2, "kernel size should contain 2 elements");
auto dilation = context.const_input<std::vector<int64_t>>(2);
FRONT_END_OP_CONVERSION_CHECK(dilation.size() == 2, "dilation should contain 2 elements");
auto padding = context.const_input<std::vector<int64_t>>(3);
FRONT_END_OP_CONVERSION_CHECK(padding.size() == 2, "padding should contain 2 elements");
auto stride = context.const_input<std::vector<int64_t>>(4);
FRONT_END_OP_CONVERSION_CHECK(stride.size() == 2, "stride should contain 2 elements");
auto zero = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {0}));
auto input_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(input));
auto zero_f = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {0}));
auto minus_one = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {-1}));
auto two = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {2}));
auto four = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {4}));
auto input_shape_split = context.mark_node(std::make_shared<opset10::Split>(input_shape, zero, 4));
auto input_b = input_shape_split->output(0);
auto input_c = input_shape_split->output(1);
auto input_h = input_shape_split->output(2);
auto input_w = input_shape_split->output(3);
auto stride_h = stride[0];
auto stride_w = stride[1];
auto padding_h = padding[0];
auto padding_w = padding[1];
auto dilation_h = dilation[0];
auto dilation_w = dilation[1];
auto kernel_h = kernel_size[0];
auto kernel_w = kernel_size[1];
auto blocks_row_indices = get_im2col_indices_along_dim(context, input_h, kernel_h, dilation_h, padding_h, stride_h);
auto blocks_col_indices = get_im2col_indices_along_dim(context, input_w, kernel_w, dilation_w, padding_w, stride_w);
auto kernel_window = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {kernel_h * kernel_w}));
auto input_c_squeezed = context.mark_node(std::make_shared<opset10::Squeeze>(input_c, zero));
auto channel_unfolded = context.mark_node(std::make_shared<opset10::Multiply>(input_c_squeezed, kernel_window));
auto channel_unfolded_unsqueezed = context.mark_node(std::make_shared<opset10::Unsqueeze>(channel_unfolded, zero));
auto output_shape = context.mark_node(
std::make_shared<opset10::Concat>(OutputVector{input_b, channel_unfolded_unsqueezed, minus_one}, 0));
auto pads = context.mark_node(
opset10::Constant::create(element::i64, Shape{4}, std::vector<int64_t>{0, 0, padding_h, padding_w}));
auto padded_input =
context.mark_node(std::make_shared<opset10::Pad>(input, pads, pads, zero_f, ov::op::PadMode::CONSTANT));
auto output = context.mark_node(std::make_shared<opset10::Gather>(padded_input, blocks_row_indices, two));
output = context.mark_node(std::make_shared<opset10::Gather>(output, blocks_col_indices, four));
auto permutation_dims =
context.mark_node(opset10::Constant::create(element::i64, Shape{6}, std::vector<int64_t>{0, 1, 2, 4, 3, 5}));
output = context.mark_node(std::make_shared<opset10::Transpose>(output, permutation_dims));
return {context.mark_node(std::make_shared<opset10::Reshape>(output, output_shape, false))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
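
For reference, aten::im2col is the op behind torch.nn.Unfold; a minimal PyTorch sketch (illustrative shapes, not part of the patch) of the behaviour the gather/transpose/reshape decomposition above reproduces:

import torch

x = torch.randn(1, 3, 8, 8)
unfold = torch.nn.Unfold(kernel_size=(2, 2), dilation=1, padding=1, stride=2)
cols = unfold(x)
print(cols.shape)  # (N, C * kh * kw, L) -> torch.Size([1, 12, 25])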

@ -0,0 +1,21 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_int(NodeContext& context) {
return {context.mark_node(std::make_shared<opset10::Convert>(context.get_input(0), element::i64))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,36 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_layer_norm(NodeContext& context) {
auto eps = context.const_input<float>(4);
auto normalized_shape = context.const_input<Shape>(1);
FRONT_END_OP_CONVERSION_CHECK(normalized_shape.size() == 1,
"Translation for aten::layer_norm supports only single normalized_shape value, "
"which means normalizing over the last dimension.");
// TODO: support any dimension
auto axes = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {-1}));
auto out_node = context.mark_node(
std::make_shared<opset10::MVN>(context.get_input(0), axes, true, eps, ov::op::MVNEpsMode::INSIDE_SQRT));
if (!context.input_is_none(2)) {
out_node = context.mark_node(std::make_shared<opset10::Multiply>(out_node, context.get_input(2)));
}
if (!context.input_is_none(3)) {
out_node = context.mark_node(std::make_shared<opset10::Add>(out_node, context.get_input(3)));
}
return {out_node};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
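
The MVN-based translation above only covers normalization over the last axis; a rough sketch (illustrative tensors) of the equivalence it relies on when weight and bias are absent:

import torch

x = torch.randn(2, 5, 16)
ref = torch.nn.functional.layer_norm(x, normalized_shape=(16,), eps=1e-5)
mvn = (x - x.mean(-1, keepdim=True)) / torch.sqrt(x.var(-1, unbiased=False, keepdim=True) + 1e-5)
print(torch.allclose(ref, mvn, atol=1e-6))  # True: layer_norm over the last dim is mean/variance normalization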

@ -0,0 +1,28 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_len(NodeContext& context) {
auto const_0 = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {0}));
auto const_1 = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {1}));
auto input = context.get_input(0);
auto input_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(input, element::i64));
auto slice = context.mark_node(std::make_shared<opset10::Slice>(input_shape, const_0, const_1, const_1));
auto squeeze = std::make_shared<opset10::Squeeze>(slice, const_0);
return {context.mark_node(squeeze)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,24 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_linear(NodeContext& context) {
auto x = context.get_input(0);
auto y = context.get_input(1);
auto matmul = std::make_shared<opset10::MatMul>(x, y, false, true);
return {context.mark_output(make_optional_bias(matmul, context, 2))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
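
aten::linear stores the weight as (out_features, in_features), which is why the MatMul is created with transpose_b=true before the optional bias; a quick sketch with illustrative shapes:

import torch

x = torch.randn(4, 8)
w = torch.randn(6, 8)  # (out_features, in_features)
b = torch.randn(6)
print(torch.allclose(torch.nn.functional.linear(x, w, b), x @ w.T + b))  # True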

@ -0,0 +1,40 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_list_construct(NodeContext& context) {
// Process the case when prim::ListConstruct has all inputs constant
ov::OutputVector consts;
for (int i = 0; i < context.get_input_size(); i++) {
auto input = context.get_input_from_visible_context(i);
auto c_node = std::dynamic_pointer_cast<opset10::Constant>(input.get_node_shared_ptr());
FRONT_END_OP_CONVERSION_CHECK(c_node, "Translation for prim::ListConstruct support only constant inputs");
if (c_node->get_shape().size() == 0) {
c_node = std::make_shared<opset10::Constant>(c_node->get_element_type(), Shape{1}, c_node->get_data_ptr());
}
consts.push_back(c_node);
}
auto list_construct = std::make_shared<opset10::Concat>(consts, 0);
if (list_construct->has_evaluate()) {
OutputVector replacements(list_construct->get_output_size());
if (list_construct->constant_fold(replacements, list_construct->input_values())) {
return replacements;
}
}
return {context.mark_output(list_construct)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,72 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_loop(NodeContext& context) {
auto loop = std::make_shared<opset10::Loop>(context.get_input(0), context.get_input(1));
auto decoder = context.get_decoder();
FRONT_END_OP_CONVERSION_CHECK(decoder->get_subgraph_size() == 1, "Loop must have 1 subgraph.");
auto subgraph_decoder = decoder->get_subgraph_decoder(0);
auto body = context.convert_subgraph(0);
loop->set_function(body);
opset10::Loop::SpecialBodyPorts spec_ports{0, 0};
loop->set_special_body_ports(spec_ports);
auto inputs = subgraph_decoder->inputs();
std::set<size_t> input_idxs(inputs.begin(), inputs.end());
std::map<size_t, ParameterVector> inputs_map;
auto body_parameters = body->get_parameters();
// #0 parameter is counter
for (int i = 1; i < body_parameters.size(); i++) {
auto param = body_parameters[i];
auto name = param->get_output_tensor(0).get_any_name();
size_t input_idx = (size_t)std::stoll(name);
if (!inputs_map.count(input_idx)) {
inputs_map[input_idx] = {param};
} else {
inputs_map[input_idx].push_back(param);
}
}
for (const auto& input : inputs_map) {
if (!input_idxs.count(input.first)) {
auto external_output = context.get_tensor_from_model_or_create_input(input.first);
loop->set_invariant_inputs(external_output, input.second);
} else {
auto external_output = context.get_tensor_from_model(input.first);
if (external_output.get_node()) {
loop->set_invariant_inputs(external_output, input.second);
}
}
}
// TODO: Connect back edges (merged inputs)
auto body_results = body->get_results();
FRONT_END_OP_CONVERSION_CHECK(body_results.size() > 0, "At least one output from loop is required - condition.");
std::set<size_t> output_idxs;
// output 0 is the condition and does not need to be connected
for (int i = 1; i < body_results.size(); i++) {
auto result = body_results[i];
auto name = result->input(0).get_tensor().get_any_name();
size_t out_idx = (size_t)std::stoll(name);
FRONT_END_OP_CONVERSION_CHECK(output_idxs.count(out_idx) == 0,
"More then one body output with same tensor name.");
output_idxs.insert(out_idx);
context.add_tensor_to_context(out_idx, loop->get_iter_value(result, -1));
}
loop->validate_and_infer_types();
return {context.mark_node(loop)->outputs()};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
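
For context, prim::Loop is produced by TorchScript for Python loops, with the current iteration and a continuation condition wired as the special body ports used above; an illustrative sketch (not from this PR) of a graph that contains one:

import torch

@torch.jit.script
def accumulate(n: int):
    total = 0
    for i in range(n):
        total = total + i
    return total

print(accumulate.graph)  # the for-loop lowers to a prim::Loop node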

@ -0,0 +1,28 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_masked_fill(NodeContext& context) {
auto data = context.get_input(0);
auto mask = context.get_input(1);
auto value = context.const_input<float>(2);
auto data_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(data));
auto value_const = context.mark_node(opset10::Constant::create(element::f32, Shape({}), {value}));
auto broadcasted_value = context.mark_node(std::make_shared<opset10::Broadcast>(value_const, data_shape));
auto bool_mask = context.mark_node(std::make_shared<opset10::Convert>(mask, element::boolean));
return {context.mark_node(std::make_shared<opset10::Select>(bool_mask, broadcasted_value, data))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
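
aten::masked_fill replaces elements where the mask is true with a scalar, which maps naturally onto Select with the scalar broadcast to the data shape; a small sketch with illustrative values:

import torch

data = torch.randn(2, 3)
mask = torch.tensor([[True, False, True], [False, True, False]])
ref = data.masked_fill(mask, -1.0)
via_select = torch.where(mask, torch.full_like(data, -1.0), data)
print(torch.allclose(ref, via_select))  # True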

@ -0,0 +1,33 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_max_poolnd(NodeContext& context) {
auto kernel = context.const_input<Shape>(1);
auto strides = context.const_input<Strides>(2);
auto pads = context.const_input<Shape>(3);  // PyTorch supports only symmetric paddings
auto dilations = context.const_input<Strides>(4);
auto rounding_type = context.const_input<bool>(5) ? ov::op::RoundingType::CEIL : ov::op::RoundingType::FLOOR;
return {context.mark_node(std::make_shared<opset10::MaxPool>(context.get_input(0),
strides,
dilations,
pads,
pads,
kernel,
rounding_type))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,26 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_mean(NodeContext& context) {
auto x = context.get_input(0);
auto y = context.get_input(1);
auto keep_dims = context.const_input<bool>(2);
FRONT_END_OP_CONVERSION_CHECK(context.input_is_none(3),
"Only False is supported for input with index 3 for aten::mean");
return {context.mark_node(std::make_shared<opset10::ReduceMean>(x, y, keep_dims))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,76 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_max(NodeContext& context) {
// torch.max (and torch.min) bundles several overloads:
// torch.max(input), torch.max(input, other) and torch.max(input, dim, keepdim)
auto x = context.get_input(0);
// torch.max(input)
if (context.input_is_none(1) && context.input_is_none(2)) {
auto axes = get_axes_range(context, 0);
return {context.mark_node(std::make_shared<opset10::ReduceMax>(x, axes, false))};
}
// torch.max(input, other)
if (context.input_is_none(2)) {
auto y = context.get_input(1);
return {context.mark_node(std::make_shared<opset10::Maximum>(x, y))};
}
// torch.max(input, dim, keepdim), returns values and indices
auto axes_node = context.get_input(1);
auto axis_const = context.const_input<int64_t>(1);
auto keepdims = context.const_input<bool>(2);
auto values = context.mark_node(std::make_shared<opset10::ReduceMax>(x, axes_node, keepdims));
auto k = context.mark_node(std::make_shared<opset10::Constant>(element::i64, Shape{}, 1));
auto topk =
std::make_shared<opset10::TopK>(x, k, axis_const, opset10::TopK::Mode::MAX, opset10::TopK::SortType::NONE);
auto indices = context.mark_node(std::make_shared<opset10::Convert>(topk->output(1), element::i64));
if (!keepdims) {
indices = std::make_shared<opset10::Squeeze>(indices, axes_node);
}
return {values, indices};
};
OutputVector translate_min(NodeContext& context) {
// torch.min (and torch.max) bundles several overloads:
// torch.min(input), torch.min(input, other) and torch.min(input, dim, keepdim)
auto x = context.get_input(0);
// torch.min(input)
if (context.input_is_none(1) && context.input_is_none(2)) {
auto axes = get_axes_range(context, 0);
return {context.mark_node(std::make_shared<opset10::ReduceMin>(x, axes, false))};
}
// torch.min(input, other)
if (context.input_is_none(2)) {
auto y = context.get_input(1);
return {context.mark_node(std::make_shared<opset10::Minimum>(x, y))};
}
// torch.min(input, dim, keepdim), returns values and indices
auto axes_node = context.get_input(1);
auto axis_const = context.const_input<int64_t>(1);
auto keepdims = context.const_input<bool>(2);
auto values = context.mark_node(std::make_shared<opset10::ReduceMin>(x, axes_node, keepdims));
auto k = context.mark_node(std::make_shared<opset10::Constant>(element::i64, Shape{}, 1));
auto topk =
std::make_shared<opset10::TopK>(x, k, axis_const, opset10::TopK::Mode::MIN, opset10::TopK::SortType::NONE);
auto indices = context.mark_node(std::make_shared<opset10::Convert>(topk->output(1), element::i64));
if (!keepdims) {
indices = std::make_shared<opset10::Squeeze>(indices, axes_node);
}
return {values, indices};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
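
The three overloads handled above look like this on the PyTorch side (illustrative tensors); the dim form is the one that returns both values and indices, hence the extra TopK:

import torch

x = torch.randn(3, 5)
print(x.max())                                  # full reduction to a scalar
print(torch.max(x, torch.zeros_like(x)).shape)  # elementwise maximum with another tensor
values, indices = torch.max(x, dim=1)           # reduction along a dim: values plus argmax indices
print(values.shape, indices.shape)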

@ -0,0 +1,24 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_neg(NodeContext& context) {
auto x = context.get_input(0);
auto const_neg_1 = context.mark_node(opset10::Constant::create(element::i32, Shape{}, {-1}));
auto cast = context.mark_node(std::make_shared<opset10::ConvertLike>(const_neg_1, x));
return {context.mark_node(std::make_shared<opset10::Multiply>(x, cast))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,40 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset9.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_nms(NodeContext& context) {
auto const_0 = context.mark_node(opset9::Constant::create(element::i64, Shape{}, {0}));
auto const_1 = context.mark_node(opset9::Constant::create(element::i64, Shape{}, {1}));
auto const_2 = context.mark_node(opset9::Constant::create(element::i64, Shape{1}, {2}));
// the shape required by the PyTorch operator differs from the shape required by OpenVINO
auto boxes_shape = context.mark_node(opset9::Constant::create(element::i64, Shape{3}, {1, -1, 4}));
auto boxes = context.mark_node(std::make_shared<opset9::Reshape>(context.get_input(0), boxes_shape, false));
// the Unsqueeze operator is also used to align the shapes required by PyTorch and OpenVINO
auto axis_01 = context.mark_node(opset9::Constant::create(element::i64, Shape{2}, {0, 1}));
auto scores = context.mark_node(std::make_shared<opset9::Unsqueeze>(context.get_input(1), axis_01));
auto max_output_per_class =
context.mark_node(opset9::Constant::create(element::i64, Shape{1}, {std::numeric_limits<int64_t>::max()}));
auto iou_threshold = context.get_input(2);
auto nms_out = context.mark_node(
std::make_shared<opset9::NonMaxSuppression>(boxes, scores, max_output_per_class, iou_threshold));
auto select = context.mark_node(std::make_shared<opset9::Gather>(nms_out, const_2, const_1));
auto squeeze = std::make_shared<opset9::Squeeze>(select, const_1);
return {context.mark_node(squeeze)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
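
torchvision.ops.nms takes flat (N, 4) boxes and (N,) scores and returns the indices of the kept boxes, while the OpenVINO NonMaxSuppression op expects batched, per-class inputs, which is what the Reshape/Unsqueeze/Gather above compensate for; a rough reference sketch (illustrative boxes, assumes torchvision is installed):

import torch
from torchvision.ops import nms

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0],
                      [50.0, 50.0, 60.0, 60.0]])
scores = torch.tensor([0.9, 0.8, 0.7])
print(nms(boxes, scores, iou_threshold=0.5))  # indices of kept boxes, e.g. tensor([0, 2])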

@ -0,0 +1,24 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_nonzero(NodeContext& context) {
auto cond = context.get_input(0);
auto non_zero = context.mark_node(std::make_shared<opset10::NonZero>(cond));
auto input_order = context.mark_node(opset10::Constant::create(element::i64, Shape{2}, {1, 0}));
return {context.mark_node(std::make_shared<opset10::Transpose>(non_zero, input_order))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,52 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_norm(NodeContext& context) {
auto input_tensor = context.get_input(0);
auto p = context.const_input<float>(1);
auto dim = context.get_input(2);
auto keep_dim = context.const_input<bool>(3);
OutputVector res;
if (p == 1) {
auto reduce_l1 = context.mark_node(std::make_shared<opset10::ReduceL1>(input_tensor, dim, keep_dim));
res.push_back(reduce_l1);
} else if (p == 2) {
auto reduce_l2 = context.mark_node(std::make_shared<opset10::ReduceL2>(input_tensor, dim, keep_dim));
res.push_back(reduce_l2);
} else if (p == std::numeric_limits<float>::infinity()) {
auto abs = context.mark_node(std::make_shared<opset10::Abs>(input_tensor));
auto max = context.mark_node(std::make_shared<opset10::ReduceMax>(abs, dim, keep_dim));
res.push_back(max);
} else if (p == -std::numeric_limits<float>::infinity()) {
auto abs = context.mark_node(std::make_shared<opset10::Abs>(input_tensor));
auto min = context.mark_node(std::make_shared<opset10::ReduceMin>(abs, dim, keep_dim));
res.push_back(min);
} else {
auto const_p = context.mark_node(opset10::Constant::create(element::f64, Shape{1}, {p}));
auto const_p_inv = context.mark_node(opset10::Constant::create(element::f64, Shape{1}, {1.0 / p}));
auto abs = context.mark_node(std::make_shared<opset10::Abs>(input_tensor));
auto pow = context.mark_node(std::make_shared<opset10::Power>(abs, const_p));
auto sum = context.mark_node(std::make_shared<opset10::ReduceSum>(pow, dim, keep_dim));
auto pow_inv = context.mark_node(std::make_shared<opset10::Power>(sum, const_p_inv));
res.push_back(pow_inv);
}
return res;
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
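
The fallback branch above implements the general p-norm as sum(|x|^p)^(1/p), with the infinity norms reduced to max/min of |x|; a quick sketch (illustrative values) of the formula it matches:

import torch

x = torch.randn(2, 4)
p, dim = 3.0, 1
ref = torch.norm(x, p=p, dim=dim)
manual = x.abs().pow(p).sum(dim=dim).pow(1.0 / p)
print(torch.allclose(ref, manual))  # True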

@ -0,0 +1,21 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_numel(NodeContext& context) {
return {numel(context, 0)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,111 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/core/coordinate_diff.hpp"
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_pad(NodeContext& context) {
auto data = context.get_input(0);
auto paddings = context.const_input<std::vector<int64_t>>(1);
std::string mode = "constant";
auto shape = context.mark_node(std::make_shared<opset10::ShapeOf>(data, element::i32));
auto rank = context.mark_node(std::make_shared<opset10::ShapeOf>(shape, element::i32));
auto reduced_rank = context.mark_node(std::make_shared<opset10::Squeeze>(rank));
auto zero = context.mark_node(opset10::Constant::create(element::i32, Shape{}, {0}));
auto zero_f = context.mark_node(opset10::Constant::create(element::f32, Shape{}, {0}));
auto pad_size_half = paddings.size() / 2;
std::vector<int64_t> pad_b(pad_size_half, 0);
std::vector<int64_t> pad_e(pad_size_half, 0);
for (int i = 0; i < pad_size_half; i++) {
pad_b[i] = paddings[paddings.size() - 2 - 2 * i];
pad_e[i] = paddings[paddings.size() - 1 - 2 * i];
}
auto pads_begin_short = context.mark_node(opset10::Constant::create(element::i32, Shape{pad_size_half}, pad_b));
auto pads_end_short = context.mark_node(opset10::Constant::create(element::i32, Shape{pad_size_half}, pad_e));
auto pads_short_len = context.mark_node(opset10::Constant::create(element::i32, Shape{1}, {pad_size_half}));
auto pads_diff = context.mark_node(std::make_shared<opset10::Subtract>(rank, pads_short_len));
auto pads_remaining = context.mark_node(std::make_shared<opset10::Broadcast>(zero, pads_diff));
auto pads_begins =
context.mark_node(std::make_shared<opset10::Concat>(NodeVector{pads_remaining, pads_begin_short}, 0));
auto pads_ends =
context.mark_node(std::make_shared<opset10::Concat>(NodeVector{pads_remaining, pads_end_short}, 0));
if (!context.input_is_none(2)) {
mode = context.const_input<std::string>(2);
}
if (mode == "circular") {
int64_t pad_l;
int64_t pad_r;
auto pad_last_id = paddings.size();
auto cur = data.get_node_shared_ptr();
auto step = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {1}));
for (auto i = 0; i < pad_size_half; i++) {
ov::NodeVector tensors;
pad_r = paddings[pad_last_id - (2 * i + 1)];
pad_l = paddings[pad_last_id - (2 * i + 2)];
auto axes = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {2 + i}));
if (pad_l > 0) {
auto start = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {-pad_l}));
auto end = context.mark_node(std::make_shared<opset10::Gather>(
shape,
context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {2 + i})),
context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {0}))));
auto left = context.mark_node(std::make_shared<opset10::Slice>(cur, start, end, step, axes));
tensors.push_back(left);
}
if (pad_l < 0 || pad_r < 0) {
auto start =
context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {pad_l < 0 ? -pad_l : 0}));
auto end =
context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {pad_r < 0 ? pad_r : 0}));
auto middle = context.mark_node(std::make_shared<opset10::Slice>(cur, start, end, step, axes));
tensors.push_back(middle);
} else {
tensors.push_back(cur);
}
if (pad_r > 0) {
auto start = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {0}));
auto end = context.mark_node(opset10::Constant::create(element::i64, Shape{1}, {pad_r}));
auto right = context.mark_node(std::make_shared<opset10::Slice>(cur, start, end, step, axes));
tensors.push_back(right);
}
if (tensors.size()) {
cur = context.mark_node(std::make_shared<opset10::Concat>(tensors, 2 + i));
}
}
return {cur};
}
if (mode == "constant") {
if (!context.input_is_none(3)) {
auto pad_value = context.get_input(3);
return {context.mark_node(
std::make_shared<opset10::Pad>(data, pads_begins, pads_ends, pad_value, ov::op::PadMode::CONSTANT))};
}
return {context.mark_node(
std::make_shared<opset10::Pad>(data, pads_begins, pads_ends, zero_f, ov::op::PadMode::CONSTANT))};
}
if (mode == "reflect") {
return {context.mark_node(
std::make_shared<opset10::Pad>(data, pads_begins, pads_ends, zero_f, ov::op::PadMode::REFLECT))};
}
if (mode == "replicate") {
return {context.mark_node(
std::make_shared<opset10::Pad>(data, pads_begins, pads_ends, zero_f, ov::op::PadMode::EDGE))};
}
FRONT_END_OP_CONVERSION_CHECK(false, "aten::pad conversion doesn't support [ " + mode + " ] padding mode");
}
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
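
The paddings vector from aten::pad is ordered from the last dimension backwards, which is why the loop above reads it from the end when building pads_begin/pads_end and the circular slices; a short PyTorch sketch (illustrative input) of that convention:

import torch
import torch.nn.functional as F

x = torch.arange(9.0).reshape(1, 1, 3, 3)
# pad tuple order: (w_left, w_right, h_top, h_bottom)
print(F.pad(x, (1, 0, 0, 0)).shape)                   # only the width grows
print(F.pad(x, (1, 1, 1, 1), mode="circular")[0, 0])  # rows/columns wrap around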

@ -0,0 +1,25 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_reciprocal(NodeContext& context) {
auto x = context.get_input(0);
auto const_neg_1 = opset10::Constant::create(element::i32, Shape{}, {-1});
auto cast = std::make_shared<opset10::ConvertLike>(const_neg_1, x);
auto power = std::make_shared<opset10::Power>(x, cast);
return {context.mark_node(power)};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,22 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_relu6(NodeContext& context) {
auto x = context.get_input(0);
return {context.mark_node(std::make_shared<opset10::Clamp>(x, 0., 6.))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,28 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_repeat(NodeContext& context) {
auto x = context.get_input(0);
auto repeats = context.get_input(1);
auto one = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {1}));
auto sizes_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(repeats, element::i64));
auto expand_shape = context.mark_node(std::make_shared<opset10::Broadcast>(one, sizes_shape));
auto expanded_input =
context.mark_node(std::make_shared<opset10::Broadcast>(x, expand_shape, ov::op::BroadcastType::BIDIRECTIONAL));
return {context.mark_node(std::make_shared<opset10::Tile>(expanded_input, repeats))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
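
aten::repeat tiles the input, and when repeats has more entries than the input rank the tensor is treated as if it had extra leading size-1 dims, which the Broadcast-to-ones step above emulates before the Tile; for reference (illustrative shapes):

import torch

x = torch.randn(2, 3)
print(x.repeat(2, 2).shape)     # torch.Size([4, 6])
print(x.repeat(2, 1, 2).shape)  # torch.Size([2, 2, 6]): an extra leading dim is added first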

@ -0,0 +1,41 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "pt_framework_node.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_reshape(NodeContext& context) {
auto shape_node = context.get_input(1).get_node();
auto shape_node_fw_node = dynamic_cast<PtFrameworkNode*>(shape_node);
std::shared_ptr<ov::Node> reshape;
// TODO: move this to transform stage
if (shape_node_fw_node && shape_node_fw_node->get_decoder()->get_op_type() == "prim::ListConstruct") {
OutputVector inputs;
auto axis_0 = context.mark_node(opset10::Constant::create(element::i64, Shape{}, {0}));
for (auto& input : shape_node->inputs()) {
auto rank = input.get_partial_shape().rank();
FRONT_END_OP_CONVERSION_CHECK(rank.is_dynamic() || rank.get_length() == 0, "Rank must be 0");
auto unsqueeze = context.mark_node(std::make_shared<opset10::Unsqueeze>(input.get_source_output(), axis_0));
inputs.push_back(unsqueeze);
}
auto concat = context.mark_node(std::make_shared<opset10::Concat>(inputs, 0));
reshape = context.mark_node(std::make_shared<opset10::Reshape>(context.get_input(0), concat, false));
} else {
reshape =
context.mark_node(std::make_shared<opset10::Reshape>(context.get_input(0), context.get_input(1), false));
}
return {reshape};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,24 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_reshape_as(NodeContext& context) {
auto input_tensor = context.get_input(0);
auto shape_tensor = context.get_input(1);
auto desired_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(shape_tensor));
return {context.mark_node(std::make_shared<opset10::Reshape>(input_tensor, desired_shape, false))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

@ -0,0 +1,37 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_roll(NodeContext& context) {
const auto data = context.get_input(0);
const auto shifts = context.get_input(1);
const auto axes = context.get_input(2);
const auto shifts_pshape = shifts.get_partial_shape();
const auto axes_pshape = axes.get_partial_shape();
const auto match_dims = axes_pshape.compatible(shifts_pshape);
if (!match_dims) {
const auto const_minus_1 = opset10::Constant::create(element::i32, Shape{1}, {-1});
const auto axis_0 = opset10::Constant::create(element::i32, Shape{1}, {0});
const auto flat = std::make_shared<opset10::Reshape>(data, const_minus_1, false);
const auto roll = std::make_shared<opset10::Roll>(flat, shifts, axis_0);
const auto shape_of_data = std::make_shared<opset10::ShapeOf>(data);
const auto reshape = std::make_shared<opset10::Reshape>(roll, shape_of_data, false);
context.mark_nodes({const_minus_1, flat, roll, shape_of_data, reshape});
return {reshape};
}
return {context.mark_node(std::make_shared<opset10::Roll>(data, shifts, axes))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov
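
torch.roll called without dims flattens the tensor, rolls it, and restores the original shape; the Reshape fallback above mirrors that behaviour when the shifts and axes shapes do not match. A small sketch with illustrative values:

import torch

x = torch.arange(6).reshape(2, 3)
print(torch.roll(x, shifts=1, dims=1))  # roll along a specific axis
print(torch.roll(x, shifts=2))          # no dims: flatten, roll, reshape back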

@ -0,0 +1,25 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace pytorch {
namespace op {
OutputVector translate_rsqrt(NodeContext& context) {
auto data = context.get_input(0);
auto input_shape = context.mark_node(std::make_shared<opset10::ShapeOf>(data));
auto one_const = context.mark_node(opset10::Constant::create(element::f32, Shape({}), {1}));
auto sqrt_data = context.mark_node(std::make_shared<opset10::Sqrt>(data));
return {context.mark_node(std::make_shared<opset10::Divide>(one_const, sqrt_data))};
};
} // namespace op
} // namespace pytorch
} // namespace frontend
} // namespace ov

Some files were not shown because too many files have changed in this diff.