TensorFlow Lite frontend (#14977)

* Infrastructure for tflite

* Removed submodule flatbuffers

* Added flatbuffers submodule, pinned to v22.12.06 (acf39ff)

* Move headers back

* Flatbuffers integration

* Small fixes

* Started parsing the Model

* flatbuffer changes

* decoder_flatbuffer changes

* Lite Input Model -- not needed as of now but looks cool

* Rolled back inheritance from ov::frontend::tensorflow::InputModel

* Results are not treated as outputs, but it's OK

* Fix misplaced input vs output

* Refactor

* Load model op-by-op. Frontend API finalized

* Still debugging; there are prints here and there. Decoder is not sane yet

* Convolution with all attributes is translated and quantization is applied for inputs and constants. TODO: quantize intermediate tensors, separate decoder-specific logic?

* Float SSD and PoseNet models are showing good accuracy

* Needs refactoring but works flawlessly

* Telemetry and lightweight model cutting

* Code style and test changes. Extensions supported

* Quantization and style

* Style refinements

* Move onednn back

* New portion of operations enabled

* TFLite FE doesn't inherit TF FE

* Moved files to another directory

* Rename header op_table.hpp to common_op_table.hpp for all files in src/frontends/tensorflow_common/src/op/

* Removed visibility macros

* CMake changes

* Unit-test execution in .ci

* Update labeler.yml

* Codeowners

* Style check and fix

* Static Build arrangement

* Addressing the comments

* install common headers to previous place

* New approach with public decoder and graph_iterator

* New approach with public decoder and graph_iterator

* Move GraphIterator back

* Comments addressed

* Comments addressed

* Preliminary TF FE README.md changes

* Added target_compile_definitions OPENVINO_STATIC_LIBRARY for static build

* Fixed conflicts and added TF to common places

* Frontends use only openvino::core::dev API

* Merged common tensorflow changes and made the code build and work on a selected set of models

* Style

* Rollback unnecessary changes from Tensorflow FE

* Rollback unnecessary changes from Tensorflow Common

* Minor refactor

* cmake minor refactoring

* Mixed commit

* Style and merge fix

* Low hanging fruit operations

* Fix windows build

* Refactor quantization parameters representation

* license compliance. approved by OS PDT

* copyrights in generic file

* dependabot

* labeler

* Unit Test to be triggered in CI

* CMake variable naming; corrected copyright years in copyrights/generic file

* library renamed in .ci/ calls

* Copyright year update

* Set openvino-tf-frontend-maintainers as owner of /src/frontends/tensorflow_lite/

* Fixed flatc cross-compilation

* Cleaned flatbuffers header usage

* Nitpicks solved

* Update cmake/templates/OpenVINOConfig.cmake.in

* Compile with flatbuffers headers

* Fixed "which is prefixed in the source directory"

* Fixed typo in flatbuffers cmake

* Removed flatbuffers submodule

* Added fork submodule

* Fixed static build

* Fixed cross-compilation

* Fixed -Wshadow warning

* Fixed warning on Windows

* Use only headers from flatbuffers library

* Added LTO and fixed compilation errors on Windows

* Fixed warnings in tensorflow_common

* Move ctors implementation to cpp file

* Added information about new frontends to common FE part

* Temporarily disable warnings

* Fixed code style using clang-format

* Fixed Windows

* reverted changes in onnx

* Revert changes in onnx_common

* Removed pragma once from cpp
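Several commits above concern quantization parameter handling (quantization applied to inputs and constants, later a refactor of the parameter representation). For reference, the standard TFLite affine scheme those parameters describe can be sketched as follows; these are illustrative helpers, not the frontend's actual code:

```python
def dequantize(q_values, scale, zero_point):
    """TFLite affine dequantization: real = scale * (q - zero_point)."""
    return [scale * (q - zero_point) for q in q_values]

def quantize(real_values, scale, zero_point, qmin=0, qmax=255):
    """Inverse mapping: round to nearest, then clamp to the uint8 range."""
    return [max(qmin, min(qmax, round(r / scale) + zero_point)) for r in real_values]

# Round-trip for a uint8 tensor with scale=0.5, zero_point=128:
print(dequantize([128, 255, 0], 0.5, 128))  # [0.0, 63.5, -64.0]
```

Applying the same per-tensor (or per-channel) scale and zero point to intermediate tensors is exactly the TODO item mentioned in the convolution commit above.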

Co-authored-by: missjane <estepyreva@gmail.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Authored by Evgenya Stepyreva, 2023-01-27 06:27:59 +04:00; committed by GitHub.
Parent: d8d0083744, commit: 0513a79a79.
118 changed files with 5445 additions and 141 deletions.


@@ -397,6 +397,9 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_tensorflow_common_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-TensorflowCommon.xml
displayName: 'TensorFlow Common Unit Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_tensorflow_lite_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-TensorflowLite.xml
displayName: 'TensorFlow Lite Frontend Unit Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_lp_transformations_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-LpTransformations.xml
displayName: 'Low Precision Transformations Tests'


@@ -331,6 +331,12 @@ jobs:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'TensorFlow Common Unit Tests'
- script: |
$(INSTALL_TEST_DIR)/ov_tensorflow_lite_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-TensorflowLite.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'TensorFlow Lite Frontend Unit Tests'
- script: $(INSTALL_TEST_DIR)/ov_cpu_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_cpu_unit_tests.xml
displayName: 'Intel CPU Unit Tests'


@@ -201,10 +201,6 @@ jobs:
displayName: 'ONNX Frontend Tests'
enabled: 'false'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/InferenceEngineUnitTests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineUnitTests.xml
displayName: 'IE UT old'
enabled: 'false'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_cpu_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_cpu_unit_tests.xml
displayName: 'Intel CPU Unit Tests'
enabled: 'false'


@@ -281,6 +281,9 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_tensorflow_common_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-TensorflowCommon.xml
displayName: 'TensorFlow Common Unit Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_tensorflow_lite_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-TensorflowLite.xml
displayName: 'TensorFlow Lite Frontend Unit Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_lp_transformations_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\LpTransformations.xml
displayName: 'Low Precision Transformations Tests'

.github/CODEOWNERS

@@ -79,6 +79,7 @@
/src/frontends/paddle/ @openvinotoolkit/openvino-ie-paddle-maintainers
/src/frontends/tensorflow/ @openvinotoolkit/openvino-tf-frontend-maintainers
/src/frontends/tensorflow_common/ @openvinotoolkit/openvino-tf-frontend-maintainers
/src/frontends/tensorflow_lite/ @openvinotoolkit/openvino-tf-frontend-maintainers
/src/frontends/pytorch/ @openvinotoolkit/openvino-pytorch-frontend-maintainers
# OpenVINO ONNX Frontend:


@@ -119,6 +119,21 @@ updates:
- "p-wysocki"
versioning-strategy: increase-if-necessary
# TensorFlow Lite FE tests requirements
- package-ecosystem: pip
directory: "/src/frontends/tensorflow_lite/tests/"
schedule:
interval: "daily"
time: "09:00"
timezone: "Asia/Dubai"
open-pull-requests-limit: 3
assignees:
- "jane-intel"
- "rkazants"
- "jiwaszki"
- "p-wysocki"
versioning-strategy: increase-if-necessary
#
# Python Samples
#

.github/labeler.yml

@@ -27,9 +27,11 @@
'category: CPP API':
- 'src/inference/include/**/*'
- 'src/core/include/**/*'
- 'src/frontends/common/**/*'
- 'src/frontends/common/include/**/*'
- 'src/frontends/onnx/frontend/include/**/*'
- 'src/frontends/tensorflow/include/**/*'
- 'src/frontends/tensorflow_lite/include/**/*'
- 'src/frontends/pytorch/include/**/*'
- 'src/frontends/paddle/include/**/*'
'category: CPU':
@@ -121,6 +123,11 @@
- 'src/frontends/tensorflow_common/**/*'
- 'tests/layer_tests/tensorflow_tests/**/*'
'category: TFL FE':
- 'src/frontends/tensorflow_lite/**/*'
- 'src/frontends/tensorflow_common/**/*'
- 'tests/layer_tests/tensorflow_lite_tests/**/*'
'category: PyTorch FE':
- 'src/frontends/pytorch/**/*'
- 'tests/layer_tests/pytorch_tests/**/*'

.gitmodules

@@ -63,3 +63,7 @@
path = thirdparty/json/nlohmann_json
url = https://github.com/nlohmann/json.git
shallow = true
[submodule "thirdparty/flatbuffers/flatbuffers"]
path = thirdparty/flatbuffers/flatbuffers
url = https://github.com/ilya-lavrenov/flatbuffers.git
branch = cmake-3-13-fixes


@@ -310,6 +310,7 @@ function(ov_mark_target_as_cc)
endfunction()
include(python_requirements)
include(native_compile)
# Code style utils


@@ -38,7 +38,7 @@ ie_option (ENABLE_UB_SANITIZER "enable UndefinedBahavior sanitizer" OFF)
ie_option (ENABLE_THREAD_SANITIZER "enable checking data races via ThreadSanitizer" OFF)
ie_dependent_option (ENABLE_COVERAGE "enable code coverage" OFF "CMAKE_CXX_COMPILER_ID STREQUAL GNU OR OV_COMPILER_IS_CLANG" OFF)
ie_dependent_option (ENABLE_COVERAGE "enable code coverage" OFF "CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG" OFF)
# Defines CPU capabilities


@@ -155,11 +155,27 @@ macro(ov_add_frontend)
list(APPEND PROTO_HDRS "${OUTPUT_PB_HEADER}")
endforeach()
file(GLOB flatbuffers_schema_files ${frontend_root_dir}/src/schema/*.fbs)
foreach(INFILE IN LISTS flatbuffers_schema_files)
get_filename_component(FILE_WE ${INFILE} NAME_WE)
set(OUTPUT_FC_HEADER ${CMAKE_CURRENT_BINARY_DIR}/${FILE_WE}_generated.h)
set(GENERATED_PROTO ${INFILE})
add_custom_command(
OUTPUT "${OUTPUT_FC_HEADER}"
COMMAND ${flatbuffers_COMPILER} ARGS -c --gen-mutable -o ${CMAKE_CURRENT_BINARY_DIR} ${INFILE}
DEPENDS ${flatbuffers_DEPENDENCY} ${GENERATED_PROTO}
COMMENT "Running C++ flatbuffers compiler (${flatbuffers_COMPILER}) on ${GENERATED_PROTO}"
VERBATIM
COMMAND_EXPAND_LISTS)
list(APPEND PROTO_HDRS "${OUTPUT_FC_HEADER}")
endforeach()
# Disable all warnings for generated code
set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED TRUE)
# Create library
add_library(${TARGET_NAME} ${LIBRARY_SRC} ${LIBRARY_HEADERS} ${LIBRARY_PUBLIC_HEADERS} ${PROTO_SRCS} ${PROTO_HDRS})
add_library(${TARGET_NAME} ${LIBRARY_SRC} ${LIBRARY_HEADERS} ${LIBRARY_PUBLIC_HEADERS}
${PROTO_SRCS} ${PROTO_HDRS} ${flatbuffers_schema_files} ${proto_files})
if(OV_FRONTEND_LINKABLE_FRONTEND)
# create beautiful alias
@@ -234,8 +250,12 @@ macro(ov_add_frontend)
endif()
endif()
if(flatbuffers_schema_files)
target_include_directories(${TARGET_NAME} SYSTEM PRIVATE ${flatbuffers_INCLUDE_DIRECTORIES})
endif()
add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME}
EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS})
EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS} ${flatbuffers_schema_files})
add_dependencies(ov_frontends ${TARGET_NAME})
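The custom command above names each generated header after the schema file's basename (CMake's `get_filename_component(... NAME_WE)`) plus flatc's fixed `_generated.h` suffix. That naming rule can be sketched in Python; this is an illustrative helper, not part of the build:

```python
from pathlib import Path

def generated_header(schema_path: str, out_dir: str) -> str:
    """Map a .fbs schema to the header flatc emits into out_dir:
    basename without extension + "_generated.h"."""
    return str(Path(out_dir) / (Path(schema_path).stem + "_generated.h"))

# e.g. src/schema/model.fbs -> <build dir>/model_generated.h
print(generated_header("src/schema/model.fbs", "build/tensorflow_lite"))
```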


@@ -0,0 +1,100 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include(ExternalProject)
#
# ov_native_compile_external_project(
# TARGET_NAME <name>
# NATIVE_INSTALL_DIR <source dir>
# NATIVE_TARGETS <target1 target2 ..>
# [NATIVE_SOURCE_SUBDIR <subdir>]
# [CMAKE_ARGS <option1 option2 ...>]
# )
#
function(ov_native_compile_external_project)
set(oneValueRequiredArgs NATIVE_INSTALL_DIR TARGET_NAME NATIVE_SOURCE_SUBDIR)
set(multiValueArgs CMAKE_ARGS NATIVE_TARGETS)
cmake_parse_arguments(ARG "" "${oneValueRequiredArgs};${oneValueOptionalArgs}" "${multiValueArgs}" ${ARGN})
if(YOCTO_AARCH64)
# need to unset several variables which can set env to cross-environment
foreach(var SDKTARGETSYSROOT CONFIG_SITE OECORE_NATIVE_SYSROOT OECORE_TARGET_SYSROOT
OECORE_ACLOCAL_OPTS OECORE_BASELIB OECORE_TARGET_ARCH OECORE_TARGET_OS CC CXX
CPP AS LD GDB STRIP RANLIB OBJCOPY OBJDUMP READELF AR NM M4 TARGET_PREFIX
CONFIGURE_FLAGS CFLAGS CXXFLAGS LDFLAGS CPPFLAGS KCFLAGS OECORE_DISTRO_VERSION
OECORE_SDK_VERSION ARCH CROSS_COMPILE OE_CMAKE_TOOLCHAIN_FILE OPENSSL_CONF
OE_CMAKE_FIND_LIBRARY_CUSTOM_LIB_SUFFIX PKG_CONFIG_SYSROOT_DIR PKG_CONFIG_PATH)
if(DEFINED ENV{${var}})
list(APPEND cmake_env --unset=${var})
endif()
endforeach()
# filter out PATH from yocto locations
string(REPLACE ":" ";" custom_path "$ENV{PATH}")
foreach(path IN LISTS custom_path)
if(NOT path MATCHES "^$ENV{OECORE_NATIVE_SYSROOT}")
list(APPEND clean_path "${path}")
endif()
endforeach()
find_host_program(NATIVE_CMAKE_COMMAND
NAMES cmake
PATHS ${clean_path}
DOC "Host cmake"
REQUIRED
NO_DEFAULT_PATH)
else()
set(NATIVE_CMAKE_COMMAND "${CMAKE_COMMAND}")
endif()
# if env has CMAKE_TOOLCHAIN_FILE, we need to skip it
if(DEFINED ENV{CMAKE_TOOLCHAIN_FILE})
list(APPEND cmake_env --unset=CMAKE_TOOLCHAIN_FILE)
endif()
# compile flags
if(CMAKE_COMPILER_IS_GNUCXX)
set(compile_flags "-Wno-undef -Wno-error -Wno-deprecated-declarations")
endif()
if(ARG_NATIVE_SOURCE_SUBDIR)
set(ARG_NATIVE_SOURCE_SUBDIR SOURCE_SUBDIR ${ARG_NATIVE_SOURCE_SUBDIR})
endif()
ExternalProject_Add(${ARG_TARGET_NAME}
# Directory Options
SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}"
PREFIX "${CMAKE_CURRENT_BINARY_DIR}"
BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/build"
INSTALL_DIR "${ARG_NATIVE_INSTALL_DIR}"
# Configure Step Options:
CMAKE_COMMAND
${NATIVE_CMAKE_COMMAND}
CMAKE_ARGS
"-DCMAKE_CXX_COMPILER_LAUNCHER=${CMAKE_CXX_COMPILER_LAUNCHER}"
"-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}"
"-DCMAKE_CXX_LINKER_LAUNCHER=${CMAKE_CXX_LINKER_LAUNCHER}"
"-DCMAKE_C_LINKER_LAUNCHER=${CMAKE_C_LINKER_LAUNCHER}"
"-DCMAKE_CXX_FLAGS=${compile_flags}"
"-DCMAKE_C_FLAGS=${compile_flags}"
"-DCMAKE_POLICY_DEFAULT_CMP0069=NEW"
"-DCMAKE_INSTALL_PREFIX=${ARG_NATIVE_INSTALL_DIR}"
"-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}"
${ARG_CMAKE_ARGS}
CMAKE_GENERATOR "${CMAKE_GENERATOR}"
${ARG_NATIVE_SOURCE_SUBDIR}
# Build Step Options:
BUILD_COMMAND
${NATIVE_CMAKE_COMMAND}
--build "${CMAKE_CURRENT_BINARY_DIR}/build"
--config Release
--parallel
-- ${ARG_NATIVE_TARGETS}
# Test Step Options:
TEST_EXCLUDE_FROM_MAIN ON
# Target Options:
EXCLUDE_FROM_ALL ON
)
endfunction()
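In the `YOCTO_AARCH64` branch above, PATH entries that live under the Yocto native sysroot are filtered out before `find_host_program` locates a host cmake. The filtering step, sketched in Python (illustrative helper; the real logic is the CMake `foreach` over `custom_path`, which uses a regex anchor rather than a plain prefix test):

```python
def filter_sysroot_paths(path_env: str, native_sysroot: str):
    """Drop PATH entries under the Yocto native sysroot so host tools
    are found outside the cross-environment."""
    return [p for p in path_env.split(":") if not p.startswith(native_sysroot)]

clean = filter_sysroot_paths(
    "/usr/bin:/opt/oe/sysroots/x86_64-oesdk-linux/usr/bin:/bin",
    "/opt/oe/sysroots/x86_64-oesdk-linux",
)
print(clean)  # ['/usr/bin', '/bin']
```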


@@ -149,11 +149,15 @@ ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at run
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
ie_dependent_option(ENABLE_SYSTEM_PROTOBUF "Use system protobuf" OFF
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND;BUILD_SHARED_LIBS" OFF)
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Use system flatbuffers" OFF
"ENABLE_OV_TF_LITE_FRONTEND;BUILD_SHARED_LIBS" OFF)
ie_dependent_option(ENABLE_OV_CORE_UNIT_TESTS "Enables OpenVINO core unit tests" ON "ENABLE_TESTS" OFF)
ie_option(ENABLE_OPENVINO_DEBUG "Enable output for OPENVINO_DEBUG statements" OFF)


@@ -760,3 +760,8 @@ Files: \thirdparty\*
Comment: onnx
Copyright: authors
License: Apache
Files: *
Comment: FlatBuffers
Copyright: Copyright 2014-2022 Google Inc. All rights reserved.
License: Apache


@@ -274,6 +274,19 @@ macro(ov_cpack_settings)
set(pytorch_copyright "generic")
endif()
if(ENABLE_OV_TF_LITE_FRONTEND)
set(CPACK_COMPONENT_TENSORFLOW_LITE_DESCRIPTION "OpenVINO TensorFlow Lite Frontend")
set(CPACK_COMPONENT_TENSORFLOW_LITE_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_TENSORFLOW_LITE_PACKAGE_NAME "libopenvino-tensorflow-lite-frontend-${cpack_name_ver}")
# since TF Lite FE is a linkable target, we need to call ldconfig (i.e. `def_triggers`)
set(CPACK_DEBIAN_TENSORFLOW_LITE_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm};${def_triggers}")
ov_debian_add_lintian_suppression(tensorflow_lite
# we have different package name strategy; it suggests libopenvino-tensorflow-lite-frontend202230
"package-name-doesnt-match-sonames")
list(APPEND frontends tensorflow_lite)
set(tensorflow_lite_copyright "generic")
endif()
#
# core_dev: depends on core and frontends (since frontends don't want to provide its own dev packages)
#


@@ -235,6 +235,15 @@ macro(ov_cpack_settings)
set(pytorch_copyright "generic")
endif()
if(ENABLE_OV_TF_LITE_FRONTEND)
set(CPACK_COMPONENT_TENSORFLOW_LITE_DESCRIPTION "OpenVINO TensorFlow Lite Frontend")
set(CPACK_RPM_TENSORFLOW_LITE_PACKAGE_NAME "libopenvino-tensorflow-lite-frontend-${cpack_name_ver}")
set(CPACK_RPM_TENSORFLOW_LITE_POST_INSTALL_SCRIPT_FILE "${def_triggers}")
set(CPACK_RPM_TENSORFLOW_LITE_POST_UNINSTALL_SCRIPT_FILE "${def_triggers}")
_ov_add_package(frontend_packages tensorflow_lite)
set(tensorflow_lite_copyright "generic")
endif()
#
# core_dev: depends on core and frontends (since frontends don't want to provide its own dev packages)
#


@@ -14,6 +14,7 @@
# * `Paddle`: OpenVINO Paddle frontend
# * `PyTorch`: OpenVINO PyTorch frontend
# * `TensorFlow`: OpenVINO TensorFlow frontend
# * `TensorFlowLite`: OpenVINO TensorFlow Lite frontend
#
# If no components are specified, `Runtime` component is provided:
#
@@ -48,6 +49,9 @@
# `openvino::frontend::tensorflow`
# TensorFlow FrontEnd target (optional)
#
# `openvino::frontend::tensorflow_lite`
# TensorFlow Lite FrontEnd target (optional)
#
# Result variables:
# ------
#
@@ -71,6 +75,9 @@
# `OpenVINO_Frontend_TensorFlow_FOUND`
# OpenVINO TensorFlow frontend is available
#
# `OpenVINO_Frontend_TensorFlowLite_FOUND`
# OpenVINO TensorFlow Lite frontend is available
#
# `OpenVINO_Frontend_IR_FOUND`
# OpenVINO IR frontend is available
#
@@ -299,12 +306,14 @@ set(${CMAKE_FIND_PACKAGE_NAME}_Runtime_FOUND ON)
set(${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND @ENABLE_OV_ONNX_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND @ENABLE_OV_PADDLE_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND @ENABLE_OV_TF_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_TensorFlowLite_FOUND @ENABLE_OV_TF_LITE_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND @ENABLE_OV_IR_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_PyTorch_FOUND @ENABLE_OV_PYTORCH_FRONTEND@)
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_ONNX_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_Paddle_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_TensorFlow_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_TensorFlowLite_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_TensorFlowLite_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_IR_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND})
set(${CMAKE_FIND_PACKAGE_NAME}_Frontend_PyTorch_FOUND ${${CMAKE_FIND_PACKAGE_NAME}_PyTorch_FOUND})


@@ -1340,3 +1340,210 @@ PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-------------------------------------------------------------
27. flatbuffers (https://github.com/google/flatbuffers/)
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -140,6 +140,13 @@ LIB_INSTALL_CFG = {
"rpath": LIBS_RPATH,
"binary_dir": OPENVINO_BUILD_DIR,
},
"tensorflow_lite_libs": {
"name": "tensorflow_lite",
"prefix": "libs.tensorflow_lite",
"install_dir": OV_RUNTIME_LIBS_DIR,
"rpath": LIBS_RPATH,
"binary_dir": OPENVINO_BUILD_DIR,
},
}
PY_INSTALL_CFG = {


@@ -93,8 +93,6 @@ add_library(${TARGET_NAME}_dev INTERFACE)
add_library(openvino::runtime::dev ALIAS ${TARGET_NAME}_dev)
target_include_directories(${TARGET_NAME}_dev INTERFACE
$<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/common/transformations/include>
$<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/core/dev_api>
$<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/inference/dev_api>
$<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/common/low_precision_transformations/include>
$<TARGET_PROPERTY:openvino_gapi_preproc,INTERFACE_INCLUDE_DIRECTORIES>)
@@ -102,7 +100,7 @@ target_include_directories(${TARGET_NAME}_dev INTERFACE
target_compile_definitions(${TARGET_NAME}_dev INTERFACE
$<TARGET_PROPERTY:openvino_gapi_preproc,INTERFACE_COMPILE_DEFINITIONS>)
target_link_libraries(${TARGET_NAME}_dev INTERFACE ${TARGET_NAME} openvino::itt openvino::util)
target_link_libraries(${TARGET_NAME}_dev INTERFACE ${TARGET_NAME} openvino::core::dev)
set_ie_threading_interface_for(${TARGET_NAME}_dev)
set_target_properties(${TARGET_NAME}_dev PROPERTIES EXPORT_NAME runtime::dev)


@@ -47,7 +47,13 @@ source_group("include" FILES ${PUBLIC_HEADERS})
add_library(ov_core_dev INTERFACE)
add_library(openvino::core::dev ALIAS ov_core_dev)
target_include_directories(ov_core_dev INTERFACE $<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/core/dev_api>)
target_include_directories(ov_core_dev INTERFACE
$<BUILD_INTERFACE:${OV_CORE_INCLUDE_PATH}>
$<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/core/dev_api>
$<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/frontends/common/include>
$<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/common/transformations/include>)
target_link_libraries(ov_core_dev INTERFACE openvino::itt openvino::util)
set_target_properties(ov_core_dev PROPERTIES EXPORT_NAME core::dev)
openvino_developer_export_targets(COMPONENT core TARGETS openvino::core::dev)
@@ -105,10 +111,7 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
endif()
target_link_options(ngraph_obj ${link_type} "/IGNORE:4217,4286")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
ie_add_compiler_flags(/wd4267)
endif()
ie_add_compiler_flags(/wd4267)
endif()
# some sources are located in ngraph, while headers are in inference_engine_transformations


@@ -1,4 +1,4 @@
# Copyright (C) 2018-2022 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#


@@ -24,7 +24,14 @@ if(ENABLE_OV_IR_FRONTEND)
add_subdirectory(ir)
endif()
if(ENABLE_OV_TF_FRONTEND)
if(ENABLE_OV_TF_FRONTEND OR ENABLE_OV_TF_LITE_FRONTEND)
add_subdirectory(tensorflow_common)
endif()
if(ENABLE_OV_TF_FRONTEND)
add_subdirectory(tensorflow)
endif()
if (ENABLE_OV_TF_LITE_FRONTEND)
add_subdirectory(tensorflow_lite)
endif()

View File

@@ -143,16 +143,18 @@ private:
{".xml", {"ir", "ir"}},
{".onnx", {"onnx", "onnx"}},
{".pb", {"tf", "tensorflow"}},
{".tflite", {"tflite", "tensorflow_lite"}},
{".pdmodel", {"paddle", "paddle"}},
// {".ts", {"pytorch", "pytorch"}},
};
// List of prioritized frontends.
std::list<FrontEndNames> priority_list = {
{"ir", "ir"},
{"onnx", "onnx"},
{"tf", "tensorflow"},
{"paddle", "paddle"},
};
std::list<FrontEndNames> priority_list = {{"ir", "ir"},
{"onnx", "onnx"},
{"tf", "tensorflow"},
{"tflite", "tensorflow_lite"},
{"paddle", "paddle"},
{"pytorch", "pytorch"}};
if (variants.empty()) {
return nullptr;
}

View File

@@ -45,7 +45,9 @@ void load_static_plugins(std::vector<PluginInfo>& res) {
{"ir", "ir"},
{"onnx", "onnx"},
{"tf", "tensorflow"},
{"tflite", "tensorflow_lite"},
{"paddle", "paddle"},
{"pytorch", "pytorch"},
};
auto it = predefined_frontends.find(factory.m_name);
if (it != predefined_frontends.end()) {

View File

@@ -13,7 +13,7 @@ ov_add_frontend(NAME onnx
PROTOBUF_LITE
SKIP_NCC_STYLE
FILEDESCRIPTION "FrontEnd to load and convert ONNX file format"
LINK_LIBRARIES ngraph::builder openvino::util onnx_common openvino::runtime::dev)
LINK_LIBRARIES ngraph::builder onnx_common openvino::core::dev)
set(ONNX_OPSET_VERSION 17 CACHE INTERNAL "Supported version of ONNX operator set")
target_compile_definitions(${TARGET_NAME} PRIVATE ONNX_OPSET_VERSION=${ONNX_OPSET_VERSION})

View File

@@ -15,6 +15,7 @@
#include "ngraph/log.hpp"
#include "onnx_common/parser.hpp"
#include "onnx_common/utils.hpp"
#include "openvino/util/file_util.hpp"
#include "utils/common.hpp"
#include "utils/onnx_internal.hpp"
@@ -241,7 +242,7 @@ onnx_editor::ONNXModelEditor::ONNXModelEditor(const std::string& model_path, fro
#if defined(OPENVINO_ENABLE_UNICODE_PATH_SUPPORT) && defined(_WIN32)
onnx_editor::ONNXModelEditor::ONNXModelEditor(const std::wstring& model_path, frontend::ExtensionHolder extensions)
: m_model_path{ngraph::file_util::wstring_to_string(model_path)},
: m_model_path{ov::util::wstring_to_string(model_path)},
m_extensions{std::move(extensions)},
m_pimpl{new ONNXModelEditor::Impl{model_path}, [](Impl* impl) {
delete impl;

View File

@@ -5,8 +5,9 @@
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
ie_add_compiler_flags(/wd4305)
endif()
ov_add_frontend(NAME paddle
LINKABLE_FRONTEND
PROTOBUF_LITE
FILEDESCRIPTION "FrontEnd to load and convert PaddlePaddle file format"
LINK_LIBRARIES openvino::util openvino::runtime::dev)
LINK_LIBRARIES openvino::util openvino::core::dev)

View File

@@ -6,4 +6,4 @@ ov_add_frontend(NAME pytorch
LINKABLE_FRONTEND
SHUTDOWN_PROTOBUF
FILEDESCRIPTION "FrontEnd to load and convert TorchScript models from PyTorch"
LINK_LIBRARIES openvino::util openvino::runtime::dev)
LINK_LIBRARIES openvino::util openvino::core::dev)

View File

@@ -5,7 +5,4 @@
ov_add_frontend(NAME tensorflow
LINKABLE_FRONTEND
FILEDESCRIPTION "FrontEnd to load and convert TensorFlow file format"
LINK_LIBRARIES openvino::util openvino::runtime::dev)
set(TARGET_NAME "${FRONTEND_NAME_PREFIX}tensorflow${FRONTEND_NAME_SUFFIX}")
target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::tensorflow_common)
LINK_LIBRARIES openvino::core::dev openvino::frontend::tensorflow_common)

View File

@@ -0,0 +1,3 @@
# do not print messages from TensorFlow
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

View File

@@ -5,17 +5,17 @@ import os
import subprocess
import sys
print(sys.argv)
# do not print messages from TensorFlow
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
if len(sys.argv) < 4:
print("Script[model in pbtxt format], output folder and mark file must be specified as arguments")
print("Script[model in pbtxt format], output folder and mark file must be specified as arguments", str(sys.argv))
exit(1)
gen_script = sys.argv[1]
out_folder = sys.argv[2]
mark_file = sys.argv[3]
print("Processing: {} ".format(gen_script))
if gen_script.endswith('.py'):
subprocess.run([sys.executable, gen_script, out_folder], env=os.environ)
elif gen_script.endswith('.pbtxt'):

View File

@@ -0,0 +1,3 @@
# do not print messages from TensorFlow
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2022 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -33,6 +33,9 @@ public:
std::vector<std::string> get_names() const override {
return m_names;
}
void set_names(const std::vector<std::string>& names) {
m_names = names;
}
private:
const ov::frontend::InputModel& m_input_model;

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2022 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -9,18 +9,21 @@ file(GLOB_RECURSE LIBRARY_SRC ${root_dir}/src/*.cpp)
file(GLOB_RECURSE LIBRARY_HEADERS ${root_dir}/include/*.hpp)
add_library(${TARGET_NAME} STATIC ${LIBRARY_SRC} ${LIBRARY_HEADERS})
add_library(openvino::frontend::tensorflow_common ALIAS ${TARGET_NAME})
if(NOT BUILD_SHARED_LIBS)
target_compile_definitions(${TARGET_NAME} PRIVATE OPENVINO_STATIC_LIBRARY)
endif()
target_link_libraries(${TARGET_NAME} PRIVATE openvino::util)
add_library(openvino::frontend::tensorflow_common ALIAS ${TARGET_NAME})
set_target_properties(${TARGET_NAME} PROPERTIES
INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})
target_link_libraries(${TARGET_NAME} PRIVATE openvino::util
PUBLIC openvino::core::dev)
target_include_directories(${TARGET_NAME}
PUBLIC
$<BUILD_INTERFACE:${root_dir}/include>
PRIVATE
${root_dir}/src
$<TARGET_PROPERTY:openvino::runtime::dev,INTERFACE_INCLUDE_DIRECTORIES>)
PUBLIC $<BUILD_INTERFACE:${root_dir}/include>
PRIVATE ${root_dir}/src)
add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME})
ov_install_static_lib(${TARGET_NAME} ${OV_CPACK_COMP_CORE})

View File

@@ -37,7 +37,7 @@ OutputVector translate_bias_add_op(const NodeContext& node) {
auto value_rank = value_shape.rank().get_length();
std::vector<int64_t> axes_unsqueeze;
for (size_t dim_ind = 0; dim_ind < value_rank; ++dim_ind) {
for (int64_t dim_ind = 0; dim_ind < value_rank; ++dim_ind) {
if (dim_ind != 1) {
axes_unsqueeze.push_back(dim_ind);
}

View File

@@ -52,6 +52,7 @@ template OutputVector translate_binary_op<Multiply>(const NodeContext& node);
template OutputVector translate_binary_op<Mod>(const NodeContext& node);
template OutputVector translate_binary_op<NotEqual>(const NodeContext& node);
template OutputVector translate_binary_op<Power>(const NodeContext& node);
template OutputVector translate_binary_op<PRelu>(const NodeContext& node);
template OutputVector translate_binary_op<Divide>(const NodeContext& node);
template OutputVector translate_binary_op<SquaredDifference>(const NodeContext& node);
template OutputVector translate_binary_op<Subtract>(const NodeContext& node);

View File

@@ -19,7 +19,7 @@ OutputVector translate_einsum_op(const NodeContext& node) {
OutputVector inputs;
for (size_t input_ind = 0; input_ind < node.get_input_size(); ++input_ind) {
inputs.push_back(node.get_input(input_ind));
inputs.push_back(node.get_input(static_cast<int>(input_ind)));
}
auto einsum = make_shared<Einsum>(inputs, equation);

View File

@@ -43,6 +43,7 @@ template OutputVector translate_unary_op<Cosh>(const NodeContext& node);
template OutputVector translate_unary_op<Erf>(const NodeContext& node);
template OutputVector translate_unary_op<Exp>(const NodeContext& node);
template OutputVector translate_unary_op<Floor>(const NodeContext& node);
template OutputVector translate_unary_op<HSwish>(const NodeContext& node);
template OutputVector translate_unary_op<opset10::IsFinite>(const NodeContext& node);
template OutputVector translate_unary_op<opset10::IsInf>(const NodeContext& node);
template OutputVector translate_unary_op<opset10::IsNaN>(const NodeContext& node);

View File

@@ -174,7 +174,7 @@ static void convert_binary_to_default_order(const shared_ptr<Node>& binary,
// instead of a transpose
shared_ptr<Node> new_node;
auto left_rank = get_static_rank(left);
if (left_rank < perm_to_def.size() && left.get_partial_shape().is_static()) {
if (left_rank < static_cast<int64_t>(perm_to_def.size()) && left.get_partial_shape().is_static()) {
auto left_shape = left.get_shape();
left_shape.insert(left_shape.begin(), perm_to_def.size() - left_rank, 1);

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2022 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -0,0 +1,9 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
add_subdirectory(src)
if(ENABLE_TESTS)
add_subdirectory(tests)
endif()

View File

@@ -0,0 +1,37 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "openvino/frontend/extension/conversion.hpp"
#include "openvino/frontend/frontend.hpp"
#include "openvino/frontend/tensorflow_lite/visibility.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class TENSORFLOW_LITE_API ConversionExtension : public ConversionExtensionBase {
public:
using Ptr = std::shared_ptr<ConversionExtension>;
ConversionExtension() = delete;
ConversionExtension(const std::string& op_type, const ov::frontend::CreatorFunction& converter)
: ConversionExtensionBase(op_type),
m_converter(converter) {}
const ov::frontend::CreatorFunction& get_converter() const {
return m_converter;
}
~ConversionExtension() override;
private:
ov::frontend::CreatorFunction m_converter;
};
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov

View File

@@ -0,0 +1,18 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "openvino/frontend/extension/op.hpp"
#include "openvino/frontend/tensorflow_lite/extension/conversion.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
template <typename OVOpType = void>
using OpExtension = ov::frontend::OpExtensionBase<ov::frontend::tensorflow_lite::ConversionExtension, OVOpType>;
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov

View File

@@ -0,0 +1,79 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <functional>
#include <map>
#include "openvino/core/any.hpp"
#include "openvino/frontend/extension/decoder_transformation.hpp"
#include "openvino/frontend/extension/telemetry.hpp"
#include "openvino/frontend/frontend.hpp"
#include "openvino/frontend/tensorflow_lite/extension/conversion.hpp"
#include "openvino/frontend/tensorflow_lite/node_context.hpp"
#include "openvino/frontend/tensorflow_lite/visibility.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
using CreatorFunction = std::function<ov::OutputVector(const ov::frontend::tensorflow_lite::NodeContext&)>;
using TranslatorDictionaryType = std::map<std::string, CreatorFunction>;
class TENSORFLOW_LITE_API FrontEnd : public ov::frontend::FrontEnd {
public:
FrontEnd();
/// \brief Completely convert the model
/// \return fully converted ov Model
std::shared_ptr<ov::Model> convert(const ov::frontend::InputModel::Ptr& model) const override;
/// \brief Completely convert the remaining, not yet converted part of a model.
/// \param partiallyConverted partially converted ov Model
void convert(const std::shared_ptr<Model>& partiallyConverted) const override;
/// \brief Convert only those parts of the model that can be converted leaving others
/// as-is. Converted parts are not normalized by additional transformations; normalize
/// function or another form of convert function should be called to finalize the
/// conversion process.
/// \param model Input model
/// \return partially converted ov Model
std::shared_ptr<Model> convert_partially(const ov::frontend::InputModel::Ptr& model) const override;
/// \brief Convert operations with one-to-one mapping with decoding nodes.
/// Each decoding node is an ov node representing a single TFLite operation node with
/// all attributes represented in FW-independent way.
/// \param model Input model
/// \return ov Model after decoding
std::shared_ptr<Model> decode(const ov::frontend::InputModel::Ptr& model) const override;
/// \brief Runs normalization passes on function that was loaded with partial conversion
/// \param function partially converted ov Model
void normalize(const std::shared_ptr<ov::Model>& function) const override;
/// \brief Gets name of this FrontEnd. Can be used by clients
std::string get_name() const override {
return "tflite";
}
void add_extension(const std::shared_ptr<ov::Extension>& extension) override;
protected:
/// \brief Check if FrontEndTensorflowLite can recognize model from given parts
bool supported_impl(const std::vector<ov::Any>& variants) const override;
ov::frontend::InputModel::Ptr load_impl(const std::vector<ov::Any>& variants) const override;
void translate_graph(const ov::frontend::InputModel::Ptr& model,
bool fail_fast,
bool no_conversion,
std::shared_ptr<ov::Model>& ng_function) const;
TelemetryExtension::Ptr m_telemetry;
std::vector<DecoderTransformationExtension::Ptr> m_transformation_extensions;
std::vector<ConversionExtensionBase::Ptr> m_conversion_extensions;
TranslatorDictionaryType m_op_translators;
};
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov

View File

@@ -0,0 +1,67 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <functional>
#include <map>
#include <memory>
#include "openvino/core/any.hpp"
#include "openvino/frontend/decoder.hpp"
#include "openvino/frontend/node_context.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
/// Keep necessary data for a single node in the original FW graph to facilitate
/// conversion process in the rules code.
class TENSORFLOW_LITE_API NodeContext : public ov::frontend::NodeContext {
public:
using Ptr = std::shared_ptr<NodeContext>;
NodeContext(const std::shared_ptr<DecoderBase>& decoder, const OutputVector& inputs)
: ov::frontend::NodeContext(decoder->get_op_type()),
m_decoder(decoder),
m_inputs(inputs) {}
/// Detects whether the node has an input at the given port index
bool has_input(const size_t& port_index) const {
return port_index < m_inputs.size();
}
Output<Node> get_input(int port_index) const override {
return m_inputs.at(port_index);
}
OutputVector get_inputs() const {
return m_inputs;
}
size_t get_input_size() const override {
return m_inputs.size();
}
/// \brief Get a node name
const std::string& get_name() const override {
return m_decoder->get_op_name();
}
/// \brief Get a decoder
std::shared_ptr<DecoderBase> get_decoder() const {
return m_decoder;
}
ov::Any get_attribute_as_any(const std::string& name) const override {
auto res = m_decoder->get_attribute(name);
return res;
}
private:
std::shared_ptr<DecoderBase> m_decoder;
const OutputVector& m_inputs;
};
using CreatorFunction = std::function<ov::OutputVector(const ov::frontend::tensorflow_lite::NodeContext&)>;
using TranslatorDictionaryType = std::map<std::string, CreatorFunction>;
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov

View File

@@ -0,0 +1,20 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "openvino/frontend/exception.hpp"
#ifdef OPENVINO_STATIC_LIBRARY
# define TENSORFLOW_LITE_API
# define TENSORFLOW_LITE_C_API
#else
# ifdef openvino_tensorflow_lite_frontend_EXPORTS
# define TENSORFLOW_LITE_API OPENVINO_CORE_EXPORTS
# define TENSORFLOW_LITE_C_API OPENVINO_EXTERN_C OPENVINO_CORE_EXPORTS
# else
# define TENSORFLOW_LITE_API OPENVINO_CORE_IMPORTS
# define TENSORFLOW_LITE_C_API OPENVINO_EXTERN_C OPENVINO_CORE_IMPORTS
# endif // openvino_tensorflow_lite_frontend_EXPORTS
#endif // OPENVINO_STATIC_LIBRARY

View File

@@ -0,0 +1,12 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
ie_add_compiler_flags(/wd4267)
endif()
ov_add_frontend(NAME tensorflow_lite
LINKABLE_FRONTEND
FILEDESCRIPTION "FrontEnd to load and convert TensorFlow Lite file format"
LINK_LIBRARIES openvino::core::dev openvino::frontend::tensorflow_common)

View File

@@ -0,0 +1,89 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "decoder_flatbuffer.h"
#include "schema_generated.h"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
size_t DecoderFlatBuffer::get_input_size() const {
return m_input_info.size();
}
void DecoderFlatBuffer::get_input_node(size_t input_port_idx,
std::string& producer_name,
size_t& producer_output_port_index) const {
const auto inputs = m_node_def->inputs();
FRONT_END_GENERAL_CHECK(inputs->size() > input_port_idx,
"Input port index is out of range for node ",
get_op_name(),
". Requested input index: ",
input_port_idx,
". Number of inputs: ",
inputs->size());
auto input_tensor_idx = (*inputs)[input_port_idx];
auto tensor = m_input_info.at(input_port_idx).tensor;
std::string name = (*tensor).name()->str();
producer_name = name;
producer_output_port_index = input_tensor_idx;
}
const std::string& DecoderFlatBuffer::get_op_type() const {
return m_type;
}
const std::string& DecoderFlatBuffer::get_op_name() const {
return m_name;
}
size_t DecoderFlatBuffer::get_output_size() const {
return m_node_def->outputs()->size();
}
std::string DecoderFlatBuffer::get_input_tensor_name(size_t idx) const {
FRONT_END_GENERAL_CHECK(idx < get_input_size(), "Requested input is out-of-range");
return m_input_info.at(idx).tensor->name()->str();
}
std::string DecoderFlatBuffer::get_output_tensor_name(size_t idx) const {
FRONT_END_GENERAL_CHECK(idx < get_output_size(), "Requested output is out-of-range");
return m_output_info.at(idx).tensor->name()->str();
}
std::shared_ptr<ov::frontend::tensorflow_lite::TensorLitePlace> DecoderFlatBuffer::decode_input_tensor(
size_t idx,
const InputModel& model) const {
FRONT_END_GENERAL_CHECK(idx < get_input_size(), "Requested input is out-of-range");
return decode_tensor(m_input_info.at(idx), model);
}
std::shared_ptr<ov::frontend::tensorflow_lite::TensorLitePlace> DecoderFlatBuffer::decode_output_tensor(
size_t idx,
const InputModel& model) const {
FRONT_END_GENERAL_CHECK(idx < get_output_size(), "Requested output is out-of-range");
return decode_tensor(m_output_info.at(idx), model);
}
std::shared_ptr<ov::frontend::tensorflow_lite::TensorLitePlace> DecoderFlatBuffer::decode_tensor(
const ov::frontend::tensorflow_lite::TensorInfo& tensor_info,
const InputModel& model) const {
const auto tensor = tensor_info.tensor;
std::vector<std::string> names = {tensor->name()->str()};
return std::make_shared<ov::frontend::tensorflow_lite::TensorLitePlace>(
model,
ov::frontend::tensorflow_lite::get_ov_shape(tensor->shape()),
ov::frontend::tensorflow_lite::get_ov_type(tensor->type()),
names,
ov::frontend::tensorflow_lite::get_quantization(tensor->quantization()),
tensor_info.input_idx,
tensor_info.output_idx,
(tensor_info.buffer->data() ? tensor_info.buffer->data()->data() : nullptr));
}
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov

View File

@@ -0,0 +1,68 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <map>
#include <memory>
#include <string>
#include <vector>
#include "tensor_lite_place.hpp"
#include "graph_iterator_flatbuffer.hpp"
#include "openvino/frontend/tensorflow_lite/visibility.hpp"
#include "openvino/frontend/decoder.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class TensorLitePlace;
struct TensorInfo;
class DecoderFlatBuffer : public ov::frontend::DecoderBase {
public:
explicit DecoderFlatBuffer(const tflite::Operator* node_def,
const std::string& type,
const std::string& name,
std::map<size_t, ov::frontend::tensorflow_lite::TensorInfo> input_info,
std::map<size_t, ov::frontend::tensorflow_lite::TensorInfo> output_info)
: m_node_def(node_def), m_type(type), m_name(name), m_input_info(input_info), m_output_info(output_info) {}
template<class Ret, class Class>
Ret get_attribute(Ret (Class::*member)() const) const {
const auto opts = m_node_def->builtin_options_as<Class>();
FRONT_END_GENERAL_CHECK(opts != nullptr, "Chosen Builtin Option is not accessible for this node");
return (opts->*member)();
}
ov::Any get_attribute(const std::string& name) const override {
return {};
}
size_t get_input_size() const override;
size_t get_output_size() const;
void get_input_node(size_t input_port_idx,
std::string& producer_name,
size_t& producer_output_port_index) const override;
std::string get_output_tensor_name(size_t idx) const;
std::string get_input_tensor_name(size_t idx) const;
const std::string& get_op_type() const override;
const std::string& get_op_name() const override;
std::shared_ptr<ov::frontend::tensorflow_lite::TensorLitePlace> decode_input_tensor(size_t idx, const InputModel& model) const;
std::shared_ptr<ov::frontend::tensorflow_lite::TensorLitePlace> decode_output_tensor(size_t idx, const InputModel& model) const;
private:
std::shared_ptr<ov::frontend::tensorflow_lite::TensorLitePlace> decode_tensor(
const ov::frontend::tensorflow_lite::TensorInfo& tensor_info, const InputModel& model) const;
const tflite::Operator* m_node_def;
std::string m_type, m_name;
std::map<size_t, ov::frontend::tensorflow_lite::TensorInfo> m_input_info, m_output_info;
};
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov

View File

@@ -0,0 +1,87 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <map>
#include <memory>
#include <string>
#include <utility>
#include "openvino/core/any.hpp"
#include "openvino/frontend/decoder.hpp"
#include "openvino/frontend/tensorflow_lite/visibility.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class DecoderMap : public ov::frontend::DecoderBase {
public:
DecoderMap(std::shared_ptr<ov::frontend::DecoderBase> decoder,
const std::map<std::string, ov::Any>& attrs,
bool empty_name = false)
: ov::frontend::DecoderBase(),
m_decoder(std::move(decoder)),
m_attrs(attrs),
m_empty_name(empty_name) {}
DecoderMap(std::shared_ptr<ov::frontend::DecoderBase> decoder,
const std::map<std::string, ov::Any>& attrs,
std::string type,
bool empty_name = false)
: ov::frontend::DecoderBase(),
m_decoder(std::move(decoder)),
m_attrs(attrs),
m_type(type),
m_empty_name(empty_name) {}
/// \brief Get attribute value by name
///
/// \param name Attribute name
/// \return Attribute value wrapped in ov::Any; the check fails if the attribute does not exist
ov::Any get_attribute(const std::string& name) const override {
FRONT_END_GENERAL_CHECK(m_attrs.count(name), "DecoderMap was requested attribute that doesn't exist: ", name);
return m_attrs.at(name);
}
/// \brief Get a number of inputs
size_t get_input_size() const override {
return m_decoder->get_input_size();
}
/// \brief Get a producer name and its output port index
///
/// \param input_port_idx Input port index by which data is consumed
/// \param producer_name A producer name
/// \param producer_output_port_index Output port index from which data is generated
void get_input_node(size_t input_port_idx,
std::string& producer_name,
size_t& producer_output_port_index) const override {
m_decoder->get_input_node(input_port_idx, producer_name, producer_output_port_index);
}
/// \brief Get operation type
const std::string& get_op_type() const override {
if (m_type.empty())
return m_decoder->get_op_type();
return m_type;
}
/// \brief Get node name
const std::string& get_op_name() const override {
return m_empty_name ? empty_name : m_decoder->get_op_name();
}
/// \brief Destructor
~DecoderMap() = default;
private:
std::map<std::string, ov::Any> m_attrs;
std::shared_ptr<ov::frontend::DecoderBase> m_decoder;
std::string m_type;
const std::string empty_name;
bool m_empty_name;
};
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov

View File

@@ -0,0 +1,9 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/tensorflow_lite/extension/conversion.hpp"
using namespace ov::frontend::tensorflow_lite;
ConversionExtension::~ConversionExtension() = default;

View File

@@ -0,0 +1,297 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/tensorflow_lite/frontend.hpp"
#include "graph_iterator_flatbuffer.hpp"
#include "input_model.hpp"
#include "op/op_translation_utils.hpp"
#include "op_table.hpp"
#include "openvino/frontend/tensorflow_lite/extension/op.hpp"
#include "openvino/util/common_util.hpp"
#include "pass/transpose_sinking.hpp"
#include "so_extension.hpp"
#include "tensor_lite_place.hpp"
#include "tf_framework_node.hpp"
#include "transformations/common_optimizations/transpose_sinking.hpp"
using namespace ov;
using namespace ov::frontend::tensorflow_lite;
namespace {
void translate_framework_node(const std::shared_ptr<ov::frontend::tensorflow::FrameworkNode>& node,
const ov::frontend::tensorflow_lite::TranslatorDictionaryType& op_translators) {
auto type = node->get_op_type();
const auto& TRANSLATE_OP_MAP = op_translators;
auto translator_it = TRANSLATE_OP_MAP.find(type);
FRONT_END_OP_CONVERSION_CHECK(translator_it != TRANSLATE_OP_MAP.end(), "No translator found for ", type, " node.");
ov::OutputVector ov_inputs = node->input_values();
ov::frontend::tensorflow_lite::NodeContext node_ctx(node->get_decoder(), ov_inputs);
auto new_node_outputs = translator_it->second(node_ctx);
ov::frontend::tensorflow_lite::op::set_output_names(node_ctx, new_node_outputs);
auto new_output = new_node_outputs.begin();
auto old_outputs = node->outputs();
auto old_output = old_outputs.begin();
for (; new_output != new_node_outputs.end() && old_output != old_outputs.end(); ++old_output, ++new_output) {
old_output->replace(*new_output);
apply_quantization(*new_output);
}
}
} // namespace
FrontEnd::FrontEnd() {
m_op_translators = ov::frontend::tensorflow_lite::op::get_supported_ops();
}
/// \brief Check if FrontEndTensorflowLite can recognize model from given parts
bool FrontEnd::supported_impl(const std::vector<ov::Any>& variants) const {
if (variants.size() != 1)
return false;
if (variants[0].is<std::string>()) {
std::string suffix = ".tflite";
std::string model_path = variants[0].as<std::string>();
if (ov::util::ends_with(model_path, suffix.c_str())) {
return true;
}
}
#if defined(OPENVINO_ENABLE_UNICODE_PATH_SUPPORT) && defined(_WIN32)
else if (variants[0].is<std::wstring>()) {
std::wstring suffix = L".tflite";
std::wstring model_path = variants[0].as<std::wstring>();
if (ov::util::ends_with(model_path, suffix)) {
return true;
}
}
#endif
return false;
}
ov::frontend::InputModel::Ptr FrontEnd::load_impl(const std::vector<ov::Any>& variants) const {
if (variants.size() == 1) {
if (variants[0].is<std::string>()) {
std::string suffix = ".tflite";
std::string model_path = variants[0].as<std::string>();
if (ov::util::ends_with(model_path, suffix.c_str())) {
return std::make_shared<tensorflow_lite::InputModel>(
std::make_shared<GraphIteratorFlatBuffer>(model_path),
m_telemetry);
}
}
#if defined(OPENVINO_ENABLE_UNICODE_PATH_SUPPORT) && defined(_WIN32)
else if (variants[0].is<std::wstring>()) {
std::wstring suffix = L".tflite";
std::wstring model_path = variants[0].as<std::wstring>();
if (ov::util::ends_with(model_path, suffix)) {
return std::make_shared<tensorflow_lite::InputModel>(
std::make_shared<GraphIteratorFlatBuffer>(model_path),
m_telemetry);
}
}
#endif
}
return nullptr;
}
std::shared_ptr<ov::Model> FrontEnd::convert(const ov::frontend::InputModel::Ptr& model) const {
std::shared_ptr<ov::Model> ov_model;
if (!m_transformation_extensions.empty()) {
auto ov_model = decode(model);
ov::pass::Manager manager;
for (const auto& transformation : m_transformation_extensions) {
transformation->register_pass(manager);
}
manager.run_passes(ov_model);
convert(ov_model);
return ov_model;
}
translate_graph(model, true, false, ov_model);
normalize(ov_model);
for (const auto& node : ov_model->get_ordered_ops()) {
if (const auto& fw_node = ov::as_type_ptr<ov::frontend::tensorflow::FrameworkNode>(node)) {
auto op_type = fw_node->get_decoder()->get_op_type();
auto op_name = fw_node->get_decoder()->get_op_name();
FRONT_END_OP_CONVERSION_CHECK(false,
"The translation is incomplete due to operation ",
op_name,
" of type ",
op_type);
}
}
return ov_model;
}
void FrontEnd::convert(const std::shared_ptr<ov::Model>& partiallyConverted) const {
for (const auto& node : partiallyConverted->get_ordered_ops()) {
if (ov::is_type<ov::frontend::tensorflow::FrameworkNode>(node)) {
translate_framework_node(std::dynamic_pointer_cast<ov::frontend::tensorflow::FrameworkNode>(node),
m_op_translators);
}
}
for (const auto& result : partiallyConverted->get_results()) {
result->validate_and_infer_types();
}
normalize(partiallyConverted);
}
std::shared_ptr<ov::Model> FrontEnd::convert_partially(const ov::frontend::InputModel::Ptr& model) const {
if (!m_transformation_extensions.empty()) {
auto function = decode(model);
ov::pass::Manager manager;
for (const auto& transformation : m_transformation_extensions) {
transformation->register_pass(manager);
}
manager.run_passes(function);
convert(function);
return function;
}
std::shared_ptr<ov::Model> f;
translate_graph(model, false, false, f);
normalize(f);
return f;
}
void FrontEnd::translate_graph(const InputModel::Ptr& model,
bool fail_fast,
bool no_conversion,
std::shared_ptr<ov::Model>& ov_function) const {
const auto& model_lite = std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::InputModel>(model);
FRONT_END_GENERAL_CHECK(model_lite, "nullptr for InputModel is given for translation into OV Model");
const auto& translate_map =
no_conversion ? ov::frontend::tensorflow_lite::TranslatorDictionaryType{} : m_op_translators;
auto all_tensor_values = model_lite->get_tensor_values();
auto all_tensor_places = model_lite->get_tensor_places();
for (auto& value : all_tensor_values) {
auto& output = value.second;
FRONT_END_GENERAL_CHECK(ov::is_type<ov::opset1::Constant>(output.get_node_shared_ptr()),
"Unexpected constant data configuration at the beginning of graph translation");
const auto& input_tensor = all_tensor_places.at(value.first);
FRONT_END_GENERAL_CHECK(input_tensor != nullptr, "Inputs must be TensorPlaces");
input_tensor->translate(output, !no_conversion);
}
// inputs
ParameterVector parameters;
parameters.reserve(model_lite->get_inputs().size());
for (const auto& input : model_lite->get_inputs()) {
const auto& input_tensor = std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::TensorLitePlace>(input);
FRONT_END_GENERAL_CHECK(
input_tensor != nullptr,
"Inputs of ov::frontend::tensorflow_lite::InputModel must be TensorLitePlace instances");
const auto name = input_tensor->get_names()[0];
auto parameter = std::make_shared<ov::opset1::Parameter>(input_tensor->get_element_type(),
input_tensor->get_partial_shape());
parameter->set_friendly_name(name);
parameters.push_back(parameter);
all_tensor_values[name] = parameter->output(0);
input_tensor->translate(all_tensor_values[name], !no_conversion);
}
// operations
for (const auto& op_place : model_lite->get_op_places()) {
const auto& decoder = std::dynamic_pointer_cast<tensorflow_lite::DecoderFlatBuffer>(op_place->get_decoder());
FRONT_END_GENERAL_CHECK(decoder != nullptr, "Decoder must be DecoderFlatBuffer or its child");
ov::OutputVector inputs(decoder->get_input_size());
for (size_t i = 0; i < decoder->get_input_size(); ++i) {
auto name = decoder->get_input_tensor_name(i);
FRONT_END_GENERAL_CHECK(all_tensor_values.find(name) != all_tensor_values.end(),
"Unknown tensor name: ",
name,
".");
inputs[i] = all_tensor_values[name];
}
const auto& out_size = decoder->get_output_size();
ov::OutputVector ov_outputs(out_size);
try {
FRONT_END_OP_CONVERSION_CHECK(translate_map.count(decoder->get_op_type()),
"No translator found for " + decoder->get_op_type() + " node.");
auto op_fun = &(translate_map.at(decoder->get_op_type()));
ov::frontend::tensorflow_lite::NodeContext node_context(decoder, inputs);
ov_outputs = (*op_fun)(node_context);
} catch (...) {
if (fail_fast) {
if (m_telemetry && translate_map.count(decoder->get_op_type()) == 0) {
m_telemetry->send_event("error_cause", "tflite_" + decoder->get_op_type());
}
throw;
} else {
auto operation = std::make_shared<ov::frontend::tensorflow::FrameworkNode>(decoder, inputs, out_size);
operation->set_friendly_name(decoder->get_op_name());
ov_outputs = operation->outputs();
}
}
for (size_t i = 0; i < out_size; ++i) {
const auto& name = decoder->get_output_tensor_name(i);
all_tensor_values[name] = ov_outputs[i];
all_tensor_places[name]->translate(all_tensor_values[name], !no_conversion);
}
}
// outputs
ResultVector results;
results.reserve(model_lite->get_outputs().size());
for (const auto& output : model_lite->get_outputs()) {
const auto& tensor = std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::TensorLitePlace>(output);
FRONT_END_GENERAL_CHECK(
tensor != nullptr,
"Outputs of ov::frontend::tensorflow_lite::InputModel must be TensorLitePlace instances");
const auto name = tensor->get_names()[0];
const auto& output_value = all_tensor_values[name];
const auto& result = std::make_shared<ov::opset1::Result>(output_value);
auto input = result->output(0);
tensor->translate(input, !no_conversion);
results.push_back(result);
}
auto model_name = "TensorFlow_Lite_Frontend_IR";
ov_function = std::make_shared<ov::Model>(results, parameters, model_name);
}
std::shared_ptr<ov::Model> FrontEnd::decode(const InputModel::Ptr& model) const {
std::shared_ptr<ov::Model> ov_model;
translate_graph(model, false, true, ov_model);
return ov_model;
}
void FrontEnd::normalize(const std::shared_ptr<ov::Model>& function) const {
ov::pass::Manager manager;
// TODO: register i8 weights normalization after implemented
// TODO: remove custom transpose sinking after common TS ready
manager.register_pass<ov::pass::TransposeSinking>();
manager.register_pass<ov::frontend::tensorflow::pass::TransposeSinking>();
manager.run_passes(function);
}
void FrontEnd::add_extension(const std::shared_ptr<ov::Extension>& extension) {
if (auto telemetry = std::dynamic_pointer_cast<TelemetryExtension>(extension)) {
m_telemetry = telemetry;
} else if (auto transformation = std::dynamic_pointer_cast<DecoderTransformationExtension>(extension)) {
m_transformation_extensions.push_back(transformation);
} else if (const auto& so_ext = std::dynamic_pointer_cast<ov::detail::SOExtension>(extension)) {
add_extension(so_ext->extension());
m_extensions.push_back(so_ext);
} else if (auto common_conv_ext = std::dynamic_pointer_cast<ov::frontend::ConversionExtension>(extension)) {
m_conversion_extensions.push_back(common_conv_ext);
m_op_translators[common_conv_ext->get_op_type()] = [=](const NodeContext& context) {
return common_conv_ext->get_converter()(context);
};
} else if (const auto& tensorflow_conv_ext =
std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::ConversionExtension>(extension)) {
m_conversion_extensions.push_back(tensorflow_conv_ext);
m_op_translators[tensorflow_conv_ext->get_op_type()] = [=](const NodeContext& context) {
return tensorflow_conv_ext->get_converter()(context);
};
}
}


@@ -0,0 +1,88 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <graph_iterator_flatbuffer.hpp>
using namespace ov::frontend::tensorflow_lite;
#ifdef OPENVINO_ENABLE_UNICODE_PATH_SUPPORT
GraphIteratorFlatBuffer::GraphIteratorFlatBuffer(const std::wstring& path)
: GraphIteratorFlatBuffer(ov::util::wstring_to_string(path)) {}
#endif // OPENVINO_ENABLE_UNICODE_PATH_SUPPORT
GraphIteratorFlatBuffer::GraphIteratorFlatBuffer(const std::string& path) {
std::ifstream model_file;
model_file.open(path, std::ios::binary | std::ios::in);
FRONT_END_GENERAL_CHECK(model_file && model_file.is_open(), "Model file does not exist: ", path);
model_file.seekg(0, std::ios::end);
auto length = model_file.tellg();
model_file.seekg(0, std::ios::beg);
auto data = std::shared_ptr<char>(new char[length], std::default_delete<char[]>());
model_file.read(data.get(), length);
model_file.close();
// Aliasing constructor: m_model shares ownership of the raw buffer, so the
// flatbuffer data stays alive (and is freed) together with the model.
m_model = std::shared_ptr<tflite::Model>(data, tflite::GetMutableModel(data.get()));
const auto subgraphs = m_model->subgraphs();
FRONT_END_GENERAL_CHECK(subgraphs->size() == 1,
"Number of sub-graphs in the model is ",
subgraphs->size(),
". Supported number of sub-graphs is 1.");
const auto graph = *subgraphs->begin();
const auto operators = graph->operators();
m_nodes = {operators->begin(), operators->end()};
}
std::shared_ptr<DecoderFlatBuffer> GraphIteratorFlatBuffer::get_decoder() const {
auto inputs_vec = (*m_model->subgraphs()->begin())->inputs();
auto outputs_vec = (*m_model->subgraphs()->begin())->outputs();
auto inputs = std::set<int32_t>{inputs_vec->begin(), inputs_vec->end()};
auto outputs = std::set<int32_t>{outputs_vec->begin(), outputs_vec->end()};
auto buffers = m_model->buffers();
auto tensors = m_model->subgraphs()->begin()->tensors();
std::map<size_t, TensorInfo> input_info = {}, output_info = {};
size_t i = 0;
for (auto input : *m_nodes[node_index]->inputs()) {
if (input == -1) {
continue;
}
auto buffer = (*buffers)[(*tensors)[input]->buffer()];
auto is_input = inputs.find(input) != inputs.end();
int64_t input_idx =
!is_input ? -1 : std::find(inputs_vec->begin(), inputs_vec->end(), input) - inputs_vec->begin();
auto is_output = outputs.find(input) != outputs.end();
int64_t output_idx =
!is_output ? -1 : std::find(outputs_vec->begin(), outputs_vec->end(), input) - outputs_vec->begin();
input_info[i++] = TensorInfo{input_idx, output_idx, (*tensors)[input], buffer};
}
i = 0;
// TODO: if m_nodes[node_index]->intermediates() is non-empty, represent the node as a sub-graph in the Decoder
for (auto output : *m_nodes[node_index]->outputs()) {
auto buffer = (*buffers)[(*tensors)[output]->buffer()];
auto is_output = outputs.find(output) != outputs.end();
int64_t output_idx =
!is_output ? -1 : std::find(outputs_vec->begin(), outputs_vec->end(), output) - outputs_vec->begin();
output_info[i++] = TensorInfo{-1, output_idx, (*tensors)[output], buffer};
}
auto op_codes = m_model->operator_codes();
auto operator_code = (*op_codes)[m_nodes[node_index]->opcode_index()];
std::string type;
if (operator_code->deprecated_builtin_code() <
tflite::BuiltinOperator::BuiltinOperator_PLACEHOLDER_FOR_GREATER_OP_CODES) {
type = tflite::EnumNamesBuiltinOperator()[operator_code->deprecated_builtin_code()];
} else {
type = tflite::EnumNamesBuiltinOperator()[operator_code->builtin_code()];
}
return std::make_shared<DecoderFlatBuffer>(m_nodes[node_index],
type,
std::to_string(node_index),
input_info,
output_info);
}


@@ -0,0 +1,65 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <fstream>
#include "decoder_flatbuffer.h"
#include "openvino/frontend/exception.hpp"
#include "openvino/util/file_util.hpp"
#include "schema_generated.h"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class DecoderFlatBuffer;
struct TensorInfo {
int64_t input_idx, output_idx;
const tflite::Tensor* tensor;
const tflite::Buffer* buffer;
};
class GraphIteratorFlatBuffer {
size_t node_index = 0;
std::vector<const tflite::Operator*> m_nodes;
std::shared_ptr<tflite::Model> m_model;
public:
explicit GraphIteratorFlatBuffer(const std::string& path);
#ifdef OPENVINO_ENABLE_UNICODE_PATH_SUPPORT
explicit GraphIteratorFlatBuffer(const std::wstring& path);
#endif
using Ptr = std::shared_ptr<GraphIteratorFlatBuffer>;
~GraphIteratorFlatBuffer() = default;
/// Set iterator to the start position
void reset() {
node_index = 0;
}
size_t size() const {
return m_nodes.size();
}
/// Moves to the next node in the graph
void next() {
node_index++;
}
bool is_end() const {
return node_index >= m_nodes.size();
}
/// Return Decoder for the current node that iterator points to
std::shared_ptr<ov::frontend::tensorflow_lite::DecoderFlatBuffer> get_decoder() const;
};
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,389 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "input_model.hpp"
#include <iterator>
#include <queue>
#include "openvino/frontend/exception.hpp"
#include "openvino/opsets/opset10.hpp"
#include "openvino/util/log.hpp"
#include "tensor_lite_place.hpp"
#include "utils.hpp"
using namespace ov::frontend::tensorflow;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class InputModel::InputModelTFLiteImpl {
public:
InputModelTFLiteImpl(const GraphIteratorFlatBuffer::Ptr& graph_iterator,
const ov::frontend::InputModel& input_model);
InputModelTFLiteImpl(const GraphIteratorFlatBuffer::Ptr& graph_iterator,
const ov::frontend::InputModel& input_model,
const std::shared_ptr<TelemetryExtension>& telemetry);
std::vector<ov::frontend::Place::Ptr> getInputs() const;
std::vector<ov::frontend::Place::Ptr> getOutputs() const;
ov::frontend::Place::Ptr getPlaceByTensorName(const std::string& tensorName) const;
///// Searching for places /////
std::vector<std::shared_ptr<OpPlace>> get_op_places() const {
return m_op_places;
}
std::map<std::string, std::shared_ptr<TensorLitePlace>> get_tensor_places() const {
return m_tensor_places;
}
std::map<std::string, Output<Node>> get_tensor_values() const {
return m_tensor_values;
}
///// Naming and annotation /////
void setNameForTensor(const Place::Ptr& tensor, const std::string& new_name);
void addNameForTensor(const Place::Ptr& tensor, const std::string& new_name);
void setNameForOperation(const Place::Ptr& operation, const std::string& new_name);
///// Setting / getting tensor properties /////
void setPartialShape(ov::frontend::Place::Ptr place, const ov::PartialShape& shape);
ov::PartialShape getPartialShape(ov::frontend::Place::Ptr place) const;
void setElementType(ov::frontend::Place::Ptr place, const ov::element::Type& type);
ov::element::Type getElementType(ov::frontend::Place::Ptr place) const;
void setTensorValue(ov::frontend::Place::Ptr place, const void* value);
///// Topology Editing /////
void overrideAllOutputs(const std::vector<ov::frontend::Place::Ptr>& outputs);
void overrideAllInputs(const std::vector<ov::frontend::Place::Ptr>& inputs);
void extractSubgraph(const std::vector<ov::frontend::Place::Ptr>& inputs,
const std::vector<ov::frontend::Place::Ptr>& outputs);
private:
void loadModel();
void cleanUp();
std::vector<std::shared_ptr<OpPlace>> m_op_places;
std::map<std::string, std::shared_ptr<OpPlace>> m_op_places_map;
std::map<std::string, std::shared_ptr<TensorLitePlace>> m_tensor_places;
std::vector<ov::frontend::Place::Ptr> m_inputs;
std::vector<ov::frontend::Place::Ptr> m_outputs;
std::map<std::string, Output<Node>> m_tensor_values;
std::shared_ptr<GraphIteratorFlatBuffer> m_graph_iterator;
const ov::frontend::InputModel& m_input_model;
std::shared_ptr<TelemetryExtension> m_telemetry;
};
void InputModel::InputModelTFLiteImpl::loadModel() {
std::map<std::string, uint64_t> op_statistics; // for telemetry
m_op_places.reserve(m_graph_iterator->size());
for (; !m_graph_iterator->is_end(); m_graph_iterator->next()) {
const auto& decoder = m_graph_iterator->get_decoder();
m_op_places.push_back(std::make_shared<OpPlace>(m_input_model, decoder));
if (m_telemetry) {
op_statistics[decoder->get_op_type()]++;
}
for (size_t i = 0; i < decoder->get_input_size(); ++i) {
auto place = decoder->decode_input_tensor(i, m_input_model);
auto name = place->get_names()[0];
if (m_tensor_places.find(name) == m_tensor_places.end()) {
m_tensor_places[name] = place;
if (place->is_input()) {
// will reorder by index later
m_inputs.push_back(place);
} else if (auto data = place->get_data()) {
auto constant = ov::op::v0::Constant::create(place->get_element_type(),
place->get_partial_shape().to_shape(),
data);
constant->set_friendly_name(name);
m_tensor_values[name] = constant;
} else {
FRONT_END_GENERAL_CHECK(false,
"This tensor must be either an input, a constant, or ",
"already produced by previous operators: ",
name,
". Error is encountered while working with operation of type ",
decoder->get_op_type(),
" and name ",
decoder->get_op_name(),
".");
}
}
}
for (size_t i = 0; i < decoder->get_output_size(); ++i) {
auto place = decoder->decode_output_tensor(i, m_input_model);
auto name = place->get_names()[0];
if (m_tensor_places.find(name) == m_tensor_places.end()) {
m_tensor_places[name] = place;
if (place->is_output()) {
// will reorder by index later
m_outputs.push_back(place);
}
}
}
}
auto sorting_places_by_idx = [](bool are_input_places) {
return
[are_input_places](const ov::frontend::Place::Ptr& lhs_place, const ov::frontend::Place::Ptr& rhs_place) {
auto tflite_lhs_place =
std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::TensorLitePlace>(lhs_place);
auto tflite_rhs_place =
std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::TensorLitePlace>(rhs_place);
FRONT_END_GENERAL_CHECK(tflite_lhs_place != nullptr && tflite_rhs_place != nullptr,
"TFLite Frontend works with TensorLitePlaces only");
size_t rhs_idx, lhs_idx;
if (are_input_places) {
lhs_idx = tflite_lhs_place->get_input_index();
rhs_idx = tflite_rhs_place->get_input_index();
} else {
lhs_idx = tflite_lhs_place->get_output_index();
rhs_idx = tflite_rhs_place->get_output_index();
}
return lhs_idx < rhs_idx;
};
};
std::sort(m_inputs.begin(), m_inputs.end(), sorting_places_by_idx(true));
std::sort(m_outputs.begin(), m_outputs.end(), sorting_places_by_idx(false));
if (m_telemetry) {
for (const auto& op : op_statistics) {
m_telemetry->send_event("op_count", "tflite_" + op.first, static_cast<int>(op.second));
}
}
}
InputModel::InputModelTFLiteImpl::InputModelTFLiteImpl(const GraphIteratorFlatBuffer::Ptr& graph_iterator,
const ov::frontend::InputModel& input_model)
: m_input_model(input_model),
m_graph_iterator(graph_iterator) {
FRONT_END_GENERAL_CHECK(m_graph_iterator, "Null pointer specified for GraphIterator");
loadModel();
}
InputModel::InputModelTFLiteImpl::InputModelTFLiteImpl(const GraphIteratorFlatBuffer::Ptr& graph_iterator,
const ov::frontend::InputModel& input_model,
const std::shared_ptr<TelemetryExtension>& telemetry)
: m_input_model(input_model),
m_graph_iterator(graph_iterator),
m_telemetry(telemetry) {
FRONT_END_GENERAL_CHECK(m_graph_iterator, "Null pointer specified for GraphIterator");
loadModel();
}
std::vector<ov::frontend::Place::Ptr> InputModel::InputModelTFLiteImpl::getInputs() const {
return m_inputs;
}
std::vector<ov::frontend::Place::Ptr> InputModel::InputModelTFLiteImpl::getOutputs() const {
return m_outputs;
}
std::shared_ptr<TensorPlace> castToTensorPlace(const ov::frontend::Place::Ptr& place) {
if (auto var_place = std::dynamic_pointer_cast<TensorPlace>(place)) {
return var_place;
}
FRONT_END_GENERAL_CHECK(false, "Cannot cast this Place to TensorPlace.");
}
ov::frontend::Place::Ptr InputModel::InputModelTFLiteImpl::getPlaceByTensorName(const std::string& tensorName) const {
if (m_tensor_places.find(tensorName) != m_tensor_places.end())
return castToTensorPlace(m_tensor_places.at(tensorName));
else
return nullptr;
}
std::shared_ptr<OpPlace> castToOpPlace(const ov::frontend::Place::Ptr& place) {
if (auto var_place = std::dynamic_pointer_cast<OpPlace>(place)) {
return var_place;
}
FRONT_END_GENERAL_CHECK(false, "Cannot cast this Place to OpPlace.");
}
void InputModel::InputModelTFLiteImpl::setPartialShape(ov::frontend::Place::Ptr place, const PartialShape& shape) {
castToTensorPlace(place)->set_partial_shape(shape);
}
ov::PartialShape InputModel::InputModelTFLiteImpl::getPartialShape(ov::frontend::Place::Ptr place) const {
return castToTensorPlace(place)->get_partial_shape();
}
void InputModel::InputModelTFLiteImpl::setElementType(ov::frontend::Place::Ptr place, const element::Type& type) {
castToTensorPlace(place)->set_element_type(type);
}
ov::element::Type InputModel::InputModelTFLiteImpl::getElementType(ov::frontend::Place::Ptr place) const {
return castToTensorPlace(place)->get_element_type();
}
void InputModel::InputModelTFLiteImpl::setTensorValue(ov::frontend::Place::Ptr place, const void* value) {
auto tensor_place = castToTensorPlace(place);
auto p_shape = tensor_place->get_partial_shape();
auto type = tensor_place->get_element_type();
FRONT_END_GENERAL_CHECK(tensor_place->get_names().size() > 0,
"TensorFlow Lite Frontend: place to be frozen must have a name.");
auto name = tensor_place->get_names()[0];
FRONT_END_GENERAL_CHECK(p_shape.is_static(),
"TensorFlow Lite Frontend: specify a static shape for " + name + " to be frozen.");
FRONT_END_GENERAL_CHECK(type.is_static(),
"TensorFlow Lite Frontend: specify a static type for " + name + " to be frozen.");
auto constant = opset10::Constant::create(type, p_shape.to_shape(), value);
constant->set_friendly_name(name);
m_tensor_values[name] = constant;
}
void InputModel::InputModelTFLiteImpl::setNameForTensor(const Place::Ptr& tensor, const std::string& new_name) {
castToTensorPlace(tensor)->set_names({new_name});
}
void InputModel::InputModelTFLiteImpl::addNameForTensor(const Place::Ptr& tensor, const std::string& new_name) {
auto tf_tensor = castToTensorPlace(tensor);
auto names = tf_tensor->get_names();
names.push_back(new_name);
tf_tensor->set_names(names);
}
void InputModel::InputModelTFLiteImpl::setNameForOperation(const Place::Ptr& operation, const std::string& new_name) {
auto op = castToOpPlace(operation);
auto names = op->get_names();
names.push_back(new_name);
op->set_names(names);
}
void InputModel::InputModelTFLiteImpl::overrideAllInputs(const std::vector<ov::frontend::Place::Ptr>& inputs) {
for (const auto& input_place : m_inputs) {
auto input_lite_place = std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::TensorLitePlace>(input_place);
FRONT_END_GENERAL_CHECK(input_lite_place != nullptr, "Input Model has unexpected place as input");
input_lite_place->set_input_index(-1);
}
m_inputs.clear();
for (const auto& input_place : inputs) {
m_inputs.push_back(castToTensorPlace(input_place));
}
cleanUp();
}
void InputModel::InputModelTFLiteImpl::overrideAllOutputs(const std::vector<ov::frontend::Place::Ptr>& outputs) {
for (const auto& output_place : m_outputs) {
auto output_lite_place =
std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::TensorLitePlace>(output_place);
FRONT_END_GENERAL_CHECK(output_lite_place != nullptr, "Input Model has unexpected place as output");
output_lite_place->set_output_index(-1);
}
m_outputs.clear();
for (const auto& output_place : outputs) {
m_outputs.push_back(castToTensorPlace(output_place));
}
cleanUp();
}
void InputModel::InputModelTFLiteImpl::extractSubgraph(const std::vector<ov::frontend::Place::Ptr>& inputs,
const std::vector<ov::frontend::Place::Ptr>& outputs) {
for (const auto& input_place : m_inputs) {
auto input_lite_place = std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::TensorLitePlace>(input_place);
FRONT_END_GENERAL_CHECK(input_lite_place != nullptr, "Input Model has unexpected place as input");
input_lite_place->set_input_index(-1);
}
m_inputs.clear();
for (const auto& input_place : inputs) {
m_inputs.push_back(castToTensorPlace(input_place));
}
for (const auto& output_place : m_outputs) {
auto output_lite_place =
std::dynamic_pointer_cast<ov::frontend::tensorflow_lite::TensorLitePlace>(output_place);
FRONT_END_GENERAL_CHECK(output_lite_place != nullptr, "Input Model has unexpected place as output");
output_lite_place->set_output_index(-1);
}
m_outputs.clear();
for (const auto& output_place : outputs) {
m_outputs.push_back(castToTensorPlace(output_place));
}
cleanUp();
}
void InputModel::InputModelTFLiteImpl::cleanUp() {
// TODO: remove all unnecessary tensors and operations; may be postponed since the TF Lite FrontEnd converts models out-of-the-box
}
InputModel::InputModel(const GraphIteratorFlatBuffer::Ptr& graph_iterator,
const std::shared_ptr<TelemetryExtension>& telemetry)
: _impl{std::make_shared<InputModelTFLiteImpl>(graph_iterator, *this, telemetry)} {}
std::vector<std::shared_ptr<ov::frontend::tensorflow::OpPlace>> InputModel::get_op_places() const {
return _impl->get_op_places();
}
std::map<std::string, std::shared_ptr<ov::frontend::tensorflow_lite::TensorLitePlace>> InputModel::get_tensor_places()
const {
return _impl->get_tensor_places();
}
std::map<std::string, Output<Node>> InputModel::get_tensor_values() const {
return _impl->get_tensor_values();
}
std::vector<ov::frontend::Place::Ptr> InputModel::get_inputs() const {
return _impl->getInputs();
}
std::vector<ov::frontend::Place::Ptr> InputModel::get_outputs() const {
return _impl->getOutputs();
}
ov::frontend::Place::Ptr InputModel::get_place_by_tensor_name(const std::string& tensorName) const {
return _impl->getPlaceByTensorName(tensorName);
}
void InputModel::set_partial_shape(const Place::Ptr& place, const PartialShape& shape) {
_impl->setPartialShape(place, shape);
}
ov::PartialShape InputModel::get_partial_shape(const Place::Ptr& place) const {
return _impl->getPartialShape(place);
}
void InputModel::set_element_type(const Place::Ptr& place, const element::Type& type) {
_impl->setElementType(place, type);
}
ov::element::Type InputModel::get_element_type(const Place::Ptr& place) const {
return _impl->getElementType(place);
}
void InputModel::set_tensor_value(const Place::Ptr& place, const void* value) {
_impl->setTensorValue(place, value);
}
void InputModel::set_name_for_tensor(const Place::Ptr& tensor, const std::string& new_name) {
_impl->setNameForTensor(tensor, new_name);
}
void InputModel::add_name_for_tensor(const Place::Ptr& tensor, const std::string& new_name) {
_impl->addNameForTensor(tensor, new_name);
}
void InputModel::set_name_for_operation(const Place::Ptr& operation, const std::string& new_name) {
_impl->setNameForOperation(operation, new_name);
}
void InputModel::override_all_outputs(const std::vector<ov::frontend::Place::Ptr>& outputs) {
_impl->overrideAllOutputs(outputs);
}
void InputModel::override_all_inputs(const std::vector<ov::frontend::Place::Ptr>& inputs) {
_impl->overrideAllInputs(inputs);
}
void InputModel::extract_subgraph(const std::vector<ov::frontend::Place::Ptr>& inputs,
const std::vector<ov::frontend::Place::Ptr>& outputs) {
_impl->extractSubgraph(inputs, outputs);
}
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,56 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "graph_iterator_flatbuffer.hpp"
#include "input_model.hpp"
#include "openvino/frontend/extension/telemetry.hpp"
#include "openvino/frontend/tensorflow_lite/frontend.hpp"
#include "openvino/opsets/opset1.hpp"
#include "tensor_lite_place.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class InputModel : public ov::frontend::InputModel {
friend class ov::frontend::tensorflow_lite::FrontEnd;
class InputModelTFLiteImpl;
std::shared_ptr<InputModelTFLiteImpl> _impl;
std::vector<std::shared_ptr<ov::frontend::tensorflow::OpPlace>> get_op_places() const;
std::map<std::string, std::shared_ptr<ov::frontend::tensorflow_lite::TensorLitePlace>> get_tensor_places() const;
std::map<std::string, Output<Node>> get_tensor_values() const;
public:
explicit InputModel(const ov::frontend::tensorflow_lite::GraphIteratorFlatBuffer::Ptr& graph_iterator,
const std::shared_ptr<TelemetryExtension>& telemetry = {});
///// Searching for places /////
std::vector<ov::frontend::Place::Ptr> get_inputs() const override;
std::vector<ov::frontend::Place::Ptr> get_outputs() const override;
ov::frontend::Place::Ptr get_place_by_tensor_name(const std::string& tensorName) const override;
///// Naming and annotation /////
void set_name_for_tensor(const Place::Ptr& tensor, const std::string& new_name) override;
void add_name_for_tensor(const Place::Ptr& tensor, const std::string& new_name) override;
void set_name_for_operation(const Place::Ptr& operation, const std::string& new_name) override;
///// Setting / getting tensor properties /////
void set_partial_shape(const Place::Ptr& place, const ov::PartialShape& shape) override;
ov::PartialShape get_partial_shape(const Place::Ptr& place) const override;
void set_element_type(const Place::Ptr& place, const ov::element::Type& type) override;
ov::element::Type get_element_type(const Place::Ptr& place) const override;
void set_tensor_value(const Place::Ptr& place, const void* value) override;
///// Topology Editing /////
void override_all_outputs(const std::vector<ov::frontend::Place::Ptr>& outputs) override;
void override_all_inputs(const std::vector<ov::frontend::Place::Ptr>& inputs) override;
void extract_subgraph(const std::vector<ov::frontend::Place::Ptr>& inputs,
const std::vector<ov::frontend::Place::Ptr>& outputs) override;
};
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,28 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector batch_matmul(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"adj_x", decoder->get_attribute(&tflite::BatchMatMulOptions::adj_x)},
{"adj_y", decoder->get_attribute(&tflite::BatchMatMulOptions::adj_y)},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_batch_mat_mul_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector cast(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"DstT", get_ov_type(decoder->get_attribute(&tflite::CastOptions::out_data_type))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_cast_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,26 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector concatenation(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
int64_t axis = static_cast<int64_t>(decoder->get_attribute(&tflite::ConcatenationOptions::axis));
auto concat = make_shared<opset10::Concat>(node.get_inputs(), axis);
concat->set_friendly_name(decoder->get_op_name());
return concat->outputs();
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,34 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector conv2d(const ov::frontend::tensorflow_lite::NodeContext& node) {
auto decoder = get_conv_decoder_map<tflite::Conv2DOptions>("Conv2D", node);
FRONT_END_GENERAL_CHECK(node.get_input_size() >= 2,
"Unexpected number of inputs in node of type=",
node.get_op_type(),
" name=",
node.get_name());
OutputVector output;
get_conv(output, node, decoder, &ov::frontend::tensorflow::op::translate_conv_2d_op);
get_bias(output, node, decoder);
get_activation(output, decoder);
output[0].get_node_shared_ptr()->set_friendly_name(node.get_name());
return output;
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector depth_to_space(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"block_size", static_cast<int64_t>(decoder->get_attribute(&tflite::DepthToSpaceOptions::block_size))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_depth_to_space_op, "DepthToSpace");
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,34 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector depthwise_conv2d(const ov::frontend::tensorflow_lite::NodeContext& node) {
auto decoder = get_conv_decoder_map<tflite::DepthwiseConv2DOptions>("DepthwiseConv2dNative", node);
FRONT_END_GENERAL_CHECK(node.get_input_size() >= 2,
"Unexpected number of inputs in node of type=",
node.get_op_type(),
" name=",
node.get_name());
OutputVector output;
get_conv(output, node, decoder, &ov::frontend::tensorflow::op::translate_depthwise_conv_2d_native_op);
get_bias(output, node, decoder);
get_activation(output, decoder);
output[0].get_node_shared_ptr()->set_friendly_name(node.get_name());
return output;
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,39 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector fully_connected(const ov::frontend::tensorflow_lite::NodeContext& node) {
using FCOptions = tflite::FullyConnectedOptions;
const auto& decoder = get_decoder(node);
auto data = node.get_input(0);
auto weights = node.get_input(1);
if (decoder->get_attribute(&FCOptions::weights_format) != tflite::FullyConnectedOptionsWeightsFormat_DEFAULT) {
FRONT_END_NOT_IMPLEMENTED(
"FullyConnectedOptions::weights_format != FullyConnectedOptionsWeightsFormat_DEFAULT");
}
if (!decoder->get_attribute(&FCOptions::keep_num_dims)) {
// TODO: keep_num_dims == false flattens the output to 2D; insert a Reshape here once supported
}
auto output = std::make_shared<opset10::MatMul>(data, weights, false, true)->outputs();
auto activation_name =
EnumNameActivationFunctionType(decoder->get_attribute(&FCOptions::fused_activation_function));
get_activation(output, node, activation_name);
output[0].get_node_shared_ptr()->set_friendly_name(decoder->get_op_name());
return output;
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,30 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector gather(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
auto batch_dims = static_cast<int64_t>(decoder->get_attribute(&tflite::GatherOptions::batch_dims));
auto axis = opset10::Constant::create(element::i32, {}, {decoder->get_attribute(&tflite::GatherOptions::axis)});
auto input = node.get_input(0);
auto input_indices = node.get_input(1);
auto res = make_shared<opset10::Gather>(input, input_indices, axis, batch_dims);
res->set_friendly_name(node.get_name());
return res->outputs();
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector leaky_relu(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"alpha", decoder->get_attribute(&tflite::LeakyReluOptions::alpha)},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_leaky_relu_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector mirror_pad(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"mode", string(EnumNameMirrorPadMode(decoder->get_attribute(&tflite::MirrorPadOptions::mode)))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_mirror_pad_op, "MirrorPad");
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector one_hot(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"axis", static_cast<int64_t>(decoder->get_attribute(&tflite::OneHotOptions::axis))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_one_hot_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,155 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "op_translation_utils.hpp"
#include <functional>
#include <map>
#include <string>
#include "openvino/core/node_vector.hpp"
#include "openvino/frontend/tensorflow_lite/node_context.hpp"
#include "openvino_conversions.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
void set_output_names(const ov::frontend::tensorflow_lite::NodeContext& node, OutputVector& outputs) {
const auto& decoder_with_name = std::dynamic_pointer_cast<DecoderFlatBuffer>(node.get_decoder());
FRONT_END_GENERAL_CHECK(decoder_with_name != nullptr,
"Unexpected decoder during operation translation. Expected DecoderFlatBuffer");
    FRONT_END_GENERAL_CHECK(outputs.size() == decoder_with_name->get_output_size(),
                            "Unexpected number of outputs during operation translation");
for (size_t i = 0; i < decoder_with_name->get_output_size(); ++i) {
outputs[i].set_names({decoder_with_name->get_output_tensor_name(i)});
}
}
void del_output_names(OutputVector& outputs) {
for (auto& output : outputs) {
output.set_names({});
}
}
void get_conv(ov::OutputVector& output,
const ov::frontend::NodeContext& node,
const std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap>& decoder,
ov::OutputVector (*converter)(const ov::frontend::NodeContext&)) {
ov::OutputVector inputs = {node.get_input(0),
ov::frontend::tensorflow::make_transpose(node.get_input(1), ov::AxisVector{1, 2, 3, 0})};
auto context = ov::frontend::tensorflow_lite::NodeContext(decoder, inputs);
output = converter(context);
del_output_names(output);
}
void get_pool(ov::OutputVector& output,
const ov::frontend::NodeContext& node,
const std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap>& decoder,
ov::OutputVector (*converter)(const ov::frontend::NodeContext&)) {
ov::OutputVector inputs = {node.get_input(0)};
auto context = ov::frontend::tensorflow_lite::NodeContext(decoder, inputs);
output = converter(context);
del_output_names(output);
}
void get_bias(ov::OutputVector& output,
const ov::frontend::NodeContext& node,
const std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap>& decoder) {
if (node.get_input_size() == 3) {
const OutputVector inputs_for_bias = {output[0], node.get_input(2)};
auto context_for_bias_add = ov::frontend::tensorflow_lite::NodeContext(decoder, inputs_for_bias);
// FIXME: dependence on layout?
output = ov::frontend::tensorflow::op::translate_binary_op<ov::opset10::Add>(context_for_bias_add);
del_output_names(output);
}
}
void get_activation(ov::OutputVector& output,
const ov::frontend::tensorflow_lite::NodeContext& node,
const std::string& activation) {
if (activation == "RELU") {
output = ov::frontend::tensorflow::op::translate_unary_op<opset10::Relu>(node);
} else if (activation == "RELU6") {
output = ov::frontend::tensorflow::op::translate_relu_6_op(node);
} else if (activation == "TANH") {
output = ov::frontend::tensorflow::op::translate_unary_op<opset10::Tanh>(node);
} else {
// TODO: Fused activation to support:
// RELU_N1_TO_1 = 2,
// SIGN_BIT = 5,
if (activation != "NONE") {
FRONT_END_THROW("Unknown Activation fused to " + node.get_decoder()->get_op_type() + ": " + activation);
}
}
del_output_names(output);
}
void get_activation(ov::OutputVector& output,
const std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap>& decoder) {
auto context_for_activation = ov::frontend::tensorflow_lite::NodeContext(decoder, output);
const auto activation = decoder->get_attribute("activation").as<std::string>();
get_activation(output, context_for_activation, activation);
}
std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap> get_pool_decoder_map(
const std::string& new_type_name,
const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = std::dynamic_pointer_cast<DecoderFlatBuffer>(node.get_decoder());
FRONT_END_GENERAL_CHECK(decoder != nullptr,
"Unexpected decoder during operation translation. Expected DecoderFlatBuffer");
const std::map<std::string, ov::Any> attrs{
{"strides",
std::vector<int64_t>{1,
decoder->get_attribute(&tflite::Pool2DOptions::stride_h),
decoder->get_attribute(&tflite::Pool2DOptions::stride_w),
1}},
{"padding", std::string(EnumNamePadding(decoder->get_attribute(&tflite::Pool2DOptions::padding)))},
{"ksize",
std::vector<int64_t>{1,
decoder->get_attribute(&tflite::Pool2DOptions::filter_height),
decoder->get_attribute(&tflite::Pool2DOptions::filter_width),
1}},
{"data_format", "NHWC"},
{"activation",
EnumNameActivationFunctionType(decoder->get_attribute(&tflite::Pool2DOptions::fused_activation_function))},
};
return std::make_shared<ov::frontend::tensorflow_lite::DecoderMap>(node.get_decoder(), attrs, new_type_name, true);
}
OutputVector attribute_helper(const ov::frontend::tensorflow_lite::NodeContext& node,
const std::map<std::string, ov::Any>& attrs,
ov::OutputVector (*converter)(const ov::frontend::NodeContext&),
std::string new_op_type,
bool empty_name) {
const auto& original_decoder = std::dynamic_pointer_cast<DecoderFlatBuffer>(node.get_decoder());
FRONT_END_GENERAL_CHECK(original_decoder != nullptr,
"Unexpected decoder during operation translation. Expected DecoderFlatBuffer");
auto decoder = std::make_shared<ov::frontend::tensorflow_lite::DecoderMap>(
original_decoder,
attrs,
(new_op_type.empty() ? original_decoder->get_op_type() : new_op_type),
empty_name);
OutputVector inputs = node.get_inputs();
auto context = ov::frontend::tensorflow_lite::NodeContext(decoder, inputs);
auto outputs = converter(context);
del_output_names(outputs);
return outputs;
}
std::shared_ptr<DecoderFlatBuffer> get_decoder(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = std::dynamic_pointer_cast<DecoderFlatBuffer>(node.get_decoder());
FRONT_END_GENERAL_CHECK(decoder != nullptr,
"Unexpected decoder during operation translation. Expected DecoderFlatBuffer");
return decoder;
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,117 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <functional>
#include <map>
#include <string>
#include "common_op_table.hpp"
#include "decoder_map.hpp"
#include "openvino/core/node_vector.hpp"
#include "openvino/frontend/tensorflow_lite/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "openvino_conversions.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
std::shared_ptr<DecoderFlatBuffer> get_decoder(const ov::frontend::tensorflow_lite::NodeContext& node);
void set_output_names(const ov::frontend::tensorflow_lite::NodeContext& node, OutputVector& outputs);
void del_output_names(OutputVector& outputs);
// convolutions
template <class T>
std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap> get_conv_decoder_map(
const std::string& new_type_name,
const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
const std::map<std::string, ov::Any> attrs{
{"strides",
std::vector<int64_t>{1, decoder->get_attribute(&T::stride_h), decoder->get_attribute(&T::stride_w), 1}},
{"padding", std::string(EnumNamePadding(decoder->get_attribute(&T::padding)))},
{"dilations",
std::vector<int64_t>{1,
decoder->get_attribute(&T::dilation_h_factor),
decoder->get_attribute(&T::dilation_w_factor),
1}},
{"data_format", "NHWC"},
{"activation", EnumNameActivationFunctionType(decoder->get_attribute(&T::fused_activation_function))},
};
return std::make_shared<ov::frontend::tensorflow_lite::DecoderMap>(node.get_decoder(), attrs, new_type_name, true);
}
void get_conv(ov::OutputVector& output,
const ov::frontend::NodeContext& node,
const std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap>& decoder,
ov::OutputVector (*converter)(const ov::frontend::NodeContext&));
void get_bias(ov::OutputVector& output,
const ov::frontend::NodeContext& node,
const std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap>& decoder);
void get_activation(ov::OutputVector& output,
const std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap>& decoder);
void get_activation(ov::OutputVector& output,
const ov::frontend::tensorflow_lite::NodeContext& node,
const std::string& activation);
std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap> get_pool_decoder_map(
const std::string& new_type_name,
const ov::frontend::tensorflow_lite::NodeContext& node);
void get_pool(ov::OutputVector& output,
const ov::frontend::NodeContext& node,
const std::shared_ptr<ov::frontend::tensorflow_lite::DecoderMap>& decoder,
ov::OutputVector (*converter)(const ov::frontend::NodeContext&));
template <typename OV_TYPE, typename TF_TYPE>
OutputVector translate_binary_op_with_activation(const ov::frontend::tensorflow_lite::NodeContext& node) {
auto output = ov::frontend::tensorflow::op::translate_binary_op<OV_TYPE>(node);
const auto& decoder = get_decoder(node);
get_activation(output,
node,
EnumNameActivationFunctionType(decoder->get_attribute(&TF_TYPE::fused_activation_function)));
return output;
}
template OutputVector translate_binary_op_with_activation<opset10::Add, tflite::AddOptions>(
const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_binary_op_with_activation<opset10::Subtract, tflite::SubOptions>(
const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_binary_op_with_activation<opset10::Multiply, tflite::MulOptions>(
const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_binary_op_with_activation<opset10::Divide, tflite::DivOptions>(
const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector attribute_helper(const ov::frontend::tensorflow_lite::NodeContext& node,
const std::map<std::string, ov::Any>& attrs,
ov::OutputVector (*converter)(const ov::frontend::NodeContext&),
std::string new_op_type = "",
bool empty_name = false);
template <typename OV_TYPE>
OutputVector translate_reduce_op(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& original_decoder = std::dynamic_pointer_cast<DecoderFlatBuffer>(node.get_decoder());
FRONT_END_GENERAL_CHECK(original_decoder != nullptr,
"Unexpected decoder during operation translation. Expected DecoderFlatBuffer");
const std::map<std::string, ov::Any> attrs{
{"keep_dims", original_decoder->get_attribute(&tflite::ReducerOptions::keep_dims)}};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_direct_reduce_op<OV_TYPE>);
}
template OutputVector translate_reduce_op<opset8::ReduceMean>(const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_reduce_op<opset8::ReduceLogicalAnd>(
const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_reduce_op<opset8::ReduceLogicalOr>(
const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_reduce_op<opset8::ReduceMax>(const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_reduce_op<opset8::ReduceMin>(const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_reduce_op<opset8::ReduceProd>(const ov::frontend::tensorflow_lite::NodeContext& node);
template OutputVector translate_reduce_op<opset8::ReduceSum>(const ov::frontend::tensorflow_lite::NodeContext& node);
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector pack(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
const std::map<std::string, ov::Any> attrs{
{"axis", static_cast<int64_t>(decoder->get_attribute(&tflite::PackOptions::axis))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_pack_op, "Pack");
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,43 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector pooling(const ov::frontend::tensorflow_lite::NodeContext& node,
const std::string& type_name,
ov::OutputVector (*converter)(const ov::frontend::NodeContext&)) {
auto decoder_for_tf_translator = get_pool_decoder_map(type_name, node);
FRONT_END_GENERAL_CHECK(node.get_input_size() == 1,
                            "Unexpected number of inputs in node of type=",
node.get_op_type(),
" name=",
node.get_name());
OutputVector output;
get_pool(output, node, decoder_for_tf_translator, converter);
get_activation(output, decoder_for_tf_translator);
output[0].get_node_shared_ptr()->set_friendly_name(node.get_name());
return output;
}
OutputVector max_pool_2d(const ov::frontend::tensorflow_lite::NodeContext& node) {
return pooling(node, "MaxPool", &ov::frontend::tensorflow::op::translate_max_pool_op);
}
OutputVector avg_pool_2d(const ov::frontend::tensorflow_lite::NodeContext& node) {
return pooling(node, "AvgPool", &ov::frontend::tensorflow::op::translate_avg_pool_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,26 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector range(const ov::frontend::tensorflow_lite::NodeContext& node) {
std::map<std::string, ov::Any> attrs{
{"Tidx", node.get_input(0).get_element_type()},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_range_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,41 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector reshape(const ov::frontend::tensorflow_lite::NodeContext& node) {
size_t input_size = node.get_input_size();
FRONT_END_GENERAL_CHECK(input_size == 1 || input_size == 2,
"Unexpected number of inputs -- ",
input_size,
", for node ",
node.get_op_type());
Output<Node> shape;
if (input_size == 1) {
const auto& decoder = get_decoder(node);
auto reshape_new_shape = decoder->get_attribute(&tflite::ReshapeOptions::new_shape);
const auto new_shape = std::vector<int64_t>(reshape_new_shape->begin(), reshape_new_shape->end());
shape = opset10::Constant::create(element::i64, ov::Shape{new_shape.size()}, new_shape);
} else {
shape = node.get_input(1);
}
auto reshape = std::make_shared<opset10::Reshape>(node.get_input(0), shape, false);
reshape->set_friendly_name(node.get_name());
return reshape->outputs();
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,38 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
using namespace ov::frontend::tensorflow::op;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector resize_bilinear(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
const std::map<std::string, ov::Any> attrs{
{"align_corners", decoder->get_attribute(&tflite::ResizeBilinearOptions::align_corners)},
{"half_pixel_centers", decoder->get_attribute(&tflite::ResizeBilinearOptions::half_pixel_centers)},
};
return attribute_helper(node, attrs, translate_interpolate_op, "ResizeBilinear");
}
OutputVector resize_nearest_neightbor(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
const std::map<std::string, ov::Any> attrs{
{"align_corners", decoder->get_attribute(&tflite::ResizeNearestNeighborOptions::align_corners)},
{"half_pixel_centers", false},
};
return attribute_helper(node, attrs, translate_interpolate_op, "ResizeNearestNeighbor");
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,29 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
using namespace ov::frontend::tensorflow::op;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector reverse_sequence(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"seq_dim", static_cast<int64_t>(decoder->get_attribute(&tflite::ReverseSequenceOptions::seq_dim))},
{"batch_dim", static_cast<int64_t>(decoder->get_attribute(&tflite::ReverseSequenceOptions::batch_dim))},
};
return attribute_helper(node, attrs, translate_reverse_sequence_op, "ReverseSequence");
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector shape(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"out_type", get_ov_type(decoder->get_attribute(&tflite::ShapeOptions::out_type))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_shape_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,33 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector softmax(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
auto beta = decoder->get_attribute(&tflite::SoftmaxOptions::beta);
Output<Node> output = node.get_input(0);
if (beta != 1.) {
auto beta_const = opset10::Constant::create(element::f32, Shape{}, vector<float>{beta});
auto mul_data = make_shared<opset10::ConvertLike>(beta_const, output);
output = make_shared<opset10::Multiply>(output, mul_data);
}
output = make_shared<opset8::Softmax>(output, -1);
output.get_node()->set_friendly_name(decoder->get_op_name());
return {output};
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector space_to_depth(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
        {"block_size", static_cast<int64_t>(decoder->get_attribute(&tflite::SpaceToDepthOptions::block_size))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_space_to_depth_op, "SpaceToDepth");
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,26 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector split(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"num_split", static_cast<int64_t>(decoder->get_attribute(&tflite::SplitOptions::num_splits))}};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_split_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,26 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector squeeze(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
auto squeeze_dims = decoder->get_attribute(&tflite::SqueezeOptions::squeeze_dims);
std::vector<int64_t> axes{squeeze_dims->begin(), squeeze_dims->end()};
return attribute_helper(node, {{"axis", axes}}, ov::frontend::tensorflow::op::translate_squeeze_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,32 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector strided_slice(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"begin_mask", static_cast<int64_t>(decoder->get_attribute(&tflite::StridedSliceOptions::begin_mask))},
{"end_mask", static_cast<int64_t>(decoder->get_attribute(&tflite::StridedSliceOptions::end_mask))},
{"new_axis_mask", static_cast<int64_t>(decoder->get_attribute(&tflite::StridedSliceOptions::new_axis_mask))},
{"ellipsis_mask", static_cast<int64_t>(decoder->get_attribute(&tflite::StridedSliceOptions::ellipsis_mask))},
{"shrink_axis_mask",
static_cast<int64_t>(decoder->get_attribute(&tflite::StridedSliceOptions::shrink_axis_mask))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_strided_slice_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector unique(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"out_idx", get_ov_type(decoder->get_attribute(&tflite::UniqueOptions::idx_out_type))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_unique_op, "Unique");
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,28 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "common_op_table.hpp"
#include "op_translation_utils.hpp"
#include "utils.hpp"
using namespace std;
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
OutputVector unpack(const ov::frontend::tensorflow_lite::NodeContext& node) {
const auto& decoder = get_decoder(node);
std::map<std::string, ov::Any> attrs{
{"axis", static_cast<int64_t>(decoder->get_attribute(&tflite::UnpackOptions::axis))},
{"num", static_cast<int64_t>(decoder->get_attribute(&tflite::UnpackOptions::num))},
};
return attribute_helper(node, attrs, ov::frontend::tensorflow::op::translate_unpack_op);
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,194 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "op_table.hpp"
#include "decoder_map.hpp"
#include "openvino/opsets/opset10.hpp"
using namespace std;
using namespace ov;
#define OP_CONVERT_TYPE_RENAME(func, name) \
[](const ov::frontend::tensorflow_lite::NodeContext& node) -> OutputVector { \
auto decoder = make_shared<DecoderMap>(node.get_decoder(), std::map<std::string, ov::Any>{}, name, false); \
auto inputs = node.get_inputs(); \
auto context = frontend::tensorflow_lite::NodeContext(decoder, inputs); \
return func(context); \
}
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
std::map<std::string, CreatorFunction> get_supported_ops() {
return {
{"ABS", ov::frontend::tensorflow::op::translate_unary_op<opset8::Abs>},
{"ADD", translate_binary_op_with_activation<opset10::Add, tflite::AddOptions>},
{"ADD_N", ov::frontend::tensorflow::op::translate_add_n_op},
// ARG_MAX
// ARG_MIN
// ASSIGN_VARIABLE
// ATAN2
{"AVERAGE_POOL_2D", avg_pool_2d},
{"BATCH_MATMUL", batch_matmul},
{"BATCH_TO_SPACE_ND",
OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_batch_to_space_nd_op, "BatchToSpaceND")},
// BIDIRECTIONAL_SEQUENCE_LSTM
// BIDIRECTIONAL_SEQUENCE_RNN
{"BROADCAST_ARGS",
OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_broadcast_args_op, "BroadcastArgs")},
{"BROADCAST_TO",
OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_broadcast_to_op, "BroadcastTo")},
// BUCKETIZE
// CALL
// CALL_ONCE
{"CAST", cast},
{"CEIL", ov::frontend::tensorflow::op::translate_unary_op<opset8::Ceiling>},
// COMPLEX_ABS
// CONCAT_EMBEDDINGS
{"CONCATENATION", concatenation},
{"CONV_2D", conv2d},
// CONV_3D
// CONV_3D_TRANSPOSE
{"COS", ov::frontend::tensorflow::op::translate_unary_op<opset8::Cos>},
// CUMSUM
// CUSTOM
// DELEGATE
// DENSIFY
{"DEPTH_TO_SPACE", depth_to_space},
{"DEPTHWISE_CONV_2D", depthwise_conv2d},
// DEQUANTIZE
{"DIV", translate_binary_op_with_activation<opset10::Divide, tflite::DivOptions>},
// DYNAMIC_UPDATE_SLICE
{"ELU", ov::frontend::tensorflow::op::translate_elu_op},
// EMBEDDING_LOOKUP
// EMBEDDING_LOOKUP_SPARSE
{"EQUAL", ov::frontend::tensorflow::op::translate_binary_op<opset8::Equal>},
{"EXP", ov::frontend::tensorflow::op::translate_unary_op<opset8::Exp>},
{"EXPAND_DIMS", ov::frontend::tensorflow::op::translate_expand_dims_op},
// FAKE_QUANT
{"FILL", ov::frontend::tensorflow::op::translate_fill_op},
{"FLOOR", ov::frontend::tensorflow::op::translate_unary_op<opset8::Floor>},
{"FLOOR_DIV", ov::frontend::tensorflow::op::translate_floor_div_op},
{"FLOOR_MOD", ov::frontend::tensorflow::op::translate_binary_op<opset8::FloorMod>},
{"FULLY_CONNECTED", fully_connected},
{"GATHER", gather},
{"GATHER_ND", ov::frontend::tensorflow::op::translate_gather_nd_op},
// GELU
{"GREATER", ov::frontend::tensorflow::op::translate_binary_op<opset8::Greater>},
{"GREATER_EQUAL", ov::frontend::tensorflow::op::translate_binary_op<opset8::GreaterEqual>},
{"HARD_SWISH", ov::frontend::tensorflow::op::translate_unary_op<opset8::HSwish>},
// HASHTABLE
// HASHTABLE_FIND
// HASHTABLE_IMPORT
// HASHTABLE_LOOKUP
// HASHTABLE_SIZE
// IF
// IMAG
// L2_NORMALIZATION
// L2_POOL_2D
{"LEAKY_RELU", leaky_relu},
{"LESS", ov::frontend::tensorflow::op::translate_binary_op<opset8::Less>},
{"LESS_EQUAL", ov::frontend::tensorflow::op::translate_binary_op<opset8::LessEqual>},
// LOCAL_RESPONSE_NORMALIZATION
{"LOG", ov::frontend::tensorflow::op::translate_unary_op<opset8::Log>},
{"LOG_SOFTMAX", ov::frontend::tensorflow::op::translate_log_softmax_op},
{"LOGICAL_AND", ov::frontend::tensorflow::op::translate_binary_op<opset8::LogicalAnd>},
{"LOGICAL_NOT", ov::frontend::tensorflow::op::translate_unary_op<opset8::LogicalNot>},
{"LOGICAL_OR", ov::frontend::tensorflow::op::translate_binary_op<opset8::LogicalOr>},
{"LOGISTIC", ov::frontend::tensorflow::op::translate_unary_op<opset10::Sigmoid>},
// LSH_PROJECTION
// LSTM
{"MATRIX_DIAG", ov::frontend::tensorflow::op::translate_matrix_diag_op},
// MATRIX_SET_DIAG
{"MAX_POOL_2D", max_pool_2d},
{"MAXIMUM", ov::frontend::tensorflow::op::translate_binary_op<opset8::Maximum>},
{"MEAN", translate_reduce_op<opset8::ReduceMean>},
{"MINIMUM", ov::frontend::tensorflow::op::translate_binary_op<opset8::Minimum>},
{"MIRROR_PAD", mirror_pad},
{"MUL", translate_binary_op_with_activation<opset10::Multiply, tflite::MulOptions>},
// MULTINOMIAL
{"NEG", ov::frontend::tensorflow::op::translate_unary_op<opset8::Negative>},
// NON_MAX_SUPPRESSION_V4
// NON_MAX_SUPPRESSION_V5
{"NOT_EQUAL", ov::frontend::tensorflow::op::translate_binary_op<opset8::NotEqual>},
{"ONE_HOT", one_hot},
{"PACK", pack},
{"PAD", OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_pad_op, "Pad")},
{"PADV2", OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_padv2_op, "PadV2")},
{"POW", ov::frontend::tensorflow::op::translate_binary_op<opset8::Power>},
{"PRELU", ov::frontend::tensorflow::op::translate_binary_op<opset10::PRelu>},
// QUANTIZE
// RANDOM_STANDARD_NORMAL
// RANDOM_UNIFORM
{"RANGE", range},
{"RANK", OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_rank_op, "Rank")},
// READ_VARIABLE
// REAL
{"REDUCE_ALL", translate_reduce_op<opset8::ReduceLogicalAnd>},
{"REDUCE_ANY", translate_reduce_op<opset8::ReduceLogicalOr>},
{"REDUCE_MAX", translate_reduce_op<opset8::ReduceMax>},
{"REDUCE_MIN", translate_reduce_op<opset8::ReduceMin>},
{"REDUCE_PROD", translate_reduce_op<opset8::ReduceProd>},
{"RELU", ov::frontend::tensorflow::op::translate_unary_op<opset10::Relu>},
// RELU_0_TO_1
// RELU_N1_TO_1
{"RELU6", ov::frontend::tensorflow::op::translate_relu_6_op},
{"RESHAPE", reshape},
{"RESIZE_BILINEAR", resize_bilinear},
{"RESIZE_NEAREST_NEIGHBOR", resize_nearest_neightbor},
{"REVERSE_SEQUENCE", reverse_sequence},
{"REVERSE_V2", OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_reverse_v2_op, "ReverseV2")},
// RFFT2D
// RNN
{"ROUND", ov::frontend::tensorflow::op::translate_round_op},
{"RSQRT", ov::frontend::tensorflow::op::translate_rsqrt_op},
{"SCATTER_ND", ov::frontend::tensorflow::op::translate_scatter_nd_op},
{"SEGMENT_SUM", ov::frontend::tensorflow::op::translate_segment_sum_op},
{"SELECT", OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_select_op, "Select")},
{"SELECT_V2", OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_select_v2_op, "SelectV2")},
{"SHAPE", shape},
{"SIGN", ov::frontend::tensorflow::op::translate_unary_op<opset8::Sign>},
{"SIN", ov::frontend::tensorflow::op::translate_unary_op<opset8::Sin>},
// SKIP_GRAM
{"SLICE", ov::frontend::tensorflow::op::translate_slice_op},
{"SOFTMAX", softmax},
{"SPACE_TO_BATCH_ND",
OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_space_to_batch_nd_op, "SpaceToBatchND")},
{"SPACE_TO_DEPTH", space_to_depth},
// SPARSE_TO_DENSE
{"SPLIT", split},
{"SPLIT_V", ov::frontend::tensorflow::op::translate_split_v_op},
{"SQRT", ov::frontend::tensorflow::op::translate_sqrt_op},
{"SQUARE", ov::frontend::tensorflow::op::translate_square_op},
{"SQUARED_DIFFERENCE", ov::frontend::tensorflow::op::translate_binary_op<opset8::SquaredDifference>},
{"SQUEEZE", squeeze},
{"STRIDED_SLICE", strided_slice},
{"SUB", translate_binary_op_with_activation<opset10::Subtract, tflite::SubOptions>},
{"SUM", translate_reduce_op<opset8::ReduceSum>},
// SVDF
{"TANH", ov::frontend::tensorflow::op::translate_unary_op<opset8::Tanh>},
{"TILE", ov::frontend::tensorflow::op::translate_tile_op},
{"TOPK_V2", OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_top_k_v2_op, "TopKV2")},
{"TRANSPOSE", ov::frontend::tensorflow::op::translate_transpose_op},
// TRANSPOSE_CONV
// UNIDIRECTIONAL_SEQUENCE_LSTM
// UNIDIRECTIONAL_SEQUENCE_RNN
{"UNIQUE", unique},
{"UNPACK", unpack},
// UNSORTED_SEGMENT_MAX
// UNSORTED_SEGMENT_MIN
// UNSORTED_SEGMENT_PROD
// UNSORTED_SEGMENT_SUM
// VAR_HANDLE
{"WHERE", OP_CONVERT_TYPE_RENAME(ov::frontend::tensorflow::op::translate_where_op, "Where")},
// WHILE
{"ZEROS_LIKE", ov::frontend::tensorflow::op::translate_zeros_like_op},
};
}
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,63 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <functional>
#include <map>
#include <string>
#include "common_op_table.hpp"
#include "decoder_map.hpp"
#include "openvino/core/node_vector.hpp"
#include "openvino/frontend/tensorflow_lite/node_context.hpp"
#include "openvino/opsets/opset10.hpp"
#include "openvino_conversions.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
namespace op {
using CreatorFunction = std::function<OutputVector(const ov::frontend::tensorflow_lite::NodeContext&)>;
std::map<std::string, CreatorFunction> get_supported_ops();
OutputVector batch_matmul(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector cast(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector conv2d(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector depthwise_conv2d(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector fully_connected(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector max_pool_2d(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector avg_pool_2d(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector concatenation(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector reshape(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector pack(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector softmax(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector resize_nearest_neightbor(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector resize_bilinear(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector squeeze(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector split(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector shape(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector range(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector strided_slice(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector gather(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector space_to_depth(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector depth_to_space(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector leaky_relu(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector mirror_pad(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector one_hot(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector reverse_sequence(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector unique(const ov::frontend::tensorflow_lite::NodeContext& node);
OutputVector unpack(const ov::frontend::tensorflow_lite::NodeContext& node);
template <typename OV_TYPE, typename TF_TYPE>
OutputVector translate_binary_op_with_activation(const ov::frontend::tensorflow_lite::NodeContext& node);
template <typename OV_TYPE>
OutputVector translate_reduce_op(const ov::frontend::tensorflow_lite::NodeContext& node);
} // namespace op
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,69 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <functional>
#include <map>
#include <string>
#include "common_op_table.hpp"
#include "decoder_map.hpp"
#include "openvino/core/node_vector.hpp"
#include "openvino/opsets/opset10.hpp"
#include "openvino_conversions.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class QuantizationInfo : public ov::RuntimeAttribute {
public:
OPENVINO_RTTI("QuantizationInfo");
QuantizationInfo() = default;
explicit QuantizationInfo(const std::vector<float>& scale,
const std::vector<int64_t>& zero_point,
const int64_t& axis)
: m_scale(scale),
m_zero_point(zero_point),
m_axis(axis) {}
bool is_copyable() const override {
return false;
}
const std::vector<float>& get_scale() const {
return m_scale;
}
void set_scale(const std::vector<float>& scale) {
m_scale = scale;
}
const std::vector<int64_t>& get_zero_point() const {
return m_zero_point;
}
void set_zero_point(const std::vector<int64_t>& zero_point) {
m_zero_point = zero_point;
}
const int64_t& get_axis() const {
return m_axis;
}
void set_axis(const int64_t& axis) {
m_axis = axis;
}
bool is_disabled() const {
return m_disabled;
}
void disable() {
m_disabled = true;
}
private:
std::vector<float> m_scale;
std::vector<int64_t> m_zero_point;
int64_t m_axis{};
bool m_disabled = false;
};
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov

File diff suppressed because it is too large


@@ -0,0 +1,14 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "tensor_lite_place.hpp"
#include "quantization_info.hpp"
void ov::frontend::tensorflow_lite::TensorLitePlace::translate(ov::Output<ov::Node>& output,
bool convert_tensor_attrs_to_nodes) {
output.set_names({*get_names().begin()});
output.get_rt_info()[QuantizationInfo::get_type_info_static()] = m_quantization;
if (convert_tensor_attrs_to_nodes)
apply_quantization(output);
}


@@ -0,0 +1,70 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <utility>
#include "openvino/frontend/frontend.hpp"
#include "openvino/frontend/tensorflow_lite/visibility.hpp"
#include "place.hpp"
#include "quantization_info.hpp"
#include "utils.hpp"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class TensorLitePlace : public ov::frontend::tensorflow::TensorPlace {
public:
TensorLitePlace(const ov::frontend::InputModel& input_model,
const ov::PartialShape& pshape,
ov::element::Type type,
const std::vector<std::string>& names,
std::shared_ptr<ov::frontend::tensorflow_lite::QuantizationInfo> quantization,
int64_t input_idx,
int64_t output_idx,
const void* data)
: ov::frontend::tensorflow::TensorPlace(input_model, pshape, type, names),
m_quantization(quantization),
m_input_idx(input_idx),
m_output_idx(output_idx),
m_data(data) {}
void translate(ov::Output<ov::Node>& output, bool convert_tensor_attrs_to_nodes = false);
bool is_input() const override {
return m_input_idx >= 0;
}
size_t get_input_index() const {
FRONT_END_GENERAL_CHECK(is_input(), "This is not an input TensorPlace, cannot deliver the input index");
return static_cast<size_t>(m_input_idx);
}
bool is_output() const override {
return m_output_idx >= 0;
}
size_t get_output_index() const {
FRONT_END_GENERAL_CHECK(is_output(), "This is not an output TensorPlace, cannot deliver the output index");
return static_cast<size_t>(m_output_idx);
}
void set_input_index(const int64_t& idx) {
m_input_idx = idx;
}
void set_output_index(const int64_t& idx) {
m_output_idx = idx;
}
const void* get_data() const {
return m_data;
}
protected:
std::shared_ptr<ov::frontend::tensorflow_lite::QuantizationInfo> m_quantization;
int64_t m_input_idx, m_output_idx;
const void* m_data;
};
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,20 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/frontend/manager.hpp"
#include "openvino/frontend/tensorflow_lite/frontend.hpp"
#include "openvino/frontend/tensorflow_lite/visibility.hpp"
TENSORFLOW_LITE_C_API ov::frontend::FrontEndVersion GetAPIVersion() {
return OV_FRONTEND_API_VERSION;
}
TENSORFLOW_LITE_C_API void* GetFrontEndData() {
auto res = new ov::frontend::FrontEndPluginInfo();
res->m_name = "tflite";
res->m_creator = []() {
return std::make_shared<ov::frontend::tensorflow_lite::FrontEnd>();
};
return res;
}


@@ -0,0 +1,150 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "utils.hpp"
#include <openvino/opsets/opset10.hpp>
#include "schema_generated.h"
using namespace ov;
std::shared_ptr<ov::frontend::tensorflow_lite::QuantizationInfo> ov::frontend::tensorflow_lite::get_quantization(
const tflite::QuantizationParameters* tf_quantization) {
if (tf_quantization == nullptr)
return {};
auto quantization = std::make_shared<ov::frontend::tensorflow_lite::QuantizationInfo>();
auto tf_zp = tf_quantization->zero_point();
auto tf_scale = tf_quantization->scale();
if (tf_zp != nullptr)
quantization->set_zero_point({(*tf_zp).begin(), (*tf_zp).end()});
if (tf_scale != nullptr)
quantization->set_scale({(*tf_scale).begin(), (*tf_scale).end()});
if (quantization->get_zero_point().empty() && quantization->get_scale().empty())
return {};
quantization->set_axis(tf_quantization->quantized_dimension());
return quantization;
}
namespace {
const std::map<tflite::TensorType, ov::element::Type>& TYPE_MAP() {
static const std::map<tflite::TensorType, ov::element::Type> type_map{
{tflite::TensorType_FLOAT32, element::f32},
{tflite::TensorType_FLOAT16, element::f16},
{tflite::TensorType_INT32, element::i32},
{tflite::TensorType_UINT8, element::u8},
{tflite::TensorType_INT64, element::i64},
{tflite::TensorType_BOOL, element::boolean},
{tflite::TensorType_INT16, element::i16},
{tflite::TensorType_INT8, element::i8},
{tflite::TensorType_FLOAT64, element::f64},
{tflite::TensorType_UINT64, element::u64},
{tflite::TensorType_UINT32, element::u32},
{tflite::TensorType_UINT16, element::u16},
{tflite::TensorType_INT4, element::i4},
// TODO: support the following types
// {TensorType_STRING, element::string},
// {TensorType_COMPLEX64, element::complex64},
// {TensorType_COMPLEX128, element::complex128},
// {TensorType_RESOURCE, element::resource},
// {TensorType_VARIANT, element::variant},
};
return type_map;
}
} // namespace
ov::element::Type ov::frontend::tensorflow_lite::get_ov_type(const tflite::TensorType& tf_type) {
const auto& mapping = TYPE_MAP();
if (mapping.find(tf_type) == mapping.end()) {
FRONT_END_THROW("Unexpected type");
}
return mapping.at(tf_type);
}
ov::PartialShape ov::frontend::tensorflow_lite::get_ov_shape(const flatbuffers::Vector<int32_t>* tf_shape) {
return ov::Shape{tf_shape->begin(), tf_shape->end()};
}
ov::Shape get_quant_shape(const Output<Node>& output,
const std::shared_ptr<ov::frontend::tensorflow_lite::QuantizationInfo>& quantization,
const size_t& size) {
auto shape = ov::Shape{};
if (size > 1) {
FRONT_END_GENERAL_CHECK(output.get_partial_shape().rank().is_static(),
"Per-channel quantization is not supported for tensors of dynamic rank");
auto rank = output.get_partial_shape().size();
shape = ov::Shape(rank, 1);
shape[quantization->get_axis()] = size;
}
return shape;
}
void ov::frontend::tensorflow_lite::apply_quantization(ov::Output<ov::Node>& output) {
auto rt_info = output.get_rt_info();
if (!rt_info.count(QuantizationInfo::get_type_info_static())) // no quantization
return;
auto quantization = rt_info[QuantizationInfo::get_type_info_static()].as<std::shared_ptr<QuantizationInfo>>();
if (!quantization || quantization->is_disabled())
return;
bool is_constant = ov::is_type<ov::opset10::Constant>(output.get_node_shared_ptr());
bool is_input = ov::is_type<ov::opset10::Parameter>(output.get_node_shared_ptr());
auto input_type = output.get_element_type();
ov::Output<ov::Node> input_low, input_high, output_low, output_high;
auto zp = quantization->get_zero_point();
auto scale = quantization->get_scale();
auto zp_shape = get_quant_shape(output, quantization, zp.size());
auto scale_shape = get_quant_shape(output, quantization, scale.size());
auto input_rank = output.get_partial_shape().rank();
FRONT_END_GENERAL_CHECK(input_rank.is_static(), "Quantization is not supported for tensors of dynamic rank");
auto zp_node = ov::opset10::Constant::create(element::f32, zp_shape, zp);
auto scale_node = ov::opset10::Constant::create(element::f32, scale_shape, scale);
if (is_constant) {
output = std::make_shared<ov::opset10::Convert>(output, element::f32);
if (std::any_of(zp.begin(), zp.end(), [](const int64_t& i) {
return i != 0;
}))
output = std::make_shared<ov::opset10::Subtract>(output, zp_node);
output = std::make_shared<ov::opset10::Multiply>(output, scale_node);
return;
}
auto levels = 256;
if (is_input) {
FRONT_END_GENERAL_CHECK(input_type == element::u8 || input_type == element::i8,
"Inputs of types other than u8 and i8 are not yet supported");
if (input_type == element::u8) {
output = std::make_shared<ov::opset10::Convert>(output, element::f32);
input_low = ov::opset10::Constant::create(element::f32, {}, {0});
input_high = ov::opset10::Constant::create(element::f32, {}, {levels - 1});
} else if (input_type == element::i8) {
output = std::make_shared<ov::opset10::Convert>(output, element::f32);
input_low = ov::opset10::Constant::create(element::f32, {}, {-128});
input_high = ov::opset10::Constant::create(element::f32, {}, {127});
}
}
if (std::all_of(zp.begin(), zp.end(), [](const int64_t& i) {
return i == 0;
})) {
output_low = ov::opset10::Constant::create(element::f32, {}, {0});
} else {
output_low = std::make_shared<opset10::Multiply>(std::make_shared<opset10::Negative>(scale_node), zp_node);
}
output_high = std::make_shared<opset10::Multiply>(
scale_node,
std::make_shared<opset10::Subtract>(ov::opset10::Constant::create(element::f32, {}, {levels - 1}), zp_node));
if (!is_input) {
input_low = output_low;
input_high = output_high;
}
output = std::make_shared<opset10::FakeQuantize>(output, input_low, input_high, output_low, output_high, levels);
quantization->disable(); // we applied parameters -- disable them so that they won't apply twice
}


@@ -0,0 +1,25 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "decoder_flatbuffer.h"
#include "place.hpp"
#include "quantization_info.hpp"
#include "schema_generated.h"
namespace ov {
namespace frontend {
namespace tensorflow_lite {
class TensorLitePlace;
class QuantizationInfo;
ov::element::Type get_ov_type(const tflite::TensorType& tf_type);
ov::PartialShape get_ov_shape(const flatbuffers::Vector<int32_t>* tf_shape);
std::shared_ptr<QuantizationInfo> get_quantization(const tflite::QuantizationParameters* tf_quantization);
void apply_quantization(ov::Output<ov::Node>& output);
} // namespace tensorflow_lite
} // namespace frontend
} // namespace ov


@@ -0,0 +1,75 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(TARGET_NAME "ov_tensorflow_lite_frontend_tests")
ov_add_test_target(
NAME ${TARGET_NAME}
ROOT ${CMAKE_CURRENT_SOURCE_DIR}
DEPENDENCIES
tensorflow_lite_test_models
tensorflow_lite_fe_standalone_build_test
LINK_LIBRARIES
gtest_main_manifest
frontend_shared_test_classes
openvino_tensorflow_lite_frontend
ADD_CLANG_FORMAT
LABELS
OV
TF_FE
)
# Test model generating
ov_check_pip_packages(REQUIREMENTS_FILE "${CMAKE_CURRENT_SOURCE_DIR}/requirements.txt"
MESSAGE_MODE WARNING
WARNING_MESSAGE "TensorFlow Lite frontend unit tests will be skipped"
RESULT_VAR tensorflow_FOUND)
set(TEST_TENSORFLOW_LITE_MODELS_DIRNAME test_model_zoo/tensorflow_lite_test_models)
target_compile_definitions(${TARGET_NAME} PRIVATE -D TEST_TENSORFLOW_LITE_MODELS_DIRNAME=\"${TEST_TENSORFLOW_LITE_MODELS_DIRNAME}/\")
# If 'tensorflow' is not found, the code is still compiled,
# but the models are not generated and the tests will fail.
# This is intentional: cmake must pass for code-style and other checks,
# so a CI machine does not need 'tensorflow' installed to check code style.
if (tensorflow_FOUND)
set(TEST_TENSORFLOW_LITE_MODELS ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/${TEST_TENSORFLOW_LITE_MODELS_DIRNAME}/)
file(GLOB_RECURSE TENSORFLOW_GEN_SCRIPTS ${CMAKE_CURRENT_SOURCE_DIR}/test_models/gen_scripts/generate_*.py)
file(GLOB_RECURSE TENSORFLOW_ALL_SCRIPTS ${CMAKE_CURRENT_SOURCE_DIR}/*.py)
set(OUT_FILES "")
foreach(GEN_SCRIPT ${TENSORFLOW_GEN_SCRIPTS})
get_filename_component(FILE_WE ${GEN_SCRIPT} NAME_WE)
set(OUT_DONE_FILE ${TEST_TENSORFLOW_LITE_MODELS}/${FILE_WE}_done.txt)
set(OUT_FILES ${OUT_DONE_FILE} ${OUT_FILES})
add_custom_command(OUTPUT ${OUT_DONE_FILE}
COMMAND ${PYTHON_EXECUTABLE}
${CMAKE_CURRENT_SOURCE_DIR}/test_models/gen_wrapper.py
${GEN_SCRIPT}
${TEST_TENSORFLOW_LITE_MODELS}
${OUT_DONE_FILE}
JOB_POOL four_jobs
DEPENDS ${TENSORFLOW_ALL_SCRIPTS}
)
endforeach()
add_custom_target(tensorflow_lite_test_models DEPENDS ${OUT_FILES})
install(DIRECTORY ${TEST_TENSORFLOW_LITE_MODELS}
DESTINATION tests/${TEST_TENSORFLOW_LITE_MODELS_DIRNAME}
COMPONENT tests
EXCLUDE_FROM_ALL)
else()
# Produce warning message at build time as well
add_custom_command(OUTPUT unable_build_tensorflow_models.txt
COMMAND ${CMAKE_COMMAND}
-E cmake_echo_color --red "Warning: Unable to generate tensorflow lite test models. Running '${TARGET_NAME}' will likely fail"
)
add_custom_target(tensorflow_lite_test_models DEPENDS unable_build_tensorflow_models.txt)
endif()
get_target_property(TENSORFLOW_LITE_FRONTEND_SRC_DIR openvino_tensorflow_lite_frontend SOURCE_DIR)
add_subdirectory(standalone_build)
add_dependencies(${TARGET_NAME} tensorflow_lite_fe_standalone_build_test)


@@ -0,0 +1,23 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "basic_api.hpp"
#include "tf_utils.hpp"
using namespace ngraph;
using namespace ov::frontend;
using TFLiteBasicTest = FrontEndBasicTest;
static const std::vector<std::string> models{
std::string("2in_2out/2in_2out.tflite"),
};
INSTANTIATE_TEST_SUITE_P(TFLiteBasicTest,
FrontEndBasicTest,
::testing::Combine(::testing::Values(TF_LITE_FE),
::testing::Values(std::string(TEST_TENSORFLOW_LITE_MODELS_DIRNAME)),
::testing::ValuesIn(models)),
FrontEndBasicTest::getTestCaseName);


@@ -0,0 +1,53 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "conversion_extension.hpp"
#include "openvino/frontend/extension/telemetry.hpp"
#include "openvino/frontend/tensorflow_lite/frontend.hpp"
#include "so_extension.hpp"
#include "tf_utils.hpp"
using namespace ov::frontend;
using TFLiteConversionExtensionTest = FrontEndConversionExtensionTest;
static const std::string translator_name = "LOGISTIC";
class TensorflowLiteFrontendWrapper : public ov::frontend::tensorflow_lite::FrontEnd {
void add_extension(const std::shared_ptr<ov::Extension>& extension) override {
ov::frontend::tensorflow_lite::FrontEnd::add_extension(extension);
if (auto conv_ext = std::dynamic_pointer_cast<ConversionExtension>(extension)) {
EXPECT_NE(std::find(m_conversion_extensions.begin(), m_conversion_extensions.end(), conv_ext),
m_conversion_extensions.end())
<< "ConversionExtension is not registered.";
EXPECT_NE(m_op_translators.find(conv_ext->get_op_type()), m_op_translators.end())
<< conv_ext->get_op_type() << " translator is not registered.";
} else if (auto telemetry = std::dynamic_pointer_cast<TelemetryExtension>(extension)) {
EXPECT_EQ(m_telemetry, telemetry) << "TelemetryExtension is not registered.";
} else if (auto transformation = std::dynamic_pointer_cast<DecoderTransformationExtension>(extension)) {
EXPECT_NE(std::find(m_transformation_extensions.begin(), m_transformation_extensions.end(), transformation),
m_transformation_extensions.end())
<< "DecoderTransformationExtension is not registered.";
} else if (auto so_ext = std::dynamic_pointer_cast<ov::detail::SOExtension>(extension)) {
EXPECT_NE(std::find(m_extensions.begin(), m_extensions.end(), so_ext), m_extensions.end())
<< "SOExtension is not registered.";
}
}
};
static ConversionExtensionFEParam getTestData() {
ConversionExtensionFEParam res;
res.m_frontEndName = TF_LITE_FE;
res.m_modelsPath = std::string(TEST_TENSORFLOW_LITE_MODELS_DIRNAME);
res.m_modelName = "2in_2out/2in_2out.tflite";
res.m_translatorName = translator_name;
res.m_frontend = std::make_shared<TensorflowLiteFrontendWrapper>();
return res;
}
INSTANTIATE_TEST_SUITE_P(TFLiteConversionExtensionTest,
FrontEndConversionExtensionTest,
::testing::Values(getTestData()),
FrontEndConversionExtensionTest::getTestCaseName);


@@ -0,0 +1,23 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "convert_model.hpp"
#include "tf_utils.hpp"
using namespace ngraph;
using namespace ov::frontend;
using TFLiteConvertModelTest = FrontEndConvertModelTest;
static const std::vector<std::string> models{
std::string("2in_2out/2in_2out.tflite"),
};
INSTANTIATE_TEST_SUITE_P(TFLiteConvertModelTest,
FrontEndConvertModelTest,
::testing::Combine(::testing::Values(TF_LITE_FE),
::testing::Values(std::string(TEST_TENSORFLOW_LITE_MODELS_DIRNAME)),
::testing::ValuesIn(models)),
FrontEndConvertModelTest::getTestCaseName);


@@ -0,0 +1,24 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "library_extension.hpp"
#include "tf_utils.hpp"
using namespace ov::frontend;
using TFLiteLibraryExtensionTest = FrontendLibraryExtensionTest;
static FrontendLibraryExtensionTestParams getTestData() {
FrontendLibraryExtensionTestParams params;
params.m_frontEndName = TF_LITE_FE;
params.m_modelsPath = std::string(TEST_TENSORFLOW_LITE_MODELS_DIRNAME);
params.m_modelName = "2in_2out/2in_2out.tflite";
return params;
}
INSTANTIATE_TEST_SUITE_P(TFLiteLibraryExtensionTest,
FrontendLibraryExtensionTest,
::testing::Values(getTestData()),
FrontendLibraryExtensionTest::getTestCaseName);


@@ -0,0 +1,141 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "op_extension.hpp"
#include "openvino/frontend/extension/op.hpp"
#include "openvino/frontend/tensorflow_lite/extension/op.hpp"
#include "so_extension.hpp"
#include "tf_utils.hpp"
using namespace ov::frontend;
using TFLiteOpExtensionTest = FrontEndOpExtensionTest;
class Relu1 : public Relu {
public:
OPENVINO_OP("CustomRelu_1");
OPENVINO_FRAMEWORK_MAP(tensorflow_lite)
};
class Relu2 : public Relu {
public:
OPENVINO_FRAMEWORK_MAP(tensorflow_lite, "CustomRelu_2")
};
class Relu3 : public Relu {
public:
OPENVINO_FRAMEWORK_MAP(tensorflow_lite,
"CustomRelu_3",
{{"ov_attribute_1", "fw_attribute_1"}, {"ov_attribute_2", "fw_attribute_2"}})
};
class Relu4 : public Relu {
public:
OPENVINO_FRAMEWORK_MAP(tensorflow_lite,
"CustomRelu_4",
{{"ov_attribute_1", "fw_attribute_1"}, {"ov_attribute_2", "fw_attribute_2"}},
{
{"ov_attribute_str", "string"},
{"ov_attribute_int", 4},
{"ov_attribute_bool", true},
{"ov_attribute_float", 4.f},
{"ov_attribute_vec_string", std::vector<std::string>{"str1", "str2", "str3"}},
{"ov_attribute_vec_int", std::vector<int>{1, 2, 3, 4, 5, 6, 7}},
{"ov_attribute_vec_bool", std::vector<bool>{true, false, true}},
{"ov_attribute_vec_float", std::vector<float>{1., 2., 3., 4., 5., 6., 7.}},
})
};
static OpExtensionFEParam getTestDataOpExtensionViaUserClass() {
OpExtensionFEParam res;
res.m_frontEndName = TF_LITE_FE;
res.m_modelsPath = std::string(TEST_TENSORFLOW_LITE_MODELS_DIRNAME);
res.m_modelName = "2in_2out/2in_2out.tflite";
// use core OpExtension
res.m_extensions = std::vector<std::shared_ptr<ov::Extension>>{std::make_shared<ov::OpExtension<Relu1>>(),
std::make_shared<ov::OpExtension<Relu2>>(),
std::make_shared<ov::OpExtension<Relu3>>(),
std::make_shared<ov::OpExtension<Relu4>>()};
return res;
}
static OpExtensionFEParam getTestDataOpExtensionViaTFConstructor() {
OpExtensionFEParam res;
res.m_frontEndName = TF_LITE_FE;
res.m_modelsPath = std::string(TEST_TENSORFLOW_LITE_MODELS_DIRNAME);
res.m_modelName = "2in_2out/2in_2out.tflite";
// use ov::frontend::tensorflow_lite OpExtension
res.m_extensions = std::vector<std::shared_ptr<ov::Extension>>{
std::make_shared<ov::frontend::tensorflow_lite::OpExtension<>>("CustomRelu_5"),
std::make_shared<ov::frontend::tensorflow_lite::OpExtension<>>("ov_CustomRelu_6", "fw_CustomRelu_6"),
std::make_shared<ov::frontend::tensorflow_lite::OpExtension<>>(
"ov_CustomRelu_7",
"fw_CustomRelu_7",
std::map<std::string, std::string>{{"ov_attribute_1", "fw_attribute_1"},
{"ov_attribute_2", "fw_attribute_2"}}),
std::make_shared<ov::frontend::tensorflow_lite::OpExtension<>>(
"ov_CustomRelu_8",
"fw_CustomRelu_8",
std::map<std::string, std::string>{{"ov_attribute_1", "fw_attribute_1"},
{"ov_attribute_2", "fw_attribute_2"}},
std::map<std::string, ov::Any>{
{"ov_attribute_str", "string"},
{"ov_attribute_int", 4},
{"ov_attribute_bool", true},
{"ov_attribute_float", 4.f},
{"ov_attribute_vec_string", std::vector<std::string>{"str1", "str2", "str3"}},
{"ov_attribute_vec_int", std::vector<int>{1, 2, 3, 4, 5, 6, 7}},
{"ov_attribute_vec_bool", std::vector<bool>{true, false, true}},
{"ov_attribute_vec_float", std::vector<float>{1., 2., 3., 4., 5., 6., 7.}},
})};
return res;
}
static OpExtensionFEParam getTestDataOpExtensionViaCommonConstructor() {
OpExtensionFEParam res;
res.m_frontEndName = TF_LITE_FE;
res.m_modelsPath = std::string(TEST_TENSORFLOW_LITE_MODELS_DIRNAME);
res.m_modelName = "2in_2out/2in_2out.tflite";
// use ov::frontend::OpExtension
res.m_extensions = std::vector<std::shared_ptr<ov::Extension>>{
std::make_shared<ov::frontend::OpExtension<>>("CustomRelu_9"),
std::make_shared<ov::frontend::OpExtension<>>("ov_CustomRelu_10", "fw_CustomRelu_10"),
std::make_shared<ov::frontend::OpExtension<>>(
"ov_CustomRelu_11",
"fw_CustomRelu_11",
std::map<std::string, std::string>{{"ov_attribute_1", "fw_attribute_1"},
{"ov_attribute_2", "fw_attribute_2"}}),
std::make_shared<ov::frontend::OpExtension<>>(
"ov_CustomRelu_12",
"fw_CustomRelu_12",
std::map<std::string, std::string>{{"ov_attribute_1", "fw_attribute_1"},
{"ov_attribute_2", "fw_attribute_2"}},
std::map<std::string, ov::Any>{
{"ov_attribute_str", "string"},
{"ov_attribute_int", 4},
{"ov_attribute_bool", true},
{"ov_attribute_float", 4.f},
{"ov_attribute_vec_string", std::vector<std::string>{"str1", "str2", "str3"}},
{"ov_attribute_vec_int", std::vector<int>{1, 2, 3, 4, 5, 6, 7}},
{"ov_attribute_vec_bool", std::vector<bool>{true, false, true}},
{"ov_attribute_vec_float", std::vector<float>{1., 2., 3., 4., 5., 6., 7.}},
})};
return res;
}
INSTANTIATE_TEST_SUITE_P(TFLiteOpExtensionTestViaUserClass,
FrontEndOpExtensionTest,
::testing::Values(getTestDataOpExtensionViaUserClass()),
FrontEndOpExtensionTest::getTestCaseName);
INSTANTIATE_TEST_SUITE_P(TFOpExtensionViaTFConstructor,
FrontEndOpExtensionTest,
::testing::Values(getTestDataOpExtensionViaTFConstructor()),
FrontEndOpExtensionTest::getTestCaseName);
INSTANTIATE_TEST_SUITE_P(TFOpExtensionViaCommonConstructor,
FrontEndOpExtensionTest,
::testing::Values(getTestDataOpExtensionViaCommonConstructor()),
FrontEndOpExtensionTest::getTestCaseName);

Some files were not shown because too many files have changed in this diff.