Introduce OV Extension base api (#7562)

* Moved so loader to utils

* Fixed extension tests

* Fixed tests and style

* Fixed style and tests

* Fixed ARM build

* Fix windows

* Fix ieFuncTests

* Wrap runtime exception

* Fixed tests

* Added separate new extension

* Fixed unicode extension loading

* Try to fix windows

* Fixed windows

* Fixed macro

* Fixed doc

* Fixed build

* Fixed comments

* Try to fix build

* Fixed build

* Fixed build

* Fixed shared_from_this

* Temp commit

* Changed extension

* Fixed merge conflicts

* Removed ngraph namespace from new extensions

* Fixed code style

* Added core add_extension methods and tests

* Added new tests

* Implement tile operation

* Enabled new extensions support

* Fixed build

* Fixed code style

* Try to fix windows

* Changed base extension class

* Removed redundant Ptr

* Fixed comments

* Fixed friend decl

* Fixed Windows export

* Fixed centos

* Added template add_extension method

* Move destructor to public

* Removed BaseExtension class

* Added variadic add_extension methods

* Fixed doc and typo

* Added BaseOpDestructor

* Allow to create new extension only for new operations

* Revert tests

* Fixed comments

* Fixed comments

* Fixed comment

* Added SO Extension wrapper
This commit is contained in:
Ilya Churaev
2021-11-01 10:36:30 +03:00
committed by GitHub
parent d8f9445a96
commit 4122ef50d6
52 changed files with 1136 additions and 205 deletions


@@ -36,7 +36,7 @@ if(NOT ENABLE_DOCKER)
# install
-install(TARGETS templatePlugin template_extension
+install(TARGETS templatePlugin template_extension template_ov_extension
LIBRARY DESTINATION ${IE_CPACK_RUNTIME_PATH} COMPONENT tests EXCLUDE_FROM_ALL)
endif()


@@ -20,7 +20,7 @@ To add your custom nGraph operation, create a new class that extends `ngraph::Op
Based on that, declaration of an operation class can look as follows:
-@snippet template_extension/op.hpp op:header
+@snippet template_extension/old/op.hpp op:header
### Class Fields
@@ -35,37 +35,37 @@ nGraph operation contains two constructors:
* Default constructor, which enables you to create an operation without attributes
* Constructor that creates and validates an operation with specified inputs and attributes
-@snippet template_extension/op.cpp op:ctor
+@snippet template_extension/old/op.cpp op:ctor
### `validate_and_infer_types()`
`ngraph::Node::validate_and_infer_types` method validates operation attributes and calculates output shapes using attributes of the operation.
-@snippet template_extension/op.cpp op:validate
+@snippet template_extension/old/op.cpp op:validate
### `clone_with_new_inputs()`
`ngraph::Node::clone_with_new_inputs` method creates a copy of the nGraph operation with new inputs.
-@snippet template_extension/op.cpp op:copy
+@snippet template_extension/old/op.cpp op:copy
### `visit_attributes()`
`ngraph::Node::visit_attributes` method enables you to visit all operation attributes.
-@snippet template_extension/op.cpp op:visit_attributes
+@snippet template_extension/old/op.cpp op:visit_attributes
### `evaluate()` and `has_evaluate()`
`ngraph::Node::evaluate` method enables you to apply constant folding to an operation.
-@snippet template_extension/op.cpp op:evaluate
+@snippet template_extension/old/op.cpp op:evaluate
## Register Custom Operations in Extension Class
To add custom operations to the [Extension](Extension.md) class, create an operation set with custom operations and implement the `InferenceEngine::IExtension::getOpSets` method:
-@snippet template_extension/extension.cpp extension:getOpSets
+@snippet template_extension/old/extension.cpp extension:getOpSets
This method returns a map of opsets that exist in the extension library.
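A minimal sketch of what such a `getOpSets` implementation typically looks like (assuming the custom operation class `Operation` from this tutorial; the opset name `custom_opset` is illustrative, and the surrounding `Extension` class and includes are omitted):

```cpp
// Sketch only: builds the opset map returned to the Inference Engine.
std::map<std::string, ngraph::OpSet> Extension::getOpSets() {
    std::map<std::string, ngraph::OpSet> opsets;
    ngraph::OpSet opset;
    opset.insert<Operation>();       // register each custom operation type
    opsets["custom_opset"] = opset;  // opset name referenced from the IR
    return opsets;
}
```

The opset name must match the `version` attribute of the custom layer in the IR, which is how the reader maps serialized nodes back to the registered classes.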


@@ -4,14 +4,14 @@ Inference Engine build infrastructure provides the Inference Engine Package for
To build an extension library, use the following CMake script:
-@snippet template_extension/CMakeLists.txt cmake:extension
+@snippet template_extension/old/CMakeLists.txt cmake:extension
This CMake script finds the Inference Engine and nGraph using the `find_package` CMake command.
To build an extension library, run the commands below:
```sh
-$ cd template_extension
+$ cd template_extension/old
$ mkdir build
$ cd build
$ cmake -DOpenVINO_DIR=[OpenVINO_DIR] ../


@@ -7,7 +7,7 @@ The primary means of the performance of the CPU codepath in the Inference Engine
All custom kernels for the CPU plugin should be inherited from the InferenceEngine::ILayerExecImpl interface.
Based on that, declaration of a kernel implementation class can look as follows:
-@snippet template_extension/cpu_kernel.hpp cpu_implementation:header
+@snippet template_extension/old/cpu_kernel.hpp cpu_implementation:header
### Class Fields
@@ -22,25 +22,25 @@ The provided implementation has several fields:
An implementation constructor checks parameters of an nGraph operation, stores required attributes, and stores an error message in the case of an error.
-@snippet template_extension/cpu_kernel.cpp cpu_implementation:ctor
+@snippet template_extension/old/cpu_kernel.cpp cpu_implementation:ctor
### `getSupportedConfigurations`
InferenceEngine::ILayerExecImpl::getSupportedConfigurations method returns all supported configuration formats (input/output tensor layouts) for your implementation. To specify formats of data, use InferenceEngine::TensorDesc. Refer to the [Memory Primitives](../Memory_primitives.md) section for instructions.
-@snippet template_extension/cpu_kernel.cpp cpu_implementation:getSupportedConfigurations
+@snippet template_extension/old/cpu_kernel.cpp cpu_implementation:getSupportedConfigurations
### `init`
InferenceEngine::ILayerExecImpl::init method gets a runtime-selected configuration from a vector that is populated from the `getSupportedConfigurations` method and checks the parameters:
-@snippet template_extension/cpu_kernel.cpp cpu_implementation:init
+@snippet template_extension/old/cpu_kernel.cpp cpu_implementation:init
### `execute`
InferenceEngine::ILayerExecImpl::execute method accepts and processes the actual tensors as input/output blobs:
-@snippet template_extension/cpu_kernel.cpp cpu_implementation:execute
+@snippet template_extension/old/cpu_kernel.cpp cpu_implementation:execute
## Register Implementation in `Extension` Class
@@ -52,13 +52,13 @@ To register custom kernel implementation in the [Extension](Extension.md) class,
InferenceEngine::IExtension::getImplTypes returns a vector of implementation types for an operation.
-@snippet template_extension/extension.cpp extension:getImplTypes
+@snippet template_extension/old/extension.cpp extension:getImplTypes
### <a name="getImplementation"><code>getImplementation</code></a>
InferenceEngine::IExtension::getImplementation returns the kernel implementation with a specified type for an operation.
-@snippet template_extension/extension.cpp extension:getImplementation
+@snippet template_extension/old/extension.cpp extension:getImplementation
## Load Extension with Executable Kernels to Plugin


@@ -39,12 +39,12 @@ If you do not need an operator anymore, unregister it by calling `unregister_ope
The same principles apply when registering a custom ONNX operator based on custom nGraph operations.
This example shows how to register a custom ONNX operator based on `Operation` presented in [this tutorial](AddingNGraphOps.md), which is used in [TemplateExtension](Extension.md).
-@snippet template_extension/extension.cpp extension:ctor
+@snippet template_extension/old/extension.cpp extension:ctor
Here, the `register_operator` function is called in the constructor of Extension. The constructor makes sure that the function is called before InferenceEngine::Core::ReadNetwork, because InferenceEngine::Core::AddExtension must be called before a model with a custom operator is read.
The example below demonstrates how to unregister an operator from the destructor of Extension:
-@snippet template_extension/extension.cpp extension:dtor
+@snippet template_extension/old/extension.cpp extension:dtor
> **REQUIRED**: It is mandatory to unregister a custom ONNX operator if it is defined in a dynamic shared library.
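The register-in-constructor / unregister-in-destructor pattern described above can be sketched as follows. This is a hedged illustration: the operator name `CustomOp`, the domain string, and the single-input `Operation` construction are placeholders, and the real `Operation` from the tutorial takes additional attributes:

```cpp
// Sketch: custom ONNX operator lifetime tied to the Extension object.
#include <onnx_import/onnx_utils.hpp>

Extension::Extension() {
    // Registered in the constructor so it runs before Core::ReadNetwork.
    ngraph::onnx_import::register_operator(
        "CustomOp", 1, "custom.domain",
        [](const ngraph::onnx_import::Node& node) -> ngraph::OutputVector {
            ngraph::OutputVector inputs = node.get_ng_inputs();
            return {std::make_shared<Operation>(inputs.at(0))};
        });
}

Extension::~Extension() {
    // Mandatory when the extension lives in a dynamically loaded library:
    // the lambda above would dangle after the library is unloaded.
    ngraph::onnx_import::unregister_operator("CustomOp", 1, "custom.domain");
}
```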


@@ -8,11 +8,11 @@ used as an example in this document and `FFT` used as a more complex example fro
Based on that, the declaration of an extension class can look as follows:
-@snippet template_extension/extension.hpp extension:header
+@snippet template_extension/old/extension.hpp extension:header
The extension library should contain and export the InferenceEngine::CreateExtension method, which creates an `Extension` class:
-@snippet template_extension/extension.cpp extension:CreateExtension
+@snippet template_extension/old/extension.cpp extension:CreateExtension
Also, an `Extension` object should implement the following methods:
@@ -20,7 +20,7 @@ Also, an `Extension` object should implement the following methods:
* InferenceEngine::IExtension::GetVersion returns information about the version of the library.
-@snippet template_extension/extension.cpp extension:GetVersion
+@snippet template_extension/old/extension.cpp extension:GetVersion
Implement the InferenceEngine::IExtension::getOpSets method if the extension contains custom layers.
Read [Custom nGraph Operation](AddingNGraphOps.md) for more information.
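The exported `CreateExtension` factory mentioned above can be sketched roughly as below. This assumes the legacy InferenceEngine extension ABI; error reporting into `resp` is abbreviated, so treat it as an outline rather than the exact template-extension source:

```cpp
// Sketch: the entry point the plugin looks up in the extension library.
#include <ie_iextension.h>

INFERENCE_EXTENSION_API(InferenceEngine::StatusCode)
InferenceEngine::CreateExtension(InferenceEngine::IExtension*& ext,
                                 InferenceEngine::ResponseDesc* resp) noexcept {
    try {
        ext = new Extension();  // ownership passes to the caller
        return InferenceEngine::OK;
    } catch (const std::exception&) {
        return InferenceEngine::GENERAL_ERROR;
    }
}
```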


@@ -2,36 +2,5 @@
# SPDX-License-Identifier: Apache-2.0
#
-# [cmake:extension]
-set(CMAKE_CXX_STANDARD 11)
-set(TARGET_NAME "template_extension")
-find_package(OpenVINO REQUIRED COMPONENTS Runtime OPTIONAL_COMPONENTS ONNX)
-find_package(OpenCV QUIET COMPONENTS core)
-set(SRC cpu_kernel.cpp extension.cpp op.cpp)
-if(OpenCV_FOUND)
-set(SRC ${SRC} fft_kernel.cpp fft_op.cpp)
-endif()
-add_library(${TARGET_NAME} MODULE ${SRC})
-if(OpenCV_FOUND)
-target_compile_definitions(${TARGET_NAME} PRIVATE OPENCV_IMPORT_ENABLED)
-target_link_libraries(${TARGET_NAME} PRIVATE opencv_core)
-endif()
-target_compile_definitions(${TARGET_NAME} PRIVATE IMPLEMENT_INFERENCE_EXTENSION_API)
-target_link_libraries(${TARGET_NAME} PRIVATE openvino::core openvino::runtime)
-if(OpenVINO_Frontend_ONNX_FOUND)
-target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)
-target_compile_definitions(${TARGET_NAME} PRIVATE OPENVINO_ONNX_FRONTEND_ENABLED)
-endif()
-# [cmake:extension]
-# Enable code style check
-file(GLOB_RECURSE template_extension_src "${CMAKE_CURRENT_SOURCE_DIR}/*.cpp" "${CMAKE_CURRENT_SOURCE_DIR}/*.hpp")
-add_clang_format_target(${TARGET_NAME}_clang FOR_SOURCES ${template_extension_src})
+add_subdirectory(old)
+add_subdirectory(new)


@@ -0,0 +1,22 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# [cmake:extension]
set(CMAKE_CXX_STANDARD 11)
set(TARGET_NAME "template_ov_extension")
find_package(OpenVINO)
set(SRC identity.cpp ov_extension.cpp)
add_library(${TARGET_NAME} MODULE ${SRC})
target_compile_definitions(${TARGET_NAME} PRIVATE IMPLEMENT_OPENVINO_EXTENSION_API)
target_link_libraries(${TARGET_NAME} PRIVATE openvino::core)
# [cmake:extension]
# Enable code style check
file(GLOB_RECURSE template_extension_src "${CMAKE_CURRENT_SOURCE_DIR}/*.cpp" "${CMAKE_CURRENT_SOURCE_DIR}/*.hpp")
add_clang_format_target(${TARGET_NAME}_clang FOR_SOURCES ${template_extension_src})


@@ -0,0 +1,48 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "identity.hpp"
#include <cstring>  // memcpy
using namespace TemplateExtension;
//! [op:ctor]
Identity::Identity(const ov::Output<ov::Node>& arg) : Op({arg}) {
constructor_validate_and_infer_types();
}
//! [op:ctor]
//! [op:validate]
void Identity::validate_and_infer_types() {
// Operation doesn't change shapes and element type
set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
}
//! [op:validate]
//! [op:copy]
std::shared_ptr<ov::Node> Identity::clone_with_new_inputs(const ov::OutputVector& new_args) const {
OPENVINO_ASSERT(new_args.size() == 1, "Incorrect number of new arguments");
return std::make_shared<Identity>(new_args.at(0));
}
//! [op:copy]
//! [op:visit_attributes]
bool Identity::visit_attributes(ov::AttributeVisitor& visitor) {
return true;
}
//! [op:visit_attributes]
//! [op:evaluate]
bool Identity::evaluate(ov::runtime::TensorVector& outputs, const ov::runtime::TensorVector& inputs) const {
auto in = inputs[0];
auto out = outputs[0];
out.set_shape(in.get_shape());
memcpy(out.data(), in.data(), in.get_byte_size());
return true;
}
bool Identity::has_evaluate() const {
return true;
}
//! [op:evaluate]


@@ -0,0 +1,27 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <openvino/op/op.hpp>
//! [op:header]
namespace TemplateExtension {
class Identity : public ov::op::Op {
public:
OPENVINO_OP("Identity");
Identity() = default;
Identity(const ov::Output<ov::Node>& arg);
void validate_and_infer_types() override;
std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& new_args) const override;
bool visit_attributes(ov::AttributeVisitor& visitor) override;
bool evaluate(ov::runtime::TensorVector& outputs, const ov::runtime::TensorVector& inputs) const override;
bool has_evaluate() const override;
};
//! [op:header]
} // namespace TemplateExtension


@@ -0,0 +1,11 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <openvino/core/extension.hpp>
#include <openvino/core/op_extension.hpp>
#include "identity.hpp"
OPENVINO_CREATE_EXTENSIONS(
std::vector<ov::Extension::Ptr>({std::make_shared<ov::OpExtension<TemplateExtension::Identity>>()}));
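On the consumer side, a library built around `OPENVINO_CREATE_EXTENSIONS` is typically loaded by path. A hedged sketch, assuming the `add_extension` overload taking a library path that this PR's Core changes introduce (library filename and model path are placeholders; the namespace follows the `ov::runtime` convention used elsewhere in this diff):

```cpp
// Sketch: loading the extension library and reading a model that uses
// the custom Identity operation.
#include <openvino/runtime/core.hpp>

int main() {
    ov::runtime::Core core;
    // Loads the shared library and registers every extension returned by
    // OPENVINO_CREATE_EXTENSIONS with this Core instance.
    core.add_extension("libtemplate_ov_extension.so");
    auto model = core.read_model("model_with_identity.xml");
    return 0;
}
```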


@@ -0,0 +1,37 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# [cmake:extension]
set(CMAKE_CXX_STANDARD 11)
set(TARGET_NAME "template_extension")
find_package(OpenVINO REQUIRED COMPONENTS Runtime OPTIONAL_COMPONENTS ONNX)
find_package(OpenCV QUIET COMPONENTS core)
set(SRC cpu_kernel.cpp extension.cpp op.cpp)
if(OpenCV_FOUND)
set(SRC ${SRC} fft_kernel.cpp fft_op.cpp)
endif()
add_library(${TARGET_NAME} MODULE ${SRC})
if(OpenCV_FOUND)
target_compile_definitions(${TARGET_NAME} PRIVATE OPENCV_IMPORT_ENABLED)
target_link_libraries(${TARGET_NAME} PRIVATE opencv_core)
endif()
target_compile_definitions(${TARGET_NAME} PRIVATE IMPLEMENT_INFERENCE_EXTENSION_API)
target_link_libraries(${TARGET_NAME} PRIVATE openvino::core openvino::runtime)
if(OpenVINO_Frontend_ONNX_FOUND)
target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)
target_compile_definitions(${TARGET_NAME} PRIVATE OPENVINO_ONNX_FRONTEND_ENABLED)
endif()
# [cmake:extension]
# Enable code style check
file(GLOB_RECURSE template_extension_src "${CMAKE_CURRENT_SOURCE_DIR}/*.cpp" "${CMAKE_CURRENT_SOURCE_DIR}/*.hpp")
add_clang_format_target(${TARGET_NAME}_clang FOR_SOURCES ${template_extension_src})