
# How to Implement Custom CPU Operations

The CPU codepath in the Inference Engine relies on the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) for performance, and new CPU kernels extend the Inference Engine plugin for Intel MKL-DNN. Implementing the InferenceEngine::ILayerExecImpl interface defines a general CPU-side extension; there are no Intel MKL-DNN specifics in the way you need to implement a kernel.

## Implementation Class

All custom kernels for the CPU plugin must inherit from the InferenceEngine::ILayerExecImpl interface. Based on that, the declaration of a kernel implementation class can look as follows:

@snippet template_extension/old/cpu_kernel.hpp cpu_implementation:header
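For reference, a condensed sketch of such a declaration (assuming the pre-2022.1 Inference Engine API and a hypothetical custom nGraph operation `TemplateExtension::Operation`; everything except the `InferenceEngine::ILayerExecImpl` methods is illustrative) might look like:

```cpp
#include <ie_iextension.h>
#include <ngraph/ngraph.hpp>

// Illustrative CPU kernel implementation for a hypothetical custom operation.
class OpImplementation : public InferenceEngine::ILayerExecImpl {
public:
    explicit OpImplementation(const std::shared_ptr<ngraph::Node>& node);

    InferenceEngine::StatusCode getSupportedConfigurations(
        std::vector<InferenceEngine::LayerConfig>& conf,
        InferenceEngine::ResponseDesc* resp) noexcept override;
    InferenceEngine::StatusCode init(InferenceEngine::LayerConfig& config,
                                     InferenceEngine::ResponseDesc* resp) noexcept override;
    InferenceEngine::StatusCode execute(std::vector<InferenceEngine::Blob::Ptr>& inputs,
                                        std::vector<InferenceEngine::Blob::Ptr>& outputs,
                                        InferenceEngine::ResponseDesc* resp) noexcept override;

private:
    int64_t add;            // attribute of the custom operation
    ngraph::Shape inShape;  // input shape
    ngraph::Shape outShape; // output shape
    std::string error;      // error message captured in the constructor, if any
};
```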

### Class Fields

The provided implementation has several fields:

* `add` of the type `int64_t` is an attribute of a custom operation.
* `inShape` of the type `ngraph::Shape` is an input shape.
* `outShape` of the type `ngraph::Shape` is an output shape.
* `error` of the type `std::string` is a field to handle errors from a constructor.

### Constructor of Implementation

An implementation constructor checks parameters of an nGraph operation, stores required attributes, and stores an error message in the case of an error.

@snippet template_extension/old/cpu_kernel.cpp cpu_implementation:ctor
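A sketch of such a constructor (the `TemplateExtension::Operation` type and its `getAddAttr()` getter are hypothetical) could be:

```cpp
// Sketch: validate the node and cache attributes; errors are stored, not rethrown,
// because the constructor cannot report a status code.
OpImplementation::OpImplementation(const std::shared_ptr<ngraph::Node>& node) {
    try {
        auto castedNode = std::dynamic_pointer_cast<TemplateExtension::Operation>(node);
        if (!castedNode)
            IE_THROW() << "Cannot create implementation for unknown operation!";
        if (castedNode->inputs().size() != 1 || castedNode->outputs().size() != 1)
            IE_THROW() << "Operation must have exactly one input and one output!";
        inShape = castedNode->get_input_shape(0);
        outShape = castedNode->get_output_shape(0);
        add = castedNode->getAddAttr();  // hypothetical attribute getter
    } catch (InferenceEngine::Exception& ex) {
        error = ex.what();  // surfaced later from getSupportedConfigurations/init
    }
}
```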

### `getSupportedConfigurations`

The InferenceEngine::ILayerExecImpl::getSupportedConfigurations method returns all supported configuration formats (input/output tensor layouts) for your implementation. To specify formats of data, use InferenceEngine::TensorDesc. Refer to the Memory Primitives section for instructions.

@snippet template_extension/old/cpu_kernel.cpp cpu_implementation:getSupportedConfigurations
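As a sketch, an implementation that advertises a single FP32, dense-layout configuration for one input and one output might look like this (error reporting via `ResponseDesc` is simplified):

```cpp
// Sketch: publish one supported input/output configuration built from TensorDesc.
InferenceEngine::StatusCode OpImplementation::getSupportedConfigurations(
    std::vector<InferenceEngine::LayerConfig>& conf,
    InferenceEngine::ResponseDesc* resp) noexcept {
    if (!error.empty()) {  // surface a constructor failure
        if (resp)
            error.copy(resp->msg, sizeof(resp->msg) - 1);
        return InferenceEngine::GENERAL_ERROR;
    }
    InferenceEngine::DataConfig inData, outData;
    inData.desc = InferenceEngine::TensorDesc(
        InferenceEngine::Precision::FP32, inShape,
        InferenceEngine::TensorDesc::getLayoutByDims(inShape));
    outData.desc = InferenceEngine::TensorDesc(
        InferenceEngine::Precision::FP32, outShape,
        InferenceEngine::TensorDesc::getLayoutByDims(outShape));
    InferenceEngine::LayerConfig layerConfig;
    layerConfig.inConfs = {inData};
    layerConfig.outConfs = {outData};
    conf.push_back(layerConfig);
    return InferenceEngine::OK;
}
```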

### `init`

The InferenceEngine::ILayerExecImpl::init method receives the runtime-selected configuration, one of those populated by the getSupportedConfigurations method, and checks its parameters:

@snippet template_extension/old/cpu_kernel.cpp cpu_implementation:init
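A minimal sketch of such a check, assuming the kernel only accepts the single FP32 configuration advertised above:

```cpp
// Sketch: verify that the selected configuration matches what was advertised.
InferenceEngine::StatusCode OpImplementation::init(InferenceEngine::LayerConfig& config,
                                                   InferenceEngine::ResponseDesc* resp) noexcept {
    try {
        if (config.inConfs.size() != 1 || config.outConfs.size() != 1)
            IE_THROW() << "Operation cannot be initialized with an incorrect number of inputs/outputs!";
        if (config.inConfs[0].desc.getPrecision() != InferenceEngine::Precision::FP32 ||
            config.outConfs[0].desc.getPrecision() != InferenceEngine::Precision::FP32)
            IE_THROW() << "Operation supports only FP32 precision!";
    } catch (InferenceEngine::Exception& ex) {
        if (resp) {
            std::string msg = ex.what();
            msg.copy(resp->msg, sizeof(resp->msg) - 1);
        }
        return InferenceEngine::GENERAL_ERROR;
    }
    return InferenceEngine::OK;
}
```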

### `execute`

The InferenceEngine::ILayerExecImpl::execute method accepts and processes the actual tensors as input/output blobs:

@snippet template_extension/old/cpu_kernel.cpp cpu_implementation:execute
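A sketch of the compute body for a hypothetical element-wise operation that adds the cached `add` attribute to every input element:

```cpp
// Sketch: read the input blob, apply the operation, write the output blob.
InferenceEngine::StatusCode OpImplementation::execute(
    std::vector<InferenceEngine::Blob::Ptr>& inputs,
    std::vector<InferenceEngine::Blob::Ptr>& outputs,
    InferenceEngine::ResponseDesc* resp) noexcept {
    const float* src = inputs[0]->cbuffer().as<const float*>() +
                       inputs[0]->getTensorDesc().getBlockingDesc().getOffsetPadding();
    float* dst = outputs[0]->buffer().as<float*>() +
                 outputs[0]->getTensorDesc().getBlockingDesc().getOffsetPadding();
    for (size_t i = 0; i < inputs[0]->size(); ++i)
        dst[i] = src[i] + add;  // hypothetical element-wise "add" semantics
    return InferenceEngine::OK;
}
```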

## Register Implementation in Extension Class

To register a custom kernel implementation in the Extension class, implement the following methods:

### `getImplTypes`

The InferenceEngine::IExtension::getImplTypes method returns a vector of implementation types for an operation.

@snippet template_extension/old/extension.cpp extension:getImplTypes
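A sketch of this method for an extension that ships a single CPU kernel (the `TemplateExtension::Operation` type is hypothetical):

```cpp
// Sketch: report a "CPU" implementation only for the operation this extension knows.
std::vector<std::string> Extension::getImplTypes(const std::shared_ptr<ngraph::Node>& node) {
    if (std::dynamic_pointer_cast<TemplateExtension::Operation>(node))
        return {"CPU"};
    return {};
}
```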

### `getImplementation`

The InferenceEngine::IExtension::getImplementation method returns the kernel implementation with a specified type for an operation.

@snippet template_extension/old/extension.cpp extension:getImplementation
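A matching sketch, again assuming the hypothetical `TemplateExtension::Operation` and the `OpImplementation` class declared earlier:

```cpp
// Sketch: hand out the CPU kernel for the known operation, nullptr otherwise.
InferenceEngine::ILayerImpl::Ptr Extension::getImplementation(
    const std::shared_ptr<ngraph::Node>& node, const std::string& implType) {
    if (implType == "CPU" && std::dynamic_pointer_cast<TemplateExtension::Operation>(node))
        return std::make_shared<OpImplementation>(node);
    return nullptr;
}
```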

## Load Extension with Executable Kernels to Plugin

Use the InferenceEngine::Core::AddExtension method of the general plugin interface to load your primitives:

@snippet snippets/CPU_Kernel.cpp part0
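The loading step can be sketched as follows (the library file name is illustrative and depends on your build):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // Load the compiled extension library; the file name is illustrative.
    auto extension = std::make_shared<InferenceEngine::Extension>("libtemplate_extension.so");
    core.AddExtension(extension);
    // ... then read and load a network that uses the custom operation as usual.
    return 0;
}
```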