# Custom Operations Guide {#openvino_docs_HOWTO_Custom_Layers_Guide}
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with multiple frameworks including
TensorFlow*, Caffe*, MXNet*, Kaldi* and ONNX* file format. The list of supported operations (layers) is different for
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Layers](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom operations are operations that are not included in the list of known operations. If your model contains any
operation that is not in the list of known operations, the Model Optimizer is not able to generate an Intermediate
Representation (IR) for this model.
This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to
plug in your own implementation for existing or completely new operations.
> **NOTE:** *Layer* is a legacy term for an *operation* that came from the Caffe\* framework and is no longer used.
> Refer to the [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../MO_DG/IR_and_opsets.md)
> for more information on the topic.
## Terms Used in This Guide
- *Intermediate Representation (IR)* — The neural network format used only by the Inference Engine in OpenVINO that abstracts the
different frameworks and describes the model topology, operation parameters, and weights.
- *Operation* — The abstract concept of a math function selected for a specific purpose. Operations supported by
OpenVINO™ are listed in the supported operation set provided in [Available Operations Sets](../ops/opset.md).
Examples of operations are [ReLU](../ops/activation/ReLU_1.md), [Convolution](../ops/convolution/Convolution_1.md),
[Add](../ops/arithmetic/Add_1.md), etc.
- *Kernel* — The implementation of an operation function in an OpenVINO™ plugin; in this case, the math programmed (in
C++ and OpenCL) to perform the operation for a target hardware (CPU or GPU).
- *Inference Engine Extension* — Device-specific module implementing custom operations (a set of kernels).
## Custom Operation Support Overview
There are three steps to support inference of a model with custom operation(s):
1. Add support for the custom operation in the [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) so
the Model Optimizer can generate the IR with the operation.
2. Create an operation set and implement a custom nGraph operation in it as described in
[Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md).
3. Implement the custom operation in one of the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
plugins to support inference of this operation using a particular target hardware (CPU, GPU or VPU).
To see the operations that are supported by each device plugin for the Inference Engine, refer to
[Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md).
> **NOTE:** If a device doesn't support a particular operation, an alternative to creating a new operation is to target
> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be
> used to run an inference model on multiple devices allowing the unsupported operations on one device to "fallback" to
> run on another device (e.g., CPU) that does support those operations.
### Custom Operation Support for the Model Optimizer
The Model Optimizer model conversion pipeline is described in detail in the "Model Conversion Pipeline" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
It is recommended to read that article first for a better understanding of the following material.
The Model Optimizer provides an extension mechanism to support new operations and implement custom model transformations to
generate an optimized IR. This mechanism is described in the "Model Optimizer Extensions" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
At minimum, two types of Model Optimizer extensions must be implemented to support a custom operation:
1. Operation class for the new operation. This class stores information about the operation: its attributes, shape
inference function, attributes to be saved to the IR, and some other internally used attributes. Refer to the
"Model Optimizer Operation" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
detailed instructions on how to implement it.
2. Operation attributes extractor. The extractor is responsible for parsing the framework-specific representation of the
operation and uses the corresponding operation class to update graph node attributes with the necessary attributes of the
operation. Refer to the "Operation Extractor" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
detailed instructions on how to implement it.
> **NOTE:** In some cases you may need to implement a graph transformation to support the operation. This topic is covered
> in the "Graph Transformation Extensions" section on the
> [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
## Custom Operations Extensions for the Inference Engine
The Inference Engine provides an extension mechanism to support new operations. This mechanism is described in
[Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
Each device plugin includes a library of optimized implementations to execute known operations, which must be extended to
execute a custom operation. The custom operation extension is implemented according to the target device:
- Custom Operation CPU Extension
   - A compiled shared library (`.so` or `.dll`) needed by the CPU Plugin for executing the custom operation
     on a CPU. Refer to [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more
     details.
- Custom Operation GPU Extension
   - OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the GPU, along with an
     operation description file (.xml) needed by the GPU Plugin for the custom operation kernel. Refer to
     [How to Implement Custom GPU Operations](../IE_DG/Extensibility_DG/GPU_Kernel.md) for more details.
- Custom Operation VPU Extension
   - OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the VPU, along with an
     operation description file (.xml) needed by the VPU Plugin for the custom operation kernel. Refer to
     [How to Implement Custom Operations for VPU](../IE_DG/Extensibility_DG/VPU_Kernel.md) for more details.
Also, it is necessary to implement an nGraph custom operation according to
[Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) so the Inference Engine can read an IR with this
operation and correctly infer output tensor shapes and types.
## Enabling Magnetic Resonance Image Reconstruction Model
This chapter provides step-by-step instructions on how to enable the magnetic resonance image reconstruction model
implemented in the [repository](https://github.com/rmsouza01/Hybrid-CS-Model-MRI/) using a custom operation on CPU. The
example is prepared for a model generated from the repository at commit `2ede2f96161ce70dcdc922371fe6b6b254aafcc8`.
### Download and Convert the Model to a Frozen TensorFlow\* Model Format
The original pre-trained model is provided in the HDF5 format, which is not supported by OpenVINO directly and needs to
be converted to the TensorFlow\* frozen model format first.
1. Download the repository `https://github.com/rmsouza01/Hybrid-CS-Model-MRI`:
```bash
git clone https://github.com/rmsouza01/Hybrid-CS-Model-MRI
cd Hybrid-CS-Model-MRI
git checkout 2ede2f96161ce70dcdc922371fe6b6b254aafcc8
```
2. Convert the pre-trained `.hdf5` model to a frozen `.pb` graph using the following script (tested with TensorFlow==1.15.0 and
Keras==2.2.4), which should be executed from the root of the cloned repository:
```py
import keras as K
import numpy as np
import Modules.frequency_spatial_network as fsnet
import tensorflow as tf

under_rate = '20'

stats = np.load("Data/stats_fs_unet_norm_" + under_rate + ".npy")
var_sampling_mask = np.load("Data/sampling_mask_" + under_rate + "perc.npy")

model = fsnet.wnet(stats[0], stats[1], stats[2], stats[3], kshape=(5, 5), kshape2=(3, 3))
model_name = "Models/wnet_" + under_rate + ".hdf5"
model.load_weights(model_name)

inp = np.random.standard_normal([1, 256, 256, 2]).astype(np.float32)
np.save('inp', inp)

sess = K.backend.get_session()
sess.as_default()
graph_def = sess.graph.as_graph_def()
graph_def = tf.graph_util.convert_variables_to_constants(sess, graph_def, ['conv2d_44/BiasAdd'])
with tf.gfile.FastGFile('wnet_20.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())
```
As a result, the TensorFlow\* frozen model file "wnet_20.pb" is generated.
### Convert the Frozen TensorFlow\* Model to Intermediate Representation
First, open the model in TensorBoard or another TensorFlow* model visualization tool. The model supports a dynamic
batch dimension because the value for the batch dimension is not hardcoded in the model. The Model Optimizer needs to set all
dynamic dimensions to some specific value to create the IR, therefore specify the command-line parameter `-b 1` to set
the batch dimension equal to 1. The actual batch size dimension can be changed at runtime using the Inference Engine API
described in [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to
[Converting a Model Using General Conversion Parameters](../MO_DG/prepare_model/convert_model/Converting_Model_General.md)
and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
for more details and command-line parameters used for the model conversion.
```bash
./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1
```
The Model Optimizer produces the following error:
```bash
[ ERROR ] List of operations that cannot be converted to Inference Engine IR:
[ ERROR ] Complex (1)
[ ERROR ] lambda_2/Complex
[ ERROR ] IFFT2D (1)
[ ERROR ] lambda_2/IFFT2D
[ ERROR ] ComplexAbs (1)
[ ERROR ] lambda_2/Abs
[ ERROR ] Part of the nodes was not converted to IR. Stopped.
```
The error means that the Model Optimizer doesn't know how to handle 3 types of TensorFlow\* operations: "Complex",
"IFFT2D" and "ComplexAbs". To see more details about the conversion process, run the model conversion with the
additional parameter `--log_level DEBUG`. It is worth mentioning the following lines from the detailed output:
```bash
[ INFO ] Called "tf_native_tf_node_infer" for node "lambda_2/Complex"
[ <TIMESTAMP> ] [ DEBUG ] [ tf:228 ] Added placeholder with name 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:228 ] Added placeholder with name 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:241 ] update_input_in_pbs: replace input 'lambda_2/lambda_3/strided_slice' with input 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:249 ] Replacing input '0' of the node 'lambda_2/Complex' with placeholder 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:241 ] update_input_in_pbs: replace input 'lambda_2/lambda_4/strided_slice' with input 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:249 ] Replacing input '1' of the node 'lambda_2/Complex' with placeholder 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:148 ] Inferred shape of the output tensor with index '0' of the node 'lambda_2/Complex': '[ 1 256 256]'
[ <TIMESTAMP> ] [ DEBUG ] [ infer:145 ] Outputs:
[ <TIMESTAMP> ] [ DEBUG ] [ infer:32 ] output[0]: shape = [ 1 256 256], value = <UNKNOWN>
[ <TIMESTAMP> ] [ DEBUG ] [ infer:129 ] --------------------
[ <TIMESTAMP> ] [ DEBUG ] [ infer:130 ] Partial infer for lambda_2/IFFT2D
[ <TIMESTAMP> ] [ DEBUG ] [ infer:131 ] Op: IFFT2D
[ <TIMESTAMP> ] [ DEBUG ] [ infer:132 ] Inputs:
[ <TIMESTAMP> ] [ DEBUG ] [ infer:32 ] input[0]: shape = [ 1 256 256], value = <UNKNOWN>
```
This is part of the log of the partial inference phase of the model conversion. See the "Partial Inference" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
more information about this phase. The Model Optimizer inferred the output shape for the unknown operation of type "Complex"
using a "fallback" to TensorFlow\*. However, this is not enough to generate the IR because the Model Optimizer doesn't know
which attributes of the operation should be saved to the IR. So it is necessary to implement Model Optimizer extensions to
support these operations.
Before going into the extension development, it is necessary to understand what these unsupported operations do according
to the TensorFlow\* framework specification.
* "Complex" - returns a tensor of complex type constructed from two real input tensors specifying the real and imaginary
parts of a complex number.
* "IFFT2D" - returns a tensor with the inverse 2-dimensional discrete Fourier transform over the innermost 2 dimensions of
the input.
* "ComplexAbs" - returns a tensor with the absolute values of an input tensor of complex numbers.
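For intuition, the behavior of these three operations can be reproduced with NumPy (a sketch for illustration only, not the TensorFlow implementation; the `[1, 256, 256]` shapes follow this model):

```python
import numpy as np

# Two real tensors: the real and imaginary parts, as produced by the
# StridedSlice operations in the model.
real = np.random.standard_normal([1, 256, 256]).astype(np.float32)
imag = np.random.standard_normal([1, 256, 256]).astype(np.float32)

z = real + 1j * imag                       # "Complex": build a complex tensor
spatial = np.fft.ifft2(z, axes=(-2, -1))   # "IFFT2D": inverse FFT over the innermost 2 dims
result = np.abs(spatial)                   # "ComplexAbs": magnitude, a real tensor again

print(result.shape)  # (1, 256, 256)
```

Only "Complex" and "IFFT2D" produce complex-valued tensors; "ComplexAbs" returns the graph to real values.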
The part of the model with all three unsupported operations is depicted below:

This model uses complex numbers during inference, but the Inference Engine does not support tensors of this data type. So
it is necessary to find a way to avoid using tensors of this type in the model. Fortunately, the complex tensor
appears only as the result of the "Complex" operation, is used as input to the "IFFT2D" operation, and is then passed to
"ComplexAbs", which produces a real-value tensor as output. So there are just 3 operations consuming or producing complex
tensors in the model.
Let's design an OpenVINO operation "FFT" which gets a single real-number tensor describing the complex input and
produces a single real-number tensor describing the complex output. This way the fact that the model uses complex
numbers is hidden inside the "FFT" operation implementation. The operation gets a tensor of shape `[N, H, W, 2]` and
produces an output tensor of the same shape, where the innermost dimension contains pairs of real numbers describing
a complex number (its real and imaginary parts). As we will see later, this operation allows us to support the
model. The implementation of the Model Optimizer operation should be saved to the `mo_extensions/ops/FFT.py` file:
@snippet FFT.py fft:operation
The attribute `inverse` is a flag specifying the type of FFT to apply: forward or inverse.
See the "Model Optimizer Operation" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
detailed instructions on how to implement the operation.
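To make the packed real/imaginary layout concrete, here is a NumPy reference sketch of the semantics designed for the "FFT" operation (an illustration under the assumptions above, not the actual extension code): input and output are real tensors of shape `[N, H, W, 2]`, and all complex arithmetic is hidden inside.

```python
import numpy as np

def fft_reference(packed, inverse=True):
    """Reference semantics for the custom "FFT" operation: the innermost
    dimension of the [N, H, W, 2] tensor holds the (real, imaginary) pair
    of each complex number."""
    # Unpack the real tensor into a complex tensor of shape [N, H, W].
    z = packed[..., 0] + 1j * packed[..., 1]
    # Apply the inverse or forward 2D FFT over the innermost two spatial dims.
    out = np.fft.ifft2(z, axes=(-2, -1)) if inverse else np.fft.fft2(z, axes=(-2, -1))
    # Pack the complex result back into the [N, H, W, 2] real layout.
    return np.stack([out.real, out.imag], axis=-1)

inp = np.random.standard_normal([1, 256, 256, 2]).astype(np.float32)
out = fft_reference(inp, inverse=True)
print(out.shape)  # (1, 256, 256, 2), same shape as the input
```

Note that applying the forward transform to the inverse-transformed tensor recovers the original input, which matches the `inverse` attribute acting as a simple mode switch.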
Now it is necessary to implement an extractor for the "IFFT2D" operation according to the
"Operation Extractor" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md). The
following snippet provides two extractors: one for "IFFT2D" and another for "FFT2D"; however, only one of them is used
in this example. The implementation should be saved to the file `mo_extensions/front/tf/FFT_ext.py`.
@snippet FFT_ext.py fft_ext:extractor
> **NOTE:** The graph is in an inconsistent state after extracting node attributes because, according to the original
> "IFFT2D" operation semantics, it should have an input consuming a tensor of complex numbers, but the extractor instantiated an
> "FFT" operation which expects a real tensor with a specific layout. This inconsistency will be resolved while applying
> the front phase transformations discussed below.
The output shape of the "AddV2" operation from the picture above is `[N, H, W, 2]`, where the innermost dimension
contains pairs of real numbers describing a complex number (its real and imaginary parts). The following "StridedSlice"
operations split the input tensor into 2 parts to get a tensor of real parts and a tensor of imaginary parts, which are then
consumed by the "Complex" operation to produce a tensor of complex numbers. These "StridedSlice" and "Complex"
operations can be removed so the "FFT" operation will get a real-value tensor encoding complex numbers. To achieve this,
we implement a front phase transformation which searches for a pattern of two "StridedSlice" operations with specific
attributes producing data to a "Complex" operation and removes it from the graph. Refer to the
"Pattern-Defined Front Phase Transformations" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for more
information on how this type of transformation works. The code snippet should be saved to the file
`mo_extensions/front/tf/Complex.py`.
@snippet Complex.py complex:transformation
> **NOTE:** The graph is in an inconsistent state because the "ComplexAbs" operation consumes a complex-value tensor but
> "FFT" produces a real-value tensor.
Now let's implement a transformation which replaces the "ComplexAbs" operation with a sub-graph of primitive operations
that calculate the result using the following formula: \f$module(z) = \sqrt{real(z) \cdot real(z) + imag(z) \cdot imag(z)}\f$.
The original "IFFT2D" operation produces a tensor of complex values, but the "FFT" operation produces a real-value tensor with
the same format and shape as its input. So the input shape for "ComplexAbs" will be `[N, H, W, 2]`,
with the innermost dimension containing the real and imaginary parts of a complex number. To calculate
the absolute values for the complex tensor, we do the following:
1. Raise all elements to the power of 2.
2. Calculate a reduced sum over the innermost dimension.
3. Calculate a square root.
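The three steps above can be checked numerically with NumPy (a sketch to confirm the decomposition matches the complex magnitude; the tensor shape follows the model):

```python
import numpy as np

x = np.random.standard_normal([1, 256, 256, 2]).astype(np.float32)

# 1. Raise all elements to the power of 2.
squared = x ** 2
# 2. Calculate a reduced sum over the innermost dimension (real^2 + imag^2).
summed = np.sum(squared, axis=-1)
# 3. Calculate a square root -> the magnitude of each complex number.
decomposed = np.sqrt(summed)

# Matches "ComplexAbs" applied to the equivalent complex tensor.
reference = np.abs(x[..., 0] + 1j * x[..., 1])
print(np.allclose(decomposed, reference, atol=1e-5))  # True
```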
The implementation should be saved to the file `mo_extensions/front/tf/ComplexAbs.py` and is provided below:
@snippet ComplexAbs.py complex_abs:transformation
Now it is possible to convert the model using the following command line:
```bash
./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/
```
The sub-graph corresponding to the originally unsupported one is depicted in the image below:

> **NOTE:** The Model Optimizer converted the model from NHWC to NCHW layout, which is why the dimension with
> the value 2 moved to another position.
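The layout change can be illustrated with NumPy (a sketch; the axis permutation `(0, 3, 1, 2)` is the standard NHWC-to-NCHW transpose):

```python
import numpy as np

# NHWC tensor as in the original TensorFlow model: [N, H, W, 2].
nhwc = np.zeros([1, 256, 256, 2], dtype=np.float32)

# NCHW layout used in the IR: the innermost dimension of size 2
# moves to the channel position.
nchw = np.transpose(nhwc, (0, 3, 1, 2))
print(nchw.shape)  # (1, 2, 256, 256)
```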
### Inference Engine Extension Implementation
Now it is necessary to implement the extension for the CPU plugin with the "FFT" operation introduced previously. The code
below is based on the template extension described in
[Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
#### CMake Build File
The first step is to create a CMake configuration file which builds the extension. The content of the "CMakeLists.txt"
file is the following:
@snippet ../template_extension/CMakeLists.txt cmake:extension
The CPU FFT kernel implementation uses OpenCV to perform the FFT, which is why the extension library is linked with
"opencv_core", which comes with OpenVINO.
#### Custom nGraph Operation "FFT" Implementation
The next step is to create the nGraph operation FFT. The header file "fft_op.hpp" has the following content:
@snippet ../template_extension/fft_op.hpp fft_op:header
The operation has just one boolean attribute, `inverse`. The implementations of the necessary nGraph operation functions are
in the "fft_op.cpp" file with the following content:
@snippet ../template_extension/fft_op.cpp fft_op:implementation
Refer to [Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) for more details.
#### CPU FFT Kernel Implementation
The operation implementation for CPU plugin uses OpenCV to perform the FFT. The header file "fft_kernel.hpp" has the
following content:
@snippet ../template_extension/fft_kernel.hpp fft_kernel:header
The "fft_kernel.cpp" file with the CPU implementation has the following content:
@snippet ../template_extension/fft_kernel.cpp fft_kernel:implementation
Refer to [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more details.
#### Extension Library Implementation
The last step is to create the extension library files "extension.cpp" and "extension.hpp" which will include the FFT
operation for the CPU plugin. The code of the library is described in [Extension Library](../IE_DG/Extensibility_DG/Extension.md).
### Building and Running the Custom Extension
To build the extension, run the following:
```bash
mkdir build && cd build
source /opt/intel/openvino/bin/setupvars.sh
cmake .. -DCMAKE_BUILD_TYPE=Release
make --jobs=$(nproc)
```
The result of this command is a compiled shared library (`.so` or `.dll`). It should be loaded in the
application using the `Core` class instance method `AddExtension` like this:
`core.AddExtension(std::make_shared<Extension>(compiled_library_file_name), "CPU");`.
To test that the extension is implemented correctly, we can run the "mri_reconstruction_demo.py" script with the following content:
@snippet mri_reconstruction_demo.py mri_demo:demo
The script can be executed using the following command line:
```bash
python3 mri_reconstruction_demo.py \
  -m <PATH_TO_IR>/wnet_20.xml \
  -i <PATH_TO_SAMPLE_MRI_IMAGE>.npy \
  -p <Hybrid-CS-Model-MRI_repo>/Data/sampling_mask_20perc.npy \
  -l <PATH_TO_BUILD_DIR>/libtemplate_extension.so \
  -d CPU
```
## Additional Resources
- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
* [GNA] Documentation updates for 2021.1 (#2460)
* [GNA] Documentation updates for 2021.1
* Take Mike's comments into account
* More fixes according to review
* Fix processor generation names
* update api layouts
* Added new index page with overview
* Changed CMake and Python versions
* Fixed links
* some layout changes
* some layout changes
* nGraph Python API tutorial (#2500)
* nGraph Python API tutorial
* Tweaks
* Code review comments
* Code review comments
* some layout changes
* COnverted svg images to png
* layouts
* update layout
* Added a label for nGraph_Python_API.md
* fixed links
* Fixed image
* First draft of nGraph documentation (#2271)
* First draft of nGraph documentation
* updated according to review comments
* Updated
* Reviewed the nGraph Transformation section, added missing images
* Update nGraph_dg.md
* Delete python_api.md
Removed since there is already the nGraph_Python_API.md document with a comprehensive overview.
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: CCR\avladimi <anastasiya.ageeva@intel.com>
* Feature/azaytsev/docs 2021 1 (#2560)
* Removed FPGA from the documentation
* Updated according to CVS-38225
* Added logo info to the Legal_Information, updated Ubuntu, CentOS supported versions
* Updated supported Intel® Core™ processors list
* Added new index page with overview
* Changed CMake and Python versions
* Fixed links
* COnverted svg images to png
* Added a label for nGraph_Python_API.md
* fixed links
* Fixed image
* Update SW requirements in build instructions and change latest release to 2021.1 (#2565)
* removed links to ../IE_DG/Introduction.md
* Removed links to tools overview page as removed
* some changes
* Remove link to Integrate_your_kernels_into_IE.md
* remove openvino_docs_IE_DG_Graph_debug_capabilities from layout as it was removed
* Fixed links to images (#2569)
* update layouts
* Added deprecation note for PassConfig class (#2593)
* Post-release fixes and installation path changes
* Added pip install documentation (#2465)
* Added pip install documentation
* Change references
* tiny fixes of links
* Update installing-openvino-pip.md
Co-authored-by: Alina Alborova <alina.alborova@intel.com>
* Update OpenVino ONNX CI check (#2599)
* Update OpenVino ONNX CI
* Change parallel execution to single
* Enlarge timeout
* Remove timeout
* Add timeout to test execution
* Added PIP installation and Build from Source to the layout
* Fixed formatting issue, removed broken link
* Renamed section EXAMPLES to RESOURCES according to review comments
* add mo faq navigation by url param
* Skip hanging test case of OpenVino ONNX CI (#2608)
* Update OpenVino ONNX CI
* Change parallel execution to single
* Enlarge timeout
* Remove timeout
* Add timeout to test execution
* Skip hanging test
* Add description to skip issue
* Removed DLDT description
* Replaced wrong links
* MInor fix for path to the cpp samples
* fixes
* Update ops.py
* Fix style
* Improve pip installation guide (#2644)
* Improve pip installation guide
* Updated after comments
* Feature/ntyukaev/separate layout (#2629)
* convert to doxygen comments
* layouts and code comments
* separate layout
* Changed layouts
* Removed FPGA from the documentation
* Updated according to CVS-38225
* some changes
* Made changes to benchmarks according to review comments
* Added logo info to the Legal_Information, updated Ubuntu, CentOS supported versions
* Updated supported Intel® Core™ processors list
* Fixed table formatting
* update api layouts
* Added new index page with overview
* Changed CMake and Python versions
* Fixed links
* some layout changes
* some layout changes
* some layout changes
* COnverted svg images to png
* layouts
* update layout
* Added a label for nGraph_Python_API.md
* fixed links
* Fixed image
* removed links to ../IE_DG/Introduction.md
* Removed links to tools overview page as removed
* some changes
* Remove link to Integrate_your_kernels_into_IE.md
* remove openvino_docs_IE_DG_Graph_debug_capabilities from layout as it was removed
* update layouts
* Post-release fixes and installation path changes
* Added PIP installation and Build from Source to the layout
* Fixed formatting issue, removed broken link
* Renamed section EXAMPLES to RESOURCES according to review comments
* add mo faq navigation by url param
* Removed DLDT description
* Replaced wrong links
* MInor fix for path to the cpp samples
* fixes
* Update ops.py
* Fix style
Co-authored-by: Nikolay Tyukaev <ntyukaev_lo@jenkins.inn.intel.com>
Co-authored-by: Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: aalborov <alina.alborova@intel.com>
Co-authored-by: Rafal Blaczkowski <rafal.blaczkowski@intel.com>
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
* Fixed CVS-35316 (#2072) (#2670)
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
* [install_dependencies.sh] install latest cmake if current version is lower 3.13 (#2695) (#2701)
* [install_dependencies.sh] install latest cmake if current version is lower 3.13
* add shellcheck for Ubuntu
* install python 2.7 for Ubuntu
* Removed redundant file
* Exclude files that we didn't changed from merging
Co-authored-by: Sergey Shlyapnikov <sergey.shlyapnikov@intel.com>
Co-authored-by: Denis Orlov <denis.orlov@intel.com>
Co-authored-by: Kamil Magierski <kamil.magierski@intel.com>
Co-authored-by: Anna Alberska <anna.alberska@intel.com>
Co-authored-by: Edward Shogulin <edward.shogulin@intel.com>
Co-authored-by: Artyom Anokhov <artyom.anokhov@intel.com>
Co-authored-by: Tomasz Dołbniak <tomasz.dolbniak@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Roman Vyunov (Intel) <roman.vyunov@intel.com>
Co-authored-by: Maksim Doronin <maksim.doronin@intel.com>
Co-authored-by: Svetlana Dolinina <svetlana.a.dolinina@intel.com>
Co-authored-by: Evgeny Talanin <evgeny.talanin@intel.com>
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
Co-authored-by: Maxim Kurin <maxim.kurin@intel.com>
Co-authored-by: Nikolay Shchegolev <nikolay.shchegolev@intel.com>
Co-authored-by: Andrew Bakalin <andrew.bakalin@intel.com>
Co-authored-by: Gorokhov Dmitriy <dmitry.gorokhov@intel.com>
Co-authored-by: Evgeny Latkin <evgeny.latkin@intel.com>
Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Alexey Suhov <alexey.suhov@intel.com>
Co-authored-by: Alexander Novak <sasha-novak@yandex.ru>
Co-authored-by: Vladislav Vinogradov <vlad.vinogradov@intel.com>
Co-authored-by: Vladislav Volkov <vladislav.volkov@intel.com>
Co-authored-by: Vladimir Gavrilov <vladimir.gavrilov@intel.com>
Co-authored-by: Zoe Cayetano <zoe.cayetano@intel.com>
Co-authored-by: Dmitrii Denisov <dmitrii.denisov@intel.com>
Co-authored-by: Irina Efode <irina.efode@intel.com>
Co-authored-by: Evgeny Lazarev <evgeny.lazarev@intel.com>
Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
Co-authored-by: Nikolay Tyukaev <ntyukaev_lo@jenkins.inn.intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: Gleb Kazantaev <gleb.kazantaev@intel.com>
Co-authored-by: Rafal Blaczkowski <rafal.blaczkowski@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
Co-authored-by: Maksim Proshin <mvproshin@gmail.com>
Co-authored-by: Alina Alborova <alina.alborova@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Co-authored-by: azhogov <alexander.zhogov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
Co-authored-by: Michał Karzyński <4430709+postrational@users.noreply.github.com>
Co-authored-by: Anton Romanov <anton.romanov@intel.com>
2020-10-27 00:41:46 +03:00
- [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
- For IoT libraries and code samples, see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
## Converting Models
- [Convert Your Caffe* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
- [Convert Your Kaldi* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
- [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
- [Convert Your MXNet* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
- [Convert Your ONNX* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)
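
Each page above documents framework-specific conversion flags, but the basic Model Optimizer invocation is the same for every framework. The sketch below is illustrative only, assuming the default 2021.x installation layout; `<INSTALL_DIR>` and the model file name are placeholders you must substitute:

```shell
# Placeholder paths: replace <INSTALL_DIR> with your OpenVINO installation root
# and model.onnx with your own model file.
cd <INSTALL_DIR>/deployment_tools/model_optimizer

# Generate an IR (.xml topology plus .bin weights) in the ./ir directory
python3 mo.py --input_model model.onnx --output_dir ./ir --model_name my_model
```

If the model contains operations unknown to the Model Optimizer, this step fails with an "unsupported operation" error until you register an extension for those operations, as described earlier in this guide.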