Updated list of supported operations. (#6981)

* Updated list of supported layers.

* Removed Crop and softsign from the Kaldi list.

* Updated limitations.

* Corrected limitations.

* Updated limitations.

* Added Einsum, corrected Where.

* Apply suggestions from code review

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>


@@ -10,8 +10,11 @@ Standard Caffe\* layers:
| BN | No |
| BatchNorm | No |
| Bias | No |
| Binarization (Intel experimental) | No |
| Concat | No |
| Convolution | No |
| ConvolutionBinary | No |
| Crop | No |
| Deconvolution | No |
| DetectionOutput | No |
| Dropout | Not needed for inference |
@@ -21,14 +24,25 @@ Standard Caffe\* layers:
| InnerProduct | No |
| Input | No |
| LRN | No |
| Normalize | No |
| Python | Supported only for the Python Proposal operation |
| Permute | No |
| Pooling | No |
| Power | No |
| PReLU | No |
| PriorBox | No |
| PriorBoxClustered | No |
| Proposal | No |
| PSROIPooling | No |
| ROIPooling | No |
| RegionYolo | No |
| ReorgYolo | No |
| ReLU | No |
| Resample | No |
| Reshape | No |
| Scale | No |
| ShuffleChannel | No |
| Sigmoid | No |
| Slice | No |
| Softmax | No |
| Tile | No |
@@ -41,31 +55,44 @@ Standard MXNet\* symbols:
| Symbol Name in MXNet\*| Limitations|
| :----------| :----------|
| _Plus | No |
| _contrib_box_nms | No |
| _contrib_DeformableConvolution | No |
| _contrib_DeformablePSROIPooling | No |
| _contrib_MultiBoxDetection | "force_suppress" = 1 is not supported; non-default variances are not supported |
| _contrib_MultiBoxPrior | No |
| _contrib_Proposal | No |
| _copy | Not needed for inference |
| _div_scalar | No |
| _greater_scalar | No |
| _minus_scalar | No |
| _mul_scalar | No |
| _plus_scalar | No |
| _rnn_param_concat | No |
| _arange | No |
| _contrib_AdaptiveAvgPooling2D | Converted to Average Pooling with fixed paddings |
| _maximum | No |
| _minimum | No |
| _np_roll | No |
| _zeros | No |
| add_n | No |
| arccosh | No |
| arcsinh | No |
| arctanh | No |
| broadcast_add | No |
| broadcast_div | No |
| broadcast_mul | No |
| broadcast_sub | No |
| BlockGrad | No |
| cumsum | No |
| div_scalar | No |
| elementwise_sub | No |
| elemwise_add | No |
| elemwise_mul | No |
| elemwise_sub | No |
| exp | No |
| expand_dims | No |
| greater_scalar | No |
| max | No |
| minus_scalar | No |
| null | Not needed for inference |
| repeat | No |
@@ -74,9 +101,11 @@ Standard MXNet\* symbols:
| round | No |
| sigmoid | No |
| slice | No |
| SliceChannel | No |
| slice_axis | No |
| slice_channel | No |
| slice_like | No |
| softmax | No |
| stack | No |
| swapaxis | No |
| tile | No |
@@ -100,6 +129,7 @@ Standard MXNet\* symbols:
| L2Normalization | Only 4D input is supported |
| LRN | No |
| LeakyReLU | Supported "act_type" = "prelu", "elu", "leaky", "gelu" |
| ones_like | No |
| Pad | No |
| Pooling | No |
| ROIPooling | No |
@@ -113,6 +143,7 @@ Standard MXNet\* symbols:
| Tile | No |
| UpSampling | No |
| Where | No |
| zeros_like | No |
## TensorFlow\* Supported Operations
@@ -123,18 +154,27 @@ Standard TensorFlow\* operations:
| Operation Name in TensorFlow\* | Limitations|
| :----------| :----------|
| Abs | No |
| Acosh | No |
| Add | No |
| AddV2 | No |
| AddN | No |
| All | No |
| ArgMax | No |
| ArgMin | No |
| Asinh | No |
| Assert | Not needed for inference |
| Assign | Not needed for inference |
| AssignSub | Not needed for inference |
| Atanh | No |
| AvgPool | No |
| AvgPoolV2 | Supported only for constant-foldable kernel_size and strides inputs |
| AvgPool3D | No |
| BatchMatMul | No |
| BatchMatMulV2 | No |
| BatchToSpaceND | No |
| BiasAdd | No |
| BlockLSTM | No |
| Bucketize | CPU only |
| BroadcastTo | No |
| Cast | No |
@@ -144,14 +184,21 @@ Standard TensorFlow\* operations:
| Const | No |
| Conv2D | No |
| Conv2DBackpropInput | No |
| Conv3D | No |
| Conv3DBackpropInputV2 | No |
| Cos | No |
| Cosh | No |
| CropAndResize | "method" = "bilinear" only |
| CTCGreedyDecoder | Supported only with decoded indices output in a dense format |
| CTCLoss | Supported only with decoded indices input in a dense format |
| CumSum | No |
| DepthToSpace| No |
| DepthwiseConv2dNative| No |
| Einsum | Supported only with an equation that does not contain repeated labels within a subscript |
| Elu | No |
| Enter | Supported only when it is fused to the TensorIterator layer |
| Equal | No |
| Erf | No |
| Exit | Supported only when it is fused to the TensorIterator layer |
| Exp | No |
| ExpandDims | No |
@@ -163,34 +210,43 @@ Standard TensorFlow\* operations:
| FFT | Supported only when it is part of a sub-graph of the special form |
| FFT2D | Supported only when it is part of a sub-graph of the special form |
| FFT3D | Supported only when it is part of a sub-graph of the special form |
| FIFOQueueV2 | Supported only when it is part of a sub-graph of the special form |
| Fill | No |
| Floor | No |
| FloorDiv | No |
| FloorMod | No |
| FusedBatchNorm | No |
| FusedBatchNormV2 | No |
| FusedBatchNormV3 | No |
| Gather | No |
| GatherNd | No |
| GatherTree | No |
| GatherV2 | No |
| Greater | No |
| GreaterEqual | No |
| Identity | Not needed for shape inference |
| IdentityN | No |
| IFFT | Supported only when it is part of a sub-graph of the special form |
| IFFT2D | Supported only when it is part of a sub-graph of the special form |
| IFFT3D | Supported only when it is part of a sub-graph of the special form |
| IteratorGetNext | Supported only when it is part of a sub-graph of the special form |
| LRN | No |
| LeakyRelu | No |
| Less | No |
| LessEqual | No |
| Log | No |
| Log1p | No |
| LogicalAnd | No |
| LogicalOr | No |
| LogicalNot | No |
| LogSoftmax | No |
| LookupTableInsertV2 | Supported only when it is part of a sub-graph of the special form |
| LoopCond | Supported only when it is fused to the TensorIterator layer |
| MatMul | No |
| Max | No |
| MaxPool | No |
| MaxPoolV2 | Supported only for constant-foldable kernel_size and strides inputs |
| MaxPool3D | No |
| Maximum | No |
| Mean | No |
| Merge | Supported only when it is fused to the TensorIterator layer |
@@ -200,9 +256,11 @@ Standard TensorFlow\* operations:
| Mul | No |
| Neg | No |
| NextIteration | Supported only when it is fused to the TensorIterator layer |
| NonMaxSuppressionV2 | No |
| NonMaxSuppressionV3 | No |
| NonMaxSuppressionV4 | No |
| NonMaxSuppressionV5 | No |
| NotEqual | No |
| NoOp | No |
| OneHot | No |
| Pack | No |
@@ -211,9 +269,11 @@ Standard TensorFlow\* operations:
| Placeholder | No |
| PlaceholderWithDefault | No |
| Prod | No |
| QueueDequeueUpToV2 | Supported only when it is part of a sub-graph of the special form |
| Range | No |
| Rank | No |
| RealDiv | No |
| Reciprocal | No |
| Relu | No |
| Relu6 | No |
| Reshape | No |
@@ -221,9 +281,12 @@ Standard TensorFlow\* operations:
| ResizeNearestNeighbor | No |
| ResourceGather| No |
| ReverseSequence | No |
| ReverseV2 | Supported only when it can be converted to the ReverseSequence operation |
| Roll | No |
| Round | No |
| Pow | No |
| Rsqrt | No |
| Select | No |
| Shape | No |
| Sigmoid | No |
| Sin | No |
@@ -234,6 +297,10 @@ Standard TensorFlow\* operations:
| Softplus | No |
| Softsign | No |
| SpaceToBatchND | No |
| SpaceToDepth | No |
| SparseFillEmptyRows | Supported only when it is part of a sub-graph of the special form |
| SparseReshape | Supported only when it is part of a sub-graph of the special form |
| SparseSegmentSum | Supported only when it is part of a sub-graph of the special form |
| SparseToDense | CPU only |
| Split | No |
| SplitV | No |
@@ -242,11 +309,13 @@ Standard TensorFlow\* operations:
| SquaredDifference | No |
| Square| No |
| Squeeze | The case when squeeze axis is not specified is not supported |
| StatelessWhile | No |
| StopGradient | Not needed for shape inference |
| StridedSlice | Supported only for constant-foldable begin, end, and strides inputs |
| Sub | No |
| Sum | No |
| Swish | No |
| swish_f32 | No |
| Switch | Control flow propagation |
| Tan | No |
| Tanh | No |
@@ -260,7 +329,9 @@ Standard TensorFlow\* operations:
| TopKV2 | No |
| Transpose | No |
| Unpack | No |
| Where | No |
| Variable | No |
| VariableV2 | No |
| Where | Supported only when it is part of a sub-graph of the special form |
| ZerosLike | No |
@@ -356,13 +427,15 @@ Standard Kaldi\* Layers:
| :----------| :----------|
| addshift | No |
| affinecomponent | No |
| affinecomponentpreconditionedonline | No |
| affinetransform | No |
| backproptruncationcomponent | No |
| batchnormcomponent | No |
| clipgradientcomponent | Not needed for inference |
| concat | No |
| convolutional1dcomponent | No |
| convolutionalcomponent | No |
| copy | No |
| Crop | No |
| elementwiseproductcomponent | No |
| fixedaffinecomponent | No |
| fixedbiascomponent | No |
@@ -383,9 +456,9 @@ Standard Kaldi\* Layers:
| rectifiedlinearcomponent | No |
| rescale | No |
| sigmoid | No |
| sigmoidcomponent | No |
| softmax | No |
| softmaxComponent | No |
| softsign | No |
| specaugmenttimemaskcomponent | Not needed for inference |
| splicecomponent | No |
| tanhcomponent | No |
@@ -404,12 +477,14 @@ Standard ONNX\* operators:
| Acosh | No |
| Add | No |
| Affine | No |
| And | No |
| ArgMax | No |
| ArgMin | No |
| Asin | No |
| Asinh | No |
| Atan | No |
| Atanh | No |
| ATen | Supported only for the 'embedding_bag' operator |
| AveragePool | No |
| BatchMatMul | No |
| BatchNormalization | No |
@@ -426,6 +501,7 @@ Standard ONNX\* operators:
| Cosh | No |
| Crop | No |
| CumSum | No |
| DepthToSpace | No |
| DequantizeLinear | No |
| DetectionOutput (Intel experimental) | No |
| Div | No |
@@ -433,7 +509,14 @@ Standard ONNX\* operators:
| Elu | No |
| Equal | No |
| Erf | No |
| Exp | No |
| Expand | No |
| ExperimentalDetectronDetectionOutput (Intel experimental) | No |
| ExperimentalDetectronGenerateProposalsSingleImage (Intel experimental) | No |
| ExperimentalDetectronGroupNorm (Intel experimental) | No |
| ExperimentalDetectronPriorGridGenerator (Intel experimental) | No |
| ExperimentalDetectronROIFeatureExtractor (Intel experimental) | No |
| ExperimentalDetectronTopKROIs (Intel experimental) | No |
| FakeQuantize (Intel experimental) | No |
| Fill | No |
| Flatten | No |
@@ -451,6 +534,7 @@ Standard ONNX\* operators:
| HardSigmoid | No |
| Identity | Not needed for inference |
| ImageScaler | No |
| InstanceNormalization | No |
| LRN | No |
| LSTM | Peepholes are not supported |
| LeakyRelu | No |
@@ -461,7 +545,9 @@ Standard ONNX\* operators:
| LogicalOr | No |
| LogSoftmax | No |
| Loop | No |
| LpNormalization | No |
| MatMul | No |
| Max | No |
| MaxPool | No |
| MeanVarianceNormalization | Reduction over the batch dimension is not supported; reduction over all dimensions except the batch and channel ones is obligatory |
| Min | No |
@@ -475,6 +561,7 @@ Standard ONNX\* operators:
| Pad | No |
| Pow | No |
| PriorBox (Intel experimental) | No |
| PriorBoxClustered | No |
| QuantizeLinear | No |
| RNN | No |
| ROIAlign | No |
@@ -506,6 +593,7 @@ Standard ONNX\* operators:
| Softplus | No |
| Softsign | No |
| SpaceToDepth | No |
| Split | No |
| Sqrt | No |
| Squeeze | The case when squeeze axis is not specified is not supported |
| Sub | No |