Enable ReduceL1 and ReduceL2 operations (#1799)
* Initial version of ReduceL1, ReduceL2 and ReduceLp enabling in the MO
* Added operations ReduceL1 and ReduceL2 to nGraph
* Removed ReduceLp. Added ReduceL1 and ReduceL2
* Separated specification of ReduceLp into ReduceL1 and ReduceL2
* Updated ReduceL1 and ReduceL2 specification
* Fixed ReduceL1 and ReduceL2 type prop tests
* Implemented nGraph transformation to decompose ReduceL1 and ReduceL2. Disabled them for CPU and GPU plugins
* Updated supported framework layers
* Added unit tests for ReduceL1 and ReduceL2 reference implementation
* Fixed ReduceXXX operations reference implementation by adding support for a new parameter 'keep_dims'
* Fixed constant folding for v0::Any
* Added ReduceL1 and ReduceL2 to Python API
* Implemented ReduceL1 and ReduceL2 decomposition tests and fixed ReduceL2 decomposition
* Added specific creator for ReduceXXX operations instead of NodeBuilders
* Fixed conversion of ReduceXXX to CNNLayer
* Fixed parser for ReduceLogicalXXX operations
Parent: 7c9815b4c1, commit: 125a462400
@@ -150,6 +150,7 @@ Standard TensorFlow\* operations:
 | ExpandDims | No |
 | ExperimentalSparseWeightedSum | CPU only |
 | ExtractImagePatches | No |
+| EuclideanNorm | No |
 | Fill | No |
 | Floor | No |
 | FusedBatchNorm | No |
@@ -365,6 +366,8 @@ Standard ONNX\* operators:
 | ROIAlign | No |
 | Range | No |
 | Reciprocal | No |
+| ReduceL1 | No |
+| ReduceL2 | No |
 | ReduceMax | No |
 | ReduceMean | No |
 | ReduceMin | No |
@@ -101,7 +101,8 @@ declared in `namespace opset4`.
 * [Range](generation/Range_4.md)
 * [ReLU](activation/ReLU_1.md)
 * [ReadValue](infrastructure/ReadValue_3.md)
-* [ReduceLp](reduction/ReduceLp_4.md)
+* [ReduceL1](reduction/ReduceL1_4.md)
+* [ReduceL2](reduction/ReduceL2_4.md)
 * [ReduceLogicalAnd](reduction/ReduceLogicalAnd_1.md)
 * [ReduceLogicalOr](reduction/ReduceLogicalOr_1.md)
 * [ReduceMax](reduction/ReduceMax_1.md)
@@ -1,10 +1,10 @@
-## ReduceLp <a name="ReduceLp"></a> {#openvino_docs_ops_reduction_ReduceLp_4}
+## ReduceL1 <a name="ReduceL1"></a> {#openvino_docs_ops_reduction_ReduceL1_4}

-**Versioned name**: *ReduceLp-4*
+**Versioned name**: *ReduceL1-4*

 **Category**: *Reduction*

-**Short description**: *ReduceLp* operation performs reduction by finding the Lp norm of the 1st input tensor in slices specified by the 2nd input.
+**Short description**: *ReduceL1* operation performs reduction by finding the L1 norm (sum of absolute values) of the 1st input tensor in slices specified by the 2nd input.

 **Attributes**
@@ -20,9 +20,7 @@

 * **1**: Input tensor x of type *T1*. **Required.**

-* **2**: Scalar or 1D tensor of type *T_IND* with axis indices for the 1st input along which reduction is performed. Accepted range is `[-r, r-1]` where `r` is the rank of the input tensor. All values must be unique; repeats are not allowed. **Required.**
-
-* **3**: Scalar of type *T2* with the value `p`, the order of the normalization. Possible values: `1` for L1 or `2` for L2. **Required.**
+* **2**: Scalar or 1D tensor of type *T_IND* with axis indices for the 1st input along which reduction is performed. Accepted range is `[-r, r - 1]` where `r` is the rank of the input tensor. All values must be unique; repeats are not allowed. **Required.**

 **Outputs**
@@ -30,17 +28,17 @@

 **Types**

-* *T1*: any supported numeric type.
-* *T2*: any supported integer type.
-* *T_IND*: `int64` or `int32`.
+* *T1*: any supported numeric type.
+* *T_IND*: `int64` or `int32`.

 **Detailed Description**

 Each element in the output is the result of reduction by finding the norm along dimensions specified by the 2nd input:

-`output[i0, i1, ..., iN] = Lp[j0, ..., jN](x[j0, ..., jN])`
+`output[i0, i1, ..., iN] = L1[j0, ..., jN](x[j0, ..., jN])`

-Where indices i0, ..., iN run through all valid indices for the 1st input, and the Lp norm `Lp[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by the 2nd input of the operation.
+Where indices i0, ..., iN run through all valid indices for the 1st input, and the L1 norm `L1[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by the 2nd input of the operation.

 Corner cases:

 1. When the 2nd input is an empty list, then this operation does nothing, it is an identity.
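An illustrative NumPy sketch of these semantics (editorial addition, not part of the specification; shapes follow the examples below):

```python
import numpy as np

# ReduceL1 over axes [2, 3] of a 6x12x10x24 input, as in the examples below.
x = np.random.rand(6, 12, 10, 24).astype(np.float32)

# keep_dims=True keeps the reduced axes with size 1 -> shape (6, 12, 1, 1)
np.sum(np.abs(x), axis=(2, 3), keepdims=True)

# keep_dims=False drops the reduced axes -> shape (6, 12)
np.sum(np.abs(x), axis=(2, 3), keepdims=False)
```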
@@ -49,7 +47,7 @@ Corner cases:
 **Example**

 ```xml
-<layer id="1" type="ReduceLp" ...>
+<layer id="1" type="ReduceL1" ...>
     <data keep_dims="True" />
     <input>
         <port id="0">
@@ -61,10 +59,9 @@ Corner cases:
         <port id="1">
             <dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
         </port>
-        <port id="2"/>
     </input>
     <output>
-        <port id="3">
+        <port id="2">
             <dim>6</dim>
             <dim>12</dim>
             <dim>1</dim>
@@ -75,7 +72,7 @@ Corner cases:
 ```

 ```xml
-<layer id="1" type="ReduceLp" ...>
+<layer id="1" type="ReduceL1" ...>
     <data keep_dims="False" />
     <input>
         <port id="0">
@@ -87,10 +84,9 @@ Corner cases:
         <port id="1">
             <dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
         </port>
-        <port id="2"/>
     </input>
     <output>
-        <port id="3">
+        <port id="2">
             <dim>6</dim>
             <dim>12</dim>
         </port>
@@ -99,7 +95,7 @@ Corner cases:
 ```

 ```xml
-<layer id="1" type="ReduceLp" ...>
+<layer id="1" type="ReduceL1" ...>
     <data keep_dims="False" />
     <input>
         <port id="0">
@@ -111,10 +107,9 @@ Corner cases:
         <port id="1">
             <dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
         </port>
-        <port id="2"/>
     </input>
     <output>
-        <port id="3">
+        <port id="2">
             <dim>6</dim>
             <dim>10</dim>
             <dim>24</dim>
@@ -124,7 +119,7 @@ Corner cases:
 ```

 ```xml
-<layer id="1" type="ReduceLp" ...>
+<layer id="1" type="ReduceL1" ...>
     <data keep_dims="False" />
     <input>
         <port id="0">
@@ -136,10 +131,9 @@ Corner cases:
         <port id="1">
             <dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
         </port>
-        <port id="2"/>
     </input>
     <output>
-        <port id="3">
+        <port id="2">
             <dim>6</dim>
             <dim>12</dim>
             <dim>24</dim>
docs/ops/reduction/ReduceL2_4.md (new file, 143 lines)
@@ -0,0 +1,143 @@
## ReduceL2 <a name="ReduceL2"></a> {#openvino_docs_ops_reduction_ReduceL2_4}

**Versioned name**: *ReduceL2-4*

**Category**: *Reduction*

**Short description**: *ReduceL2* operation performs reduction by finding the L2 norm (square root of the sum of squares) of the 1st input tensor in slices specified by the 2nd input.

**Attributes**

* *keep_dims*

    * **Description**: If set to `True`, it holds the axes that are used for reduction. For each such axis, the output dimension is equal to 1.
    * **Range of values**: True or False
    * **Type**: `boolean`
    * **Default value**: False
    * **Required**: *no*

**Inputs**

* **1**: Input tensor x of type *T1*. **Required.**

* **2**: Scalar or 1D tensor of type *T_IND* with axis indices for the 1st input along which reduction is performed. Accepted range is `[-r, r - 1]` where `r` is the rank of the input tensor. All values must be unique; repeats are not allowed. **Required.**

**Outputs**

* **1**: Tensor of the same type as the 1st input tensor and `shape[i] = shapeOf(input1)[i]` for all `i` that is not in the list of axes from the 2nd input. For dimensions from the 2nd input tensor, `shape[i] == 1` if `keep_dims == True`, or the `i`-th dimension is removed from the output otherwise.

**Types**

* *T1*: any supported floating-point type.
* *T_IND*: `int64` or `int32`.

**Detailed Description**

Each element in the output is the result of reduction by finding the L2 norm along dimensions specified by the 2nd input:

`output[i0, i1, ..., iN] = L2[j0, ..., jN](x[j0, ..., jN])`

Where indices i0, ..., iN run through all valid indices for the 1st input, and the L2 norm `L2[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by the 2nd input of the operation.

Corner cases:

1. When the 2nd input is an empty list, then this operation does nothing, it is an identity.
2. When the 2nd input contains all dimensions of the 1st input, this means that a single reduction scalar value is calculated for the entire input tensor.
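The same semantics sketched in NumPy (editorial addition, not part of the specification; shapes follow the first example below):

```python
import numpy as np

# ReduceL2 over axes [2, 3] of a 6x12x10x24 input, as in the first example below.
x = np.random.rand(6, 12, 10, 24).astype(np.float32)

# keep_dims=True -> shape (6, 12, 1, 1); with keepdims=False the result is (6, 12)
np.sqrt(np.sum(np.square(x), axis=(2, 3), keepdims=True))
```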

**Example**

```xml
<layer id="1" type="ReduceL2" ...>
    <data keep_dims="True" />
    <input>
        <port id="0">
            <dim>6</dim>
            <dim>12</dim>
            <dim>10</dim>
            <dim>24</dim>
        </port>
        <port id="1">
            <dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
        </port>
    </input>
    <output>
        <port id="2">
            <dim>6</dim>
            <dim>12</dim>
            <dim>1</dim>
            <dim>1</dim>
        </port>
    </output>
</layer>
```

```xml
<layer id="1" type="ReduceL2" ...>
    <data keep_dims="False" />
    <input>
        <port id="0">
            <dim>6</dim>
            <dim>12</dim>
            <dim>10</dim>
            <dim>24</dim>
        </port>
        <port id="1">
            <dim>2</dim> <!-- value is [2, 3] that means independent reduction in each channel and batch -->
        </port>
    </input>
    <output>
        <port id="2">
            <dim>6</dim>
            <dim>12</dim>
        </port>
    </output>
</layer>
```

```xml
<layer id="1" type="ReduceL2" ...>
    <data keep_dims="False" />
    <input>
        <port id="0">
            <dim>6</dim>
            <dim>12</dim>
            <dim>10</dim>
            <dim>24</dim>
        </port>
        <port id="1">
            <dim>1</dim> <!-- value is [1] that means independent reduction in each channel and spatial dimensions -->
        </port>
    </input>
    <output>
        <port id="2">
            <dim>6</dim>
            <dim>10</dim>
            <dim>24</dim>
        </port>
    </output>
</layer>
```

```xml
<layer id="1" type="ReduceL2" ...>
    <data keep_dims="False" />
    <input>
        <port id="0">
            <dim>6</dim>
            <dim>12</dim>
            <dim>10</dim>
            <dim>24</dim>
        </port>
        <port id="1">
            <dim>1</dim> <!-- value is [-2] that means independent reduction in each channel, batch and second spatial dimension -->
        </port>
    </input>
    <output>
        <port id="2">
            <dim>6</dim>
            <dim>12</dim>
            <dim>24</dim>
        </port>
    </output>
</layer>
```
@@ -23,7 +23,7 @@
 #include <legacy/details/ie_cnn_network_tools.h>
 #include <ngraph/opsets/opset2.hpp>
 #include <ngraph/opsets/opset3.hpp>
-#include <ngraph/op/gelu.hpp>
+#include <ngraph/opsets/opset4.hpp>
 #include <ngraph/pass/manager.hpp>
 #include <generic_ie.hpp>
 #include <transformations/tensor_iterator_transformations/apply_transformations_to_ti_body.hpp>
@@ -90,7 +90,9 @@ InferenceEngine::ICNNNetwork::Ptr clDNNEngine::CloneAndTransformNetwork(const In
         return std::dynamic_pointer_cast<const ::ngraph::opset2::Gelu>(node) ||
                std::dynamic_pointer_cast<const ::ngraph::opset3::ShuffleChannels>(node) ||
                std::dynamic_pointer_cast<const ::ngraph::opset2::BatchToSpace>(node) ||
-               std::dynamic_pointer_cast<const ::ngraph::opset2::SpaceToBatch>(node);
+               std::dynamic_pointer_cast<const ::ngraph::opset2::SpaceToBatch>(node) ||
+               std::dynamic_pointer_cast<const ::ngraph::opset4::ReduceL1>(node) ||
+               std::dynamic_pointer_cast<const ::ngraph::opset4::ReduceL2>(node);
     };
     auto nGraphFunc = clonedNetwork->getFunction();
     // Disable shape inference (WA for generic operations)
@@ -525,6 +525,34 @@ InferenceEngine::details::CNNLayerCreator::CNNLayerCreator(const std::shared_ptr
         res->params = params;
         return res;
     });
+
+    addSpecificCreator({"ReduceMin", "ReduceMax", "ReduceMean", "ReduceProd", "ReduceSum", "ReduceL1", "ReduceL2"},
+                       [](const std::shared_ptr<::ngraph::Node>& node, const std::map<std::string, std::string> params) -> CNNLayerPtr {
+        LayerParams attrs = {node->get_friendly_name(), node->description(), details::convertPrecision(node->get_output_element_type(0))};
+        auto reduce_node = std::dynamic_pointer_cast<ngraph::op::util::ArithmeticReductionKeepDims>(node);
+        auto res = std::make_shared<InferenceEngine::ReduceLayer>(attrs);
+        res->params = params;
+        res->params["keep_dims"] = reduce_node->get_keep_dims() ? "True" : "False";
+        return res;
+    });
+
+    addSpecificCreator({"ReduceLogicalAnd"}, [](const std::shared_ptr<::ngraph::Node>& node, const std::map<std::string, std::string> params) -> CNNLayerPtr {
+        LayerParams attrs = {node->get_friendly_name(), "ReduceAnd", details::convertPrecision(node->get_output_element_type(0))};
+        auto reduce_node = std::dynamic_pointer_cast<ngraph::op::util::LogicalReductionKeepDims>(node);
+        auto res = std::make_shared<InferenceEngine::ReduceLayer>(attrs);
+        res->params = params;
+        res->params["keep_dims"] = reduce_node->get_keep_dims() ? "True" : "False";
+        return res;
+    });
+
+    addSpecificCreator({"ReduceLogicalOr"}, [](const std::shared_ptr<::ngraph::Node>& node, const std::map<std::string, std::string> params) -> CNNLayerPtr {
+        LayerParams attrs = {node->get_friendly_name(), "ReduceOr", details::convertPrecision(node->get_output_element_type(0))};
+        auto reduce_node = std::dynamic_pointer_cast<ngraph::op::util::LogicalReductionKeepDims>(node);
+        auto res = std::make_shared<InferenceEngine::ReduceLayer>(attrs);
+        res->params = params;
+        res->params["keep_dims"] = reduce_node->get_keep_dims() ? "True" : "False";
+        return res;
+    });
 }

 CNNLayerPtr InferenceEngine::details::CNNLayerCreator::create() {
@@ -613,11 +641,6 @@ void convertFunctionToICNNNetwork(const std::shared_ptr<const ::ngraph::Function
         std::make_shared<Builder::NodeConverter<::ngraph::op::ReLUIE>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::Range>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::ReverseSequence>>(),
-        std::make_shared<Builder::NodeConverter<::ngraph::op::v1::ReduceMin>>(),
-        std::make_shared<Builder::NodeConverter<::ngraph::op::v1::ReduceMax>>(),
-        std::make_shared<Builder::NodeConverter<::ngraph::op::v1::ReduceMean>>(),
-        std::make_shared<Builder::NodeConverter<::ngraph::op::v1::ReduceProd>>(),
-        std::make_shared<Builder::NodeConverter<::ngraph::op::v1::ReduceSum>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::ResampleV2>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::RegionYolo>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::ReorgYolo>>(),
@@ -648,8 +671,6 @@ void convertFunctionToICNNNetwork(const std::shared_ptr<const ::ngraph::Function
         std::make_shared<Builder::NodeConverter<::ngraph::op::HardSigmoid>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::HardSigmoid_IE>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::v1::LogicalNot>>(),
-        std::make_shared<Builder::NodeConverter<::ngraph::op::v1::ReduceLogicalAnd>>(),
-        std::make_shared<Builder::NodeConverter<::ngraph::op::v1::ReduceLogicalOr>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::ShuffleChannels>>(),
         std::make_shared<Builder::NodeConverter<::ExecGraphInfoSerialization::ExecutionNode>>(),
     };
@@ -1799,66 +1799,6 @@ CNNLayer::Ptr NodeConverter<ngraph::op::ReorgYolo>::createLayer(const std::share
     return res;
 }

-template <>
-CNNLayer::Ptr NodeConverter<ngraph::op::v1::ReduceMin>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
-    LayerParams params = {layer->get_friendly_name(), "ReduceMin",
-                          details::convertPrecision(layer->get_output_element_type(0))};
-    auto res = std::make_shared<InferenceEngine::ReduceLayer>(params);
-    auto castedLayer = ngraph::as_type_ptr<ngraph::op::v1::ReduceMin>(layer);
-    if (castedLayer == nullptr) THROW_IE_EXCEPTION << "Cannot get " << params.type << " layer " << params.name;
-
-    res->params["keep_dims"] = castedLayer->get_keep_dims() ? "true" : "false";
-    return res;
-}
-
-template <>
-CNNLayer::Ptr NodeConverter<ngraph::op::v1::ReduceMax>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
-    LayerParams params = {layer->get_friendly_name(), "ReduceMax",
-                          details::convertPrecision(layer->get_output_element_type(0))};
-    auto res = std::make_shared<InferenceEngine::ReduceLayer>(params);
-    auto castedLayer = ngraph::as_type_ptr<ngraph::op::v1::ReduceMax>(layer);
-    if (castedLayer == nullptr) THROW_IE_EXCEPTION << "Cannot get " << params.type << " layer " << params.name;
-
-    res->params["keep_dims"] = castedLayer->get_keep_dims() ? "true" : "false";
-    return res;
-}
-
-template <>
-CNNLayer::Ptr NodeConverter<ngraph::op::v1::ReduceMean>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
-    LayerParams params = {layer->get_friendly_name(), "ReduceMean",
-                          details::convertPrecision(layer->get_output_element_type(0))};
-    auto res = std::make_shared<InferenceEngine::ReduceLayer>(params);
-    auto castedLayer = ngraph::as_type_ptr<ngraph::op::v1::ReduceMean>(layer);
-    if (castedLayer == nullptr) THROW_IE_EXCEPTION << "Cannot get " << params.type << " layer " << params.name;
-
-    res->params["keep_dims"] = castedLayer->get_keep_dims() ? "true" : "false";
-    return res;
-}
-
-template <>
-CNNLayer::Ptr NodeConverter<ngraph::op::v1::ReduceProd>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
-    LayerParams params = {layer->get_friendly_name(), "ReduceProd",
-                          details::convertPrecision(layer->get_output_element_type(0))};
-    auto res = std::make_shared<InferenceEngine::ReduceLayer>(params);
-    auto castedLayer = ngraph::as_type_ptr<ngraph::op::v1::ReduceProd>(layer);
-    if (castedLayer == nullptr) THROW_IE_EXCEPTION << "Cannot get " << params.type << " layer " << params.name;
-
-    res->params["keep_dims"] = castedLayer->get_keep_dims() ? "true" : "false";
-    return res;
-}
-
-template <>
-CNNLayer::Ptr NodeConverter<ngraph::op::v1::ReduceSum>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
-    LayerParams params = {layer->get_friendly_name(), "ReduceSum",
-                          details::convertPrecision(layer->get_output_element_type(0))};
-    auto res = std::make_shared<InferenceEngine::ReduceLayer>(params);
-    auto castedLayer = ngraph::as_type_ptr<ngraph::op::v1::ReduceSum>(layer);
-    if (castedLayer == nullptr) THROW_IE_EXCEPTION << "Cannot get " << params.type << " layer " << params.name;
-
-    res->params["keep_dims"] = castedLayer->get_keep_dims() ? "true" : "false";
-    return res;
-}
-
 template <>
 CNNLayer::Ptr NodeConverter<ngraph::op::NormalizeL2>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
     THROW_IE_EXCEPTION << "NormalizeL2 operation should be converted to NormalizeIE";
@@ -2099,30 +2039,6 @@ CNNLayer::Ptr NodeConverter<ngraph::op::v1::LogicalNot>::createLayer(const std::
     return res;
 }

-template <>
-CNNLayer::Ptr NodeConverter<ngraph::op::v1::ReduceLogicalAnd>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
-    LayerParams params = {layer->get_friendly_name(), "ReduceAnd", details::convertPrecision(layer->get_output_element_type(0))};
-    auto res = std::make_shared<InferenceEngine::ReduceLayer>(params);
-
-    auto castedLayer = std::dynamic_pointer_cast<ngraph::op::v1::ReduceLogicalAnd>(layer);
-    if (castedLayer == nullptr) THROW_IE_EXCEPTION << "Cannot get " << params.type << " layer " << params.name;
-
-    res->params["keep_dims"] = castedLayer->get_keep_dims() ? "True" : "False";
-    return res;
-}
-
-template <>
-CNNLayer::Ptr NodeConverter<ngraph::op::v1::ReduceLogicalOr>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
-    LayerParams params = {layer->get_friendly_name(), "ReduceOr", details::convertPrecision(layer->get_output_element_type(0))};
-    auto res = std::make_shared<InferenceEngine::ReduceLayer>(params);
-
-    auto castedLayer = std::dynamic_pointer_cast<ngraph::op::v1::ReduceLogicalOr>(layer);
-    if (castedLayer == nullptr) THROW_IE_EXCEPTION << "Cannot get " << params.type << " layer " << params.name;
-
-    res->params["keep_dims"] = castedLayer->get_keep_dims() ? "True" : "False";
-    return res;
-}
-
 template <>
 CNNLayer::Ptr NodeConverter<ngraph::op::v1::NonMaxSuppression>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
     THROW_IE_EXCEPTION << "NonMaxSuppression operation must be converted to NonMaxSuppressionIE operation.";
@@ -32,7 +32,7 @@
 #include <ngraph/opsets/opset1.hpp>
 #include <ngraph/opsets/opset2.hpp>
 #include <ngraph/opsets/opset3.hpp>
-#include <ngraph/op/gelu.hpp>
+#include <ngraph/opsets/opset4.hpp>
 #include <ngraph/op/util/op_types.hpp>
 #include <ngraph/pass/manager.hpp>
 #include "ngraph_ops/fully_connected.hpp"
@@ -80,7 +80,9 @@ static void Transformation(ICNNNetwork::Ptr& clonedNetwork) {

         return std::dynamic_pointer_cast<const ngraph::opset2::Gelu>(node) ||
                std::dynamic_pointer_cast<const ngraph::opset2::BatchToSpace>(node) ||
-               std::dynamic_pointer_cast<const ngraph::opset2::SpaceToBatch>(node);
+               std::dynamic_pointer_cast<const ngraph::opset2::SpaceToBatch>(node) ||
+               std::dynamic_pointer_cast<const ngraph::opset4::ReduceL1>(node) ||
+               std::dynamic_pointer_cast<const ngraph::opset4::ReduceL2>(node);
     };
     auto nGraphFunc = clonedNetwork->getFunction();
     // Disable shape inference (WA for generic operations)
@@ -330,11 +330,6 @@ std::shared_ptr<ngraph::Node> V10Parser::createNode(const std::vector<ngraph::Ou
         std::make_shared<LayerCreator<ngraph::op::Range>>("Range"),
         std::make_shared<LayerCreator<ngraph::op::PriorBox>>("PriorBox"),
         std::make_shared<LayerCreator<ngraph::op::PriorBoxClustered>>("PriorBoxClustered"),
-        std::make_shared<LayerCreator<ngraph::op::v1::ReduceMax>>("ReduceMax"),
-        std::make_shared<LayerCreator<ngraph::op::v1::ReduceMin>>("ReduceMin"),
-        std::make_shared<LayerCreator<ngraph::op::v1::ReduceMean>>("ReduceMean"),
-        std::make_shared<LayerCreator<ngraph::op::v1::ReduceProd>>("ReduceProd"),
-        std::make_shared<LayerCreator<ngraph::op::v1::ReduceSum>>("ReduceSum"),
         std::make_shared<LayerCreator<ngraph::op::ReorgYolo>>("ReorgYolo"),
         std::make_shared<LayerCreator<ngraph::op::RegionYolo>>("RegionYolo"),
         std::make_shared<LayerCreator<ngraph::op::Result>>("Result"),
@@ -362,8 +357,6 @@ std::shared_ptr<ngraph::Node> V10Parser::createNode(const std::vector<ngraph::Ou
         std::make_shared<LayerCreator<ngraph::op::v1::LogicalOr>>("LogicalOr"),
         std::make_shared<LayerCreator<ngraph::op::v1::LogicalXor>>("LogicalXor"),
         std::make_shared<LayerCreator<ngraph::op::v1::LogicalNot>>("LogicalNot"),
-        std::make_shared<LayerCreator<ngraph::op::v1::ReduceLogicalAnd>>("ReduceLogicalAnd"),
-        std::make_shared<LayerCreator<ngraph::op::v1::ReduceLogicalOr>>("ReduceLogicalOr"),
     };

     // Check that operation in default opsets
@@ -1496,76 +1489,6 @@ std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::ReorgYolo>::cr
     return std::make_shared<ngraph::op::ReorgYolo>(inputs[0], ngraph::Strides {stride});
 }

-// ReduceMin layer
-template <>
-std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::ReduceMin>::createLayer(
-    const ngraph::OutputVector& inputs, const pugi::xml_node& node, std::istream& binStream,
-    const GenericLayerParams& layerParsePrms) {
-    checkParameters(inputs, layerParsePrms, 2);
-    pugi::xml_node dn = node.child("data");
-
-    if (dn.empty())
-        THROW_IE_EXCEPTION << "Cannot read parameter for " << getType() << " layer with name: " << layerParsePrms.name;
-
-    return std::make_shared<ngraph::op::v1::ReduceMin>(inputs[0], inputs[1], GetBoolAttr(dn, "keep_dims", false));
-}
-
-// ReduceMax layer
-template <>
-std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::ReduceMax>::createLayer(
-    const ngraph::OutputVector& inputs, const pugi::xml_node& node, std::istream& binStream,
-    const GenericLayerParams& layerParsePrms) {
-    checkParameters(inputs, layerParsePrms, 2);
-    pugi::xml_node dn = node.child("data");
-
-    if (dn.empty())
-        THROW_IE_EXCEPTION << "Cannot read parameter for " << getType() << " layer with name: " << layerParsePrms.name;
-
-    return std::make_shared<ngraph::op::v1::ReduceMax>(inputs[0], inputs[1], GetBoolAttr(dn, "keep_dims", false));
-}
-
-// ReduceMean layer
-template <>
-std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::ReduceMean>::createLayer(
-    const ngraph::OutputVector& inputs, const pugi::xml_node& node, std::istream& binStream,
-    const GenericLayerParams& layerParsePrms) {
-    checkParameters(inputs, layerParsePrms, 2);
-    pugi::xml_node dn = node.child("data");
-
-    if (dn.empty())
-        THROW_IE_EXCEPTION << "Cannot read parameter for " << getType() << " layer with name: " << layerParsePrms.name;
-
-    return std::make_shared<ngraph::op::v1::ReduceMean>(inputs[0], inputs[1], GetBoolAttr(dn, "keep_dims", false));
-}
-
-// ReduceProd layer
-template <>
-std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::ReduceProd>::createLayer(
-    const ngraph::OutputVector& inputs, const pugi::xml_node& node, std::istream& binStream,
-    const GenericLayerParams& layerParsePrms) {
-    checkParameters(inputs, layerParsePrms, 2);
-    pugi::xml_node dn = node.child("data");
-
-    if (dn.empty())
-        THROW_IE_EXCEPTION << "Cannot read parameter for " << getType() << " layer with name: " << layerParsePrms.name;
-
-    return std::make_shared<ngraph::op::v1::ReduceProd>(inputs[0], inputs[1], GetBoolAttr(dn, "keep_dims", false));
-}
-
-// ReduceSum layer
-template <>
-std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::ReduceSum>::createLayer(
-    const ngraph::OutputVector& inputs, const pugi::xml_node& node, std::istream& binStream,
-    const GenericLayerParams& layerParsePrms) {
-    checkParameters(inputs, layerParsePrms, 2);
-    pugi::xml_node dn = node.child("data");
-
-    if (dn.empty())
-        THROW_IE_EXCEPTION << "Cannot read parameter for " << getType() << " layer with name: " << layerParsePrms.name;
-
-    return std::make_shared<ngraph::op::v1::ReduceSum>(inputs[0], inputs[1], GetBoolAttr(dn, "keep_dims", false));
-}
-
 // Transpose layer
 template <>
 std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::Transpose>::createLayer(
@@ -2177,34 +2100,6 @@ std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::LogicalNot
     return std::make_shared<ngraph::op::v1::LogicalNot>(inputs[0]);
 }

-// ReduceLogicalAnd layer
-template <>
-std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::ReduceLogicalAnd>::createLayer(
-    const ngraph::OutputVector & inputs, const pugi::xml_node& node, std::istream& binStream,
-    const GenericLayerParams& layerParsePrms) {
-    checkParameters(inputs, layerParsePrms, 2);
-    pugi::xml_node dn = node.child("data");
-
-    if (dn.empty())
-        THROW_IE_EXCEPTION << "Cannot read parameter for " << getType() << " layer with name: " << layerParsePrms.name;
-
-    return std::make_shared<ngraph::op::v1::ReduceLogicalAnd>(inputs[0], inputs[1], GetBoolAttr(dn, "keep_dims"));
-}
-
-// ReduceLogicalOr layer
-template <>
-std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::ReduceLogicalOr>::createLayer(
-    const ngraph::OutputVector & inputs, const pugi::xml_node& node, std::istream& binStream,
-    const GenericLayerParams& layerParsePrms) {
-    checkParameters(inputs, layerParsePrms, 2);
-    pugi::xml_node dn = node.child("data");
-
-    if (dn.empty())
-        THROW_IE_EXCEPTION << "Cannot read parameter for " << getType() << " layer with name: " << layerParsePrms.name;
-
-    return std::make_shared<ngraph::op::v1::ReduceLogicalOr>(inputs[0], inputs[1], GetBoolAttr(dn, "keep_dims"));
-}
-
 // NonMaxSuppression layer
 template <>
 std::shared_ptr<ngraph::Node> V10Parser::LayerCreator<ngraph::op::v1::NonMaxSuppression>::createLayer(
@@ -0,0 +1,31 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <vector>
#include <memory>

#include <transformations_visibility.hpp>

#include <ngraph/ngraph.hpp>
#include <ngraph/pass/graph_rewrite.hpp>
#include "ngraph/pattern/matcher.hpp"

namespace ngraph {
namespace pass {

class TRANSFORMATIONS_API ReduceL1Decomposition;

}  // namespace pass
}  // namespace ngraph

/**
 * @ingroup ie_transformation_common_api
 * @brief Decomposes ReduceL1 into ReduceSum(abs(x)).
 */
class ngraph::pass::ReduceL1Decomposition : public ngraph::pass::MatcherPass {
public:
    ReduceL1Decomposition();
};
@@ -0,0 +1,31 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <vector>
#include <memory>

#include <transformations_visibility.hpp>

#include <ngraph/ngraph.hpp>
#include <ngraph/pass/graph_rewrite.hpp>
#include "ngraph/pattern/matcher.hpp"

namespace ngraph {
namespace pass {

class TRANSFORMATIONS_API ReduceL2Decomposition;

}  // namespace pass
}  // namespace ngraph

/**
 * @ingroup ie_transformation_common_api
 * @brief Decomposes ReduceL2 into sqrt(ReduceSum(x * x)).
 */
class ngraph::pass::ReduceL2Decomposition : public ngraph::pass::MatcherPass {
public:
    ReduceL2Decomposition();
};
@@ -47,6 +47,8 @@
 #include <transformations/convert_opset1_to_legacy/convert_hard_sigmoid_to_hard_sigmoid_ie.hpp>
 #include <transformations/lin_op_sequence_fusoin.hpp>
 #include <transformations/common_optimizations/conv_mul_fusion.hpp>
+#include <transformations/reduce_l1_decomposition.hpp>
+#include <transformations/reduce_l2_decomposition.hpp>

 #include <ngraph/pass/constant_folding.hpp>
 #include <ngraph/pass/manager.hpp>
@@ -64,6 +66,11 @@ bool ngraph::pass::ConvertOpSet1ToLegacy::run_on_function(std::shared_ptr<ngraph

     manager.register_pass<ngraph::pass::ConstantFolding>();

+    // the following two transformations produce ReduceSum operations, so they
+    // must be executed before the ConvertReduceSumToPooling transformation
+    manager.register_pass<ngraph::pass::ReduceL1Decomposition>();
+    manager.register_pass<ngraph::pass::ReduceL2Decomposition>();
+
     // List of Decomposition and Conversion transformations that can be
     // applied simultaneously in a single graph traversal
     auto decomp = manager.register_pass<ngraph::pass::GraphRewrite>();
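For reference, a quick NumPy check of the two identities these passes rely on (an illustrative sketch added for this review, not part of the commit):

```python
import numpy as np

x = np.random.randn(2, 3, 4).astype(np.float32)

# ReduceL1Decomposition rewrites ReduceL1 as ReduceSum(Abs(x));
# cross-check against NumPy's per-slice L1 vector norm.
l1 = np.sum(np.abs(x), axis=(1, 2))
assert np.allclose(l1, np.linalg.norm(x.reshape(2, -1), ord=1, axis=1))

# ReduceL2Decomposition rewrites ReduceL2 as Sqrt(ReduceSum(Pow(x, 2)));
# cross-check against NumPy's per-slice L2 vector norm.
l2 = np.sqrt(np.sum(x ** 2, axis=(1, 2)))
assert np.allclose(l2, np.linalg.norm(x.reshape(2, -1), axis=1))
```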
@@ -0,0 +1,38 @@
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include "transformations/reduce_l1_decomposition.hpp"

#include <memory>

#include <ngraph/opsets/opset4.hpp>
#include <ngraph/rt_info.hpp>
#include <ngraph/pattern/op/wrap_type.hpp>

ngraph::pass::ReduceL1Decomposition::ReduceL1Decomposition() {
    // decomposes ReduceL1 operations into ReduceSum(abs(x))
    auto reduce_l1 = ngraph::pattern::wrap_type<opset4::ReduceL1>();

    ngraph::matcher_pass_callback callback = [=](ngraph::pattern::Matcher &m) {
        auto &pattern_to_output = m.get_pattern_value_map();
        auto reduce_l1_node = std::dynamic_pointer_cast<ngraph::opset4::ReduceL1>(pattern_to_output.at(reduce_l1).get_node_shared_ptr());

        if (m_transformation_callback(reduce_l1_node)) {
            return false;
        }

        auto abs = std::make_shared<ngraph::opset4::Abs>(reduce_l1_node->input_value(0));
        auto reduce_sum = std::make_shared<ngraph::opset4::ReduceSum>(abs, reduce_l1_node->input_value(1), reduce_l1_node->get_keep_dims());

        reduce_sum->set_friendly_name(m.get_match_root()->get_friendly_name());
        ngraph::copy_runtime_info(reduce_l1_node, {abs, reduce_sum});
        ngraph::replace_node(m.get_match_root(), reduce_sum);
        return true;
    };

    auto m = std::make_shared<ngraph::pattern::Matcher>(reduce_l1, "ReduceL1Decomposition");
    register_matcher(m, callback);
}
@@ -0,0 +1,39 @@
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include "transformations/reduce_l2_decomposition.hpp"

#include <memory>

#include <ngraph/opsets/opset4.hpp>
#include <ngraph/rt_info.hpp>
#include <ngraph/pattern/op/wrap_type.hpp>

ngraph::pass::ReduceL2Decomposition::ReduceL2Decomposition() {
    // decomposes ReduceL2 operations into sqrt(ReduceSum(x * x))
    auto reduce_l2 = ngraph::pattern::wrap_type<opset4::ReduceL2>();

    ngraph::matcher_pass_callback callback = [=](ngraph::pattern::Matcher &m) {
        auto &pattern_to_output = m.get_pattern_value_map();
        auto reduce_l2_node = std::dynamic_pointer_cast<ngraph::opset4::ReduceL2>(pattern_to_output.at(reduce_l2).get_node_shared_ptr());

        if (m_transformation_callback(reduce_l2_node)) {
            return false;
        }

        auto const_2 = ngraph::opset4::Constant::create(reduce_l2_node->input_value(0).get_element_type(), Shape{}, {2.0f});
        auto square = std::make_shared<ngraph::opset4::Power>(reduce_l2_node->input_value(0), const_2);
        auto reduce_sum = std::make_shared<ngraph::opset4::ReduceSum>(square, reduce_l2_node->input_value(1), reduce_l2_node->get_keep_dims());
        auto sqrt = std::make_shared<ngraph::opset4::Sqrt>(reduce_sum);

        // the Sqrt node replaces the matched root, so it carries the original friendly name
        sqrt->set_friendly_name(m.get_match_root()->get_friendly_name());
        ngraph::copy_runtime_info(reduce_l2_node, {sqrt, reduce_sum, square, const_2});
        ngraph::replace_node(m.get_match_root(), sqrt);
        return true;
    };

    auto m = std::make_shared<ngraph::pattern::Matcher>(reduce_l2, "ReduceL2Decomposition");
    register_matcher(m, callback);
}
@@ -0,0 +1,48 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <gtest/gtest.h>

#include <string>
#include <memory>

#include <ngraph/function.hpp>
#include <ngraph/opsets/opset4.hpp>
#include <ngraph/pass/manager.hpp>
#include <transformations/reduce_l1_decomposition.hpp>
#include <transformations/init_node_info.hpp>
#include <transformations/utils/utils.hpp>

#include "common_test_utils/ngraph_test_utils.hpp"

using namespace testing;

TEST(TransformationTests, ReduceL1DecompositionTest) {
    std::shared_ptr<ngraph::Function> f(nullptr), f_ref(nullptr);
    {
        auto data = std::make_shared<ngraph::opset4::Parameter>(ngraph::element::f32, ngraph::PartialShape::dynamic(1));
        auto axes = std::make_shared<ngraph::opset4::Parameter>(ngraph::element::i32, ngraph::Shape{1});
        auto reduce_l1 = std::make_shared<ngraph::opset4::ReduceL1>(data, axes, true);

        f = std::make_shared<ngraph::Function>(ngraph::NodeVector{reduce_l1}, ngraph::ParameterVector{data, axes});

        ngraph::pass::Manager manager;
        manager.register_pass<ngraph::pass::InitNodeInfo>();
        manager.register_pass<ngraph::pass::ReduceL1Decomposition>();
        manager.run_passes(f);
        ASSERT_NO_THROW(check_rt_info(f));
    }

    {
        auto data = std::make_shared<ngraph::opset4::Parameter>(ngraph::element::f32, ngraph::PartialShape::dynamic(1));
        auto axes = std::make_shared<ngraph::opset4::Parameter>(ngraph::element::i32, ngraph::Shape{1});
        auto abs = std::make_shared<ngraph::opset4::Abs>(data);
        auto reduce_sum = std::make_shared<ngraph::opset4::ReduceSum>(abs, axes, true);

        f_ref = std::make_shared<ngraph::Function>(ngraph::NodeVector{reduce_sum}, ngraph::ParameterVector{data, axes});
    }

    auto res = compare_functions(f, f_ref);
    ASSERT_TRUE(res.first) << res.second;
}
@@ -0,0 +1,49 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <gtest/gtest.h>

#include <string>
#include <memory>

#include <ngraph/function.hpp>
#include <ngraph/opsets/opset4.hpp>
#include <ngraph/pass/manager.hpp>
#include <transformations/reduce_l2_decomposition.hpp>
#include <transformations/init_node_info.hpp>
#include <transformations/utils/utils.hpp>

#include "common_test_utils/ngraph_test_utils.hpp"

using namespace testing;

TEST(TransformationTests, ReduceL2DecompositionTest) {
    std::shared_ptr<ngraph::Function> f(nullptr), f_ref(nullptr);
    {
        auto data = std::make_shared<ngraph::opset4::Parameter>(ngraph::element::f32, ngraph::PartialShape::dynamic(1));
        auto axes = std::make_shared<ngraph::opset4::Parameter>(ngraph::element::i32, ngraph::Shape{1});
        auto reduce_l2 = std::make_shared<ngraph::opset4::ReduceL2>(data, axes, true);

        f = std::make_shared<ngraph::Function>(ngraph::NodeVector{reduce_l2}, ngraph::ParameterVector{data, axes});

        ngraph::pass::Manager manager;
        manager.register_pass<ngraph::pass::InitNodeInfo>();
        manager.register_pass<ngraph::pass::ReduceL2Decomposition>();
        manager.run_passes(f);
        ASSERT_NO_THROW(check_rt_info(f));
    }

    {
        auto data = std::make_shared<ngraph::opset4::Parameter>(ngraph::element::f32, ngraph::PartialShape::dynamic(1));
        auto axes = std::make_shared<ngraph::opset4::Parameter>(ngraph::element::i32, ngraph::Shape{1});
        auto pow = std::make_shared<ngraph::opset4::Power>(data, ngraph::opset4::Constant::create(ngraph::element::f32, ngraph::Shape{}, {2.0}));
        auto reduce_sum = std::make_shared<ngraph::opset4::ReduceSum>(pow, axes, true);
        auto sqrt = std::make_shared<ngraph::opset4::Sqrt>(reduce_sum);

        f_ref = std::make_shared<ngraph::Function>(ngraph::NodeVector{sqrt}, ngraph::ParameterVector{data, axes});
    }

    auto res = compare_functions(f, f_ref);
    ASSERT_TRUE(res.first) << res.second;
}
@@ -38,7 +38,7 @@ extensions/back/PackBinaryWeights.py
 extensions/back/pass_separator.py
 extensions/back/priorbox_mutation.py
 extensions/back/ProposalMutation.py
-extensions/back/ReduceToPooling.py
+extensions/back/ReduceMerge.py
 extensions/back/ReduceTransposeDimensions.py
 extensions/back/remove_last_softmax_pattern.py
 extensions/back/RemoveUselessConvert.py
@@ -291,12 +291,7 @@ extensions/front/onnx/quantize_ext.py
 extensions/front/onnx/quantize_linear_ext.py
 extensions/front/onnx/quantize_linear_resolver.py
 extensions/front/onnx/range_ext.py
-extensions/front/onnx/reduce_l2_ext.py
-extensions/front/onnx/reduce_max_ext.py
-extensions/front/onnx/reduce_mean_ext.py
-extensions/front/onnx/reduce_min_ext.py
-extensions/front/onnx/reduce_prod_ext.py
-extensions/front/onnx/reduce_sum_ext.py
+extensions/front/onnx/reduce_ext.py
 extensions/front/onnx/remove_filtering_boxes_by_size.py
 extensions/front/onnx/resize_ext.py
 extensions/front/onnx/resize_to_interpolate.py
@@ -327,7 +322,6 @@ extensions/front/PowerToEltwises.py
 extensions/front/rank_decomposer.py
 extensions/front/reciprocal.py
 extensions/front/reduce_axis_normalizer.py
-extensions/front/ReduceL2Decomposition.py
 extensions/front/reshape_dim_normalizer.py
 extensions/front/restore_ports.py
 extensions/front/scatter_normalizer.py
@@ -18,7 +18,7 @@ from typing import Dict
 import numpy as np

 from extensions.back.FuseTransposesSequence import FuseTransposesSequence
-from extensions.back.ReduceToPooling import ReduceMerge
+from extensions.back.ReduceMerge import ReduceMerge
 from extensions.ops.ReduceOps import reduce_map
 from extensions.ops.gather import Gather
 from mo.back.replacement import BackReplacementPattern
@@ -1,48 +0,0 @@ (deleted file)
"""
 Copyright (C) 2020 Intel Corporation

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
"""
from extensions.front.reduce_axis_normalizer import ReduceAxisNormalizer
from extensions.ops.ReduceOps import ReduceSum
from extensions.ops.elementwise import Pow, Mul
from mo.front.common.partial_infer.utils import int64_array, float_array
from mo.front.common.replacement import FrontReplacementOp
from mo.front.tf.graph_utils import create_op_with_const_inputs
from mo.graph.graph import Graph, Node, rename_node


class ReduceL2Decomposition(FrontReplacementOp):
    op = 'ReduceL2'
    enabled = True

    def run_before(self):
        return [ReduceAxisNormalizer]

    def replace_op(self, graph: Graph, node: Node):
        node_name = node.soft_get('name', node.id)

        rename_node(node, node_name + '/TBR')
        sqr_node = Mul(graph, {}).create_node()
        reduce_sum_node = ReduceSum(graph, {'keep_dims': node.soft_get('keep_dims', 0),
                                            'axis': node.soft_get('axis', None)}).create_node()
        sqrt_node = create_op_with_const_inputs(graph, Pow, {1: float_array(0.5)})
        rename_node(sqrt_node, node_name)

        # Connect nodes
        node.in_port(0).get_connection().set_destination(sqr_node.in_port(0))
        sqr_node.in_port(0).get_connection().add_destination(sqr_node.in_port(1))
        sqr_node.out_port(0).connect(reduce_sum_node.in_port(0))
        reduce_sum_node.out_port(0).connect(sqrt_node.in_port(0))

        return [sqrt_node.id]
@@ -1,63 +0,0 @@ (deleted file)
"""
 Copyright (C) 2020 Intel Corporation

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
"""

import unittest

import numpy as np

from extensions.front.ReduceL2Decomposition import ReduceL2Decomposition
from mo.utils.ir_engine.compare_graphs import compare_graphs
from mo.utils.unittest.graph import build_graph, const

nodes_attributes = {
    'input': {'shape': None, 'type': 'Parameter', 'kind': 'op', 'op': 'Parameter'},
    'reduce_l2': {'type': None, 'kind': 'op', 'op': 'ReduceL2', 'axis': 0, 'name': 'my_reduce', 'keep_dims': 0},
    'result': {'type': 'Result', 'value': None, 'kind': 'op', 'op': 'Result'},

    # new layers
    'mul': {'type': 'Multiply', 'kind': 'op', 'op': 'Mul'},
    'reduce_sum': {'type': 'ReduceSum', 'kind': 'op', 'op': 'ReduceSum', 'axis': 0, 'keep_dims': 0},
    'pow': {'type': 'Power', 'kind': 'op', 'op': 'Pow'},
    **const('half', np.array(0.5, dtype=np.float32)),
}


class ReduceL2DecompositionTest(unittest.TestCase):
    def test(self):
        graph = build_graph(nodes_attributes,
                            [('input', 'reduce_l2', {'in': 0, 'out': 0}),
                             ('reduce_l2', 'result', {'in': 0, 'out': 0}),
                             ],
                            {}, nodes_with_edges_only=True)

        graph_ref = build_graph(nodes_attributes,
                                [('input', 'mul', {'in': 0, 'out': 0}),
                                 ('input', 'mul', {'in': 1, 'out': 0}),
                                 ('mul', 'reduce_sum', {'in': 0, 'out': 0}),
                                 ('reduce_sum', 'pow', {'in': 0, 'out': 0}),
                                 ('half', 'pow', {'in': 1, 'out': 0}),
                                 ('pow', 'result', {'in': 0, 'out': 0}),
                                 ],
                                {}, nodes_with_edges_only=True)

        graph.graph['layout'] = 'NCHW'
        graph.stage = 'front'

        ReduceL2Decomposition().find_and_replace_pattern(graph)

        (flag, resp) = compare_graphs(graph, graph_ref, 'result', check_op_attrs=True)
        self.assertTrue(flag, resp)
        self.assertTrue(graph.node[graph.get_nodes_with_attributes(op='Pow')[0]]['name'] == 'my_reduce')
model-optimizer/extensions/front/onnx/reduce_ext.py (new file, 97 lines)
@@ -0,0 +1,97 @@
"""
 Copyright (C) 2020 Intel Corporation

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
"""

from extensions.ops.ReduceOps import ReduceL1, ReduceL2, ReduceMax, ReduceMean, ReduceMin, ReduceProd, ReduceSum
from mo.front.common.partial_infer.utils import int64_array
from mo.front.extractor import FrontExtractorOp
from mo.front.onnx.extractors.utils import onnx_attr
from mo.graph.graph import Node


def update_reduce_node_attrs_with(node: Node, c: callable):
    axis = onnx_attr(node, 'axes', 'ints', default=None, dst_type=lambda x: int64_array(x))
    keep_dims = onnx_attr(node, 'keepdims', 'i', default=True)
    c.update_node_stat(node, {'axis': axis, 'keep_dims': keep_dims})


class ReduceL1Extractor(FrontExtractorOp):
    op = 'ReduceL1'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        update_reduce_node_attrs_with(node, ReduceL1)
        return cls.enabled


class ReduceL2Extractor(FrontExtractorOp):
    op = 'ReduceL2'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        update_reduce_node_attrs_with(node, ReduceL2)
        return cls.enabled


class ReduceMaxFrontExtractor(FrontExtractorOp):
    op = 'ReduceMax'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        update_reduce_node_attrs_with(node, ReduceMax)
        return cls.enabled


class ReduceMeanFrontExtractor(FrontExtractorOp):
    op = 'ReduceMean'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        update_reduce_node_attrs_with(node, ReduceMean)
        return cls.enabled


class ReduceMinFrontExtractor(FrontExtractorOp):
    op = 'ReduceMin'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        update_reduce_node_attrs_with(node, ReduceMin)
        return cls.enabled


class ReduceProdFrontExtractor(FrontExtractorOp):
    op = 'ReduceProd'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        update_reduce_node_attrs_with(node, ReduceProd)
        return cls.enabled


class ReduceSumFrontExtractor(FrontExtractorOp):
    op = 'ReduceSum'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        update_reduce_node_attrs_with(node, ReduceSum)
        return cls.enabled
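
For context, the `axes` and `keepdims` attributes read by `update_reduce_node_attrs_with` come from the ONNX Reduce* node definition; a node carrying them can be built with the official `onnx` helper like this (a hypothetical illustration, not part of the commit):

```python
import onnx.helper

# A ReduceL1 node with the two attributes the extractor reads:
# ONNX 'axes' (ints) maps to MO's 'axis', ONNX 'keepdims' (int) maps to 'keep_dims'.
node = onnx.helper.make_node(
    'ReduceL1',
    inputs=['data'],
    outputs=['reduced'],
    axes=[0, 2],
    keepdims=1,
)
```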
@@ -1,33 +0,0 @@ (deleted file)
"""
 Copyright (C) 2020 Intel Corporation

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
"""
from extensions.front.reduce_axis_normalizer import ReduceAxisNormalizer
from extensions.ops.ReduceOps import ReduceL2
from mo.front.common.partial_infer.utils import int64_array
from mo.front.extractor import FrontExtractorOp
from mo.front.onnx.extractors.utils import onnx_attr
from mo.graph.graph import Node


class ReduceL2FrontExtractor(FrontExtractorOp):
    op = 'ReduceL2'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        axis = onnx_attr(node, 'axes', 'ints', default=None, dst_type=lambda x: int64_array(x))
        keep_dims = onnx_attr(node, 'keepdims', 'i', default=True)
        ReduceL2.update_node_stat(node, {'axis': axis, 'keep_dims': keep_dims})
        return cls.enabled
@@ -1,33 +0,0 @@ (deleted file)
"""
 Copyright (C) 2018-2020 Intel Corporation

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
"""

from extensions.ops.ReduceOps import ReduceMax
from mo.front.common.partial_infer.utils import int64_array
from mo.front.extractor import FrontExtractorOp
from mo.front.onnx.extractors.utils import onnx_attr
from mo.graph.graph import Node


class ReduceMaxFrontExtractor(FrontExtractorOp):
    op = 'ReduceMax'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        axis = onnx_attr(node, 'axes', 'ints', default=None, dst_type=lambda x: int64_array(x))
        keep_dims = onnx_attr(node, 'keepdims', 'i', default=True)
        ReduceMax.update_node_stat(node, {'axis': axis, 'keep_dims': keep_dims})
        return cls.enabled
@@ -1,33 +0,0 @@ (deleted file)
"""
 Copyright (C) 2018-2020 Intel Corporation

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
"""

from extensions.ops.ReduceOps import ReduceMean
from mo.front.common.partial_infer.utils import int64_array
from mo.front.extractor import FrontExtractorOp
from mo.front.onnx.extractors.utils import onnx_attr
from mo.graph.graph import Node


class ReduceMeanFrontExtractor(FrontExtractorOp):
    op = 'ReduceMean'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        axis = onnx_attr(node, 'axes', 'ints', default=None, dst_type=lambda x: int64_array(x))
        keep_dims = onnx_attr(node, 'keepdims', 'i', default=True)
        ReduceMean.update_node_stat(node, {'axis': axis, 'keep_dims': keep_dims})
        return cls.enabled
@@ -1,33 +0,0 @@ (deleted file)
"""
 Copyright (C) 2018-2020 Intel Corporation

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
"""

from extensions.ops.ReduceOps import ReduceMin
from mo.front.common.partial_infer.utils import int64_array
from mo.front.extractor import FrontExtractorOp
from mo.front.onnx.extractors.utils import onnx_attr
from mo.graph.graph import Node


class ReduceMinFrontExtractor(FrontExtractorOp):
    op = 'ReduceMin'
    enabled = True

    @classmethod
    def extract(cls, node: Node):
        axis = onnx_attr(node, 'axes', 'ints', default=None, dst_type=lambda x: int64_array(x))
        keep_dims = onnx_attr(node, 'keepdims', 'i', default=True)
        ReduceMin.update_node_stat(node, {'axis': axis, 'keep_dims': keep_dims})
        return cls.enabled
@@ -1,33 +0,0 @@
-"""
- Copyright (C) 2018-2020 Intel Corporation
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-"""
-
-from extensions.ops.ReduceOps import ReduceProd
-from mo.front.common.partial_infer.utils import int64_array
-from mo.front.extractor import FrontExtractorOp
-from mo.front.onnx.extractors.utils import onnx_attr
-from mo.graph.graph import Node
-
-
-class ReduceProdFrontExtractor(FrontExtractorOp):
-    op = 'ReduceProd'
-    enabled = True
-
-    @classmethod
-    def extract(cls, node: Node):
-        axis = onnx_attr(node, 'axes', 'ints', default=None, dst_type=lambda x: int64_array(x))
-        keep_dims = onnx_attr(node, 'keepdims', 'i', default=True)
-        ReduceProd.update_node_stat(node, {'axis': axis, 'keep_dims': keep_dims})
-        return cls.enabled
@@ -1,33 +0,0 @@
-"""
- Copyright (C) 2018-2020 Intel Corporation
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-"""
-
-from extensions.ops.ReduceOps import ReduceSum
-from mo.front.common.partial_infer.utils import int64_array
-from mo.front.extractor import FrontExtractorOp
-from mo.front.onnx.extractors.utils import onnx_attr
-from mo.graph.graph import Node
-
-
-class ReduceSumFrontExtractor(FrontExtractorOp):
-    op = 'ReduceSum'
-    enabled = True
-
-    @classmethod
-    def extract(cls, node: Node):
-        axis = onnx_attr(node, 'axes', 'ints', default=None, dst_type=lambda x: int64_array(x))
-        keep_dims = onnx_attr(node, 'keepdims', 'i', default=True)
-        ReduceSum.update_node_stat(node, {'axis': axis, 'keep_dims': keep_dims})
-        return cls.enabled
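Note: the five files deleted above are line-for-line identical apart from the op class they bind. The commit message says they were replaced by a "specific creator for ReduceXXX operations"; that replacement is not part of this excerpt, but a consolidated extractor along the following lines would cover all five (a sketch only — the factory helper and loop are assumptions, not taken from the diff):

from extensions.ops.ReduceOps import ReduceMax, ReduceMean, ReduceMin, ReduceProd, ReduceSum
from mo.front.common.partial_infer.utils import int64_array
from mo.front.extractor import FrontExtractorOp
from mo.front.onnx.extractors.utils import onnx_attr
from mo.graph.graph import Node


def make_onnx_reduce_extractor(reduce_op):
    # hypothetical helper: builds one FrontExtractorOp subclass per reduce op,
    # reading the same ONNX 'axes'/'keepdims' attributes as the deleted files did
    class ReduceFrontExtractor(FrontExtractorOp):
        op = reduce_op.op
        enabled = True

        @classmethod
        def extract(cls, node: Node):
            axis = onnx_attr(node, 'axes', 'ints', default=None, dst_type=lambda x: int64_array(x))
            keep_dims = onnx_attr(node, 'keepdims', 'i', default=True)
            reduce_op.update_node_stat(node, {'axis': axis, 'keep_dims': keep_dims})
            return cls.enabled

    return ReduceFrontExtractor


# declaring the classes is enough for MO's extractor registry to pick them up
for op_class in (ReduceMax, ReduceMean, ReduceMin, ReduceProd, ReduceSum):
    make_onnx_reduce_extractor(op_class)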
@@ -46,16 +46,17 @@ class ReduceAxisNormalizer(FrontReplacementSubgraph):
         node = match['reduce']
         connected_in_ports = [port for port in node.in_ports().values() if not port.disconnected()]
         if len(connected_in_ports) == 1:
+            node_name = node.soft_get('name', node.id)
+
             # if the 'axis' is None then we still add a second input to the layer with a 1D array with 1 element equal
             # to None. The infer function handles this case because the input shape is known at this stage only
             if node.has('axis'):
-                const = Const(graph, {'value': node.axis}).create_node()
+                const = Const(graph, {'name': node_name + '/axis', 'value': node.axis}).create_node()
                 node.add_input_port(1, skip_if_exist=True)
                 const.out_port(0).connect(node.in_port(1))
                 del graph.node[node.id]['axis']
             else:
                 # The default (if there is no 'axis') is to reduce over all the dimensions of the input tensor.
-                node_name = node.name
-
                 begin_of_range = Const(graph, dict(name=node_name + '/range_begin_', value=0)).create_node()
                 step = Const(graph, dict(name=node_name + '/range_step_', value=1)).create_node()
@@ -13,7 +13,7 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 """
-from extensions.ops.ReduceOps import ReduceProd, ReduceAnd, ReduceMax, ReduceMean, ReduceSum
+from extensions.ops.ReduceOps import ReduceProd, ReduceAnd, ReduceMax, ReduceMean, ReduceSum, ReduceL2
 from mo.front.extractor import FrontExtractorOp
 from mo.graph.graph import Node

@@ -67,3 +67,13 @@ class SumFrontExtractor(FrontExtractorOp):
     def extract(cls, node: Node):
         ReduceSum.update_node_stat(node, {'keep_dims': node.pb.attr["keep_dims"].b})
         return cls.enabled
+
+
+class EuclideanNormFrontExtractor(FrontExtractorOp):
+    op = 'EuclideanNorm'
+    enabled = True
+
+    @classmethod
+    def extract(cls, node: Node):
+        ReduceL2.update_node_stat(node, {'keep_dims': node.pb.attr["keep_dims"].b})
+        return cls.enabled
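For context on the extractor just added: TensorFlow's EuclideanNorm is exactly an L2 reduction, which is why it can be mapped onto ReduceL2 unchanged. A quick standalone check (assumes a TensorFlow install; not part of this commit):

import tensorflow as tf

x = tf.constant([[3.0, 4.0]])
print(tf.math.reduce_euclidean_norm(x, axis=1))  # tf.Tensor([5.], ...)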
@@ -24,6 +24,7 @@ from mo.ops.op import Op
 reduce_map = {
     'ReduceSum': np.sum,
     'ReduceProd': np.prod,
+    'ReduceL1': lambda x, axis, keepdims: np.sum(a=np.absolute(x), axis=axis, keepdims=keepdims),
     'ReduceL2': lambda x, axis, keepdims: np.sqrt(np.sum(a=np.square(x), axis=axis, keepdims=keepdims)),
     'ReduceMax': np.max,
     'ReduceMin': np.min,
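The lambdas registered in reduce_map above are what MO uses for value propagation; their semantics are easy to verify standalone in plain NumPy:

import numpy as np

x = np.array([[3.0, -4.0], [1.0, 2.0]], dtype=np.float32)
l1 = np.sum(np.absolute(x), axis=1, keepdims=True)         # [[7.0], [3.0]]
l2 = np.sqrt(np.sum(np.square(x), axis=1, keepdims=True))  # [[5.0], [2.236...]]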
@@ -86,12 +87,13 @@ class ReduceOp(Op):
     enabled = False
     op = None
     op_type = None
+    version = 'opset1'

     def __init__(self, graph: Graph, attrs: dict):
         super().__init__(graph, {
             'op': self.op,
             'type': self.op_type,
-            'version': 'opset1',
+            'version': self.version,
             'infer': reduce_infer,
             'keep_dims': 0,
             'in_ports_count': 2,
@@ -138,10 +140,15 @@ class ReduceMean(ReduceOp):
     enabled = True


+class ReduceL1(ReduceOp):
+    op = 'ReduceL1'
+    op_type = 'ReduceL1'
+    version = 'opset4'
+
+
 class ReduceL2(ReduceOp):
     op = 'ReduceL2'
-    op_type = None
-    enabled = True
+    op_type = 'ReduceL2'
+    version = 'opset4'


 class ReduceAnd(ReduceOp):
@@ -28,38 +28,42 @@ nodes_attributes = {
     **regular_op_with_shaped_data('data', [1, 3, 224, 224], {'type': 'Parameter', 'value': None,
                                                              '_out_port_data_type': {0: np.float32}}),
     **valued_const_with_data('axis', int64_array(0)),
-    **regular_op_with_shaped_data('reduce_l2', None, {'op': 'ReduceL2', 'type': None, 'name': 'my_reduce_l2'}),
+    **regular_op_with_shaped_data('reduce_lp', None, {'op': 'ReduceLp', 'type': None, 'name': 'my_reduce_lp'}),
     **regular_op_with_shaped_data('identity', None, {'op': 'Identity', 'name': 'identity'}),
     **result('output'),
 }


 @generator
-class TestCumSum(unittest.TestCase):
+class ReduceLpTest(unittest.TestCase):
     @generate(*[
-        ([3, 2, 2], [0], True),
-        ([3, 2, 2], [1], True),
-        ([3, 2, 2], [2], True),
-        ([3, 2, 2], [0], False),
-        ([3, 2, 2], [1], False),
-        ([3, 2, 2], [2], False),
+        ([3, 2, 2], [0], True, 1),
+        ([3, 2, 2], [0], True, 2),
+        ([3, 2, 2], [1], True, 2),
+        ([3, 2, 2], [2], True, 2),
+        ([3, 2, 2], [0], False, 1),
+        ([3, 2, 2], [0], False, 2),
+        ([3, 2, 2], [1], False, 2),
+        ([3, 2, 2], [2], False, 2),
     ])
-    def test_reduce_l2(self, shape, axes, keepdims):
+    def test_reduce_lp(self, shape, axes, keepdims, p):
         data = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
-        reduced = np.sqrt(np.sum(a=np.square(data), axis=tuple(axes), keepdims=keepdims))
+        reduced = np.power(np.sum(a=np.abs(np.power(data, p)), axis=tuple(axes), keepdims=keepdims), 1 / p)
         axis = int64_array(axes)
+        p = int64_array(p)
         graph = build_graph(nodes_attributes,
-                            [*connect('data', '0:reduce_l2'),
-                             *connect('axis', '1:reduce_l2'),
-                             *connect('reduce_l2', '0:identity'),
+                            [*connect('data', '0:reduce_lp'),
+                             *connect('axis', '1:reduce_lp'),
+                             *connect('reduce_lp', '0:identity'),
                              ('identity', 'identity_d', {'out': 0}),
                              ('identity_d', 'output')
                              ],
                            {'data_d': {'value': data, 'shape': data.shape},
                             'axis_d': {'value': axis, 'shape': axis.shape},
-                            'reduce_l2': {'keep_dims': keepdims}},
+                            'reduce_lp': {'keep_dims': keepdims}},
                            nodes_with_edges_only=True)

-        reduce_node = Node(graph, 'reduce_l2')
+        reduce_node = Node(graph, 'reduce_lp')
+        reduce_node.op = reduce_node.type = 'ReduceL' + str(p)
         reduce_infer(reduce_node)
         self.assertTrue(np.array_equal(reduce_node.out_port(0).data.get_value(), reduced))
ngraph/core/include/ngraph/op/reduce_l1.hpp (new file, 60 lines)
@@ -0,0 +1,60 @@
+//*****************************************************************************
+// Copyright 2017-2020 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//*****************************************************************************
+
+#pragma once
+
+#include "ngraph/op/util/arithmetic_reductions_keep_dims.hpp"
+
+namespace ngraph
+{
+    namespace op
+    {
+        namespace v4
+        {
+            /// \brief Reduction operation using L1 norm: L1(x) = sum(abs(x)) if all
+            /// dimensions are specified for the normalization.
+            ///
+            /// Reduces the tensor, eliminating the specified reduction axes by taking
+            /// the L1-norm.
+            class NGRAPH_API ReduceL1 : public util::ArithmeticReductionKeepDims
+            {
+            public:
+                static constexpr NodeTypeInfo type_info{"ReduceL1", 4};
+                const NodeTypeInfo& get_type_info() const override { return type_info; }
+                /// \brief Constructs a reduce L1-norm operation.
+                ReduceL1() = default;
+                /// \brief Constructs a reduce L1-norm operation.
+                ///
+                /// \param arg The tensor to be reduced.
+                /// \param reduction_axes The axis positions (0-based) to be eliminated.
+                /// \param keep_dims If set to true, the reduced axes are retained with size 1.
+                ReduceL1(const Output<Node>& arg,
+                         const Output<Node>& reduction_axes,
+                         bool keep_dims = false);
+
+                size_t get_version() const override { return 4; }
+                /// \return The default value for Reduce.
+                virtual std::shared_ptr<Node> get_default_value() const override;
+
+                virtual std::shared_ptr<Node>
+                    clone_with_new_inputs(const OutputVector& new_args) const override;
+
+                bool evaluate(const HostTensorVector& outputs,
+                              const HostTensorVector& inputs) const override;
+            };
+        }
+    }
+}
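The commit message mentions an nGraph transformation that decomposes ReduceL1 and ReduceL2 for the CPU and GPU plugins; that pass is not shown in this excerpt, but it rests on the identities below (illustrated in NumPy — the actual transformation builds nGraph Abs/Multiply/ReduceSum/Sqrt nodes):

import numpy as np

# ReduceL1(x, axes) == ReduceSum(Abs(x), axes)
# ReduceL2(x, axes) == Sqrt(ReduceSum(x * x, axes))
x = np.random.rand(2, 3, 4).astype(np.float32)
assert np.allclose(np.sum(np.abs(x), axis=2), np.linalg.norm(x, ord=1, axis=2))
assert np.allclose(np.sqrt(np.sum(x * x, axis=2)), np.linalg.norm(x, ord=2, axis=2))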
ngraph/core/include/ngraph/op/reduce_l2.hpp (new file, 58 lines)
@@ -0,0 +1,58 @@
+//*****************************************************************************
+// Copyright 2017-2020 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//*****************************************************************************
+
+#pragma once
+
+#include "ngraph/op/util/arithmetic_reductions_keep_dims.hpp"
+
+namespace ngraph
+{
+    namespace op
+    {
+        namespace v4
+        {
+            /// \brief Reduction operation using L2 norm: L2(x) = sqrt(sum(x^2)).
+            ///
+            /// Reduces the tensor, eliminating the specified reduction axes by taking
+            /// the L2-norm.
+            class NGRAPH_API ReduceL2 : public util::ArithmeticReductionKeepDims
+            {
+            public:
+                static constexpr NodeTypeInfo type_info{"ReduceL2", 4};
+                const NodeTypeInfo& get_type_info() const override { return type_info; }
+                /// \brief Constructs a reduce L2-norm operation.
+                ReduceL2() = default;
+                /// \brief Constructs a reduce L2-norm operation.
+                ///
+                /// \param arg The tensor to be reduced.
+                /// \param reduction_axes The axis positions (0-based) to be eliminated.
+                /// \param keep_dims If set to true, the reduced axes are retained with size 1.
+                ReduceL2(const Output<Node>& arg,
+                         const Output<Node>& reduction_axes,
+                         bool keep_dims = false);
+
+                size_t get_version() const override { return 4; }
+                /// \return The default value for Reduce.
+                virtual std::shared_ptr<Node> get_default_value() const override;
+
+                virtual std::shared_ptr<Node>
+                    clone_with_new_inputs(const OutputVector& new_args) const override;
+
+                bool evaluate(const HostTensorVector& outputs,
+                              const HostTensorVector& inputs) const override;
+            };
+        }
+    }
+}
@@ -115,6 +115,8 @@
 #include "ngraph/op/quantized_dot.hpp"
 #include "ngraph/op/range.hpp"
 #include "ngraph/op/read_value.hpp"
+#include "ngraph/op/reduce_l1.hpp"
+#include "ngraph/op/reduce_l2.hpp"
 #include "ngraph/op/reduce_logical_and.hpp"
 #include "ngraph/op/reduce_logical_or.hpp"
 #include "ngraph/op/reduce_mean.hpp"
@@ -158,4 +158,6 @@ NGRAPH_OP(Atanh, ngraph::op::v3)
 NGRAPH_OP(CTCLoss, ngraph::op::v4)
 NGRAPH_OP(NonMaxSuppression, ngraph::op::v4)
 NGRAPH_OP(Mish, ngraph::op::v4)
+NGRAPH_OP(ReduceL1, ngraph::op::v4)
+NGRAPH_OP(ReduceL2, ngraph::op::v4)
 NGRAPH_OP(Swish, ngraph::op::v4)
@@ -30,10 +30,10 @@ namespace ngraph
             static inline void any(const char* arg,
                                    char* out,
                                    const Shape& in_shape,
-                                   const Shape& out_shape,
-                                   const AxisSet& reduction_axes)
+                                   const AxisSet& reduction_axes,
+                                   bool keep_dims)
             {
-                CoordinateTransform output_transform(out_shape);
+                CoordinateTransform output_transform(reduce(in_shape, reduction_axes, keep_dims));

                 for (const Coordinate& output_coord : output_transform)
                 {
@@ -44,7 +44,7 @@ namespace ngraph

                 for (const Coordinate& input_coord : input_transform)
                 {
-                    Coordinate output_coord = reduce(input_coord, reduction_axes);
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, keep_dims);
                     out[output_transform.index(output_coord)] =
                         out[output_transform.index(output_coord)] ||
                         arg[input_transform.index(input_coord)];
@@ -333,7 +333,7 @@ namespace ngraph

                 for (const Coordinate& output_coord : output_transform)
                 {
-                    Coordinate arg1_coord = reduce(output_coord, arg1_squeezed_axes);
+                    Coordinate arg1_coord = reduce(output_coord, arg1_squeezed_axes, false);
                     out[output_transform.index(output_coord)] =
                         elementwise_functor(arg0[arg0_transform.index(output_coord)],
                                             arg1[arg1_transform.index(arg1_coord)]);
@@ -452,9 +452,9 @@ namespace ngraph

                 for (const Coordinate& output_coord : output_transform)
                 {
-                    Coordinate arg0_coord = reduce(output_coord, arg0_squeezed_axes);
-                    Coordinate arg1_coord = reduce(output_coord, arg1_squeezed_axes);
-                    Coordinate arg2_coord = reduce(output_coord, arg2_squeezed_axes);
+                    Coordinate arg0_coord = reduce(output_coord, arg0_squeezed_axes, false);
+                    Coordinate arg1_coord = reduce(output_coord, arg1_squeezed_axes, false);
+                    Coordinate arg2_coord = reduce(output_coord, arg2_squeezed_axes, false);
                     out[output_transform.index(output_coord)] =
                         elementwise_functor(arg0[arg0_transform.index(arg0_coord)],
                                             arg1[arg1_transform.index(arg1_coord)],
@@ -536,8 +536,8 @@ namespace ngraph

                 for (const Coordinate& output_coord : output_transform)
                 {
-                    Coordinate arg0_coord = reduce(output_coord, arg0_squeezed_axes);
-                    Coordinate arg2_coord = reduce(output_coord, arg2_squeezed_axes);
+                    Coordinate arg0_coord = reduce(output_coord, arg0_squeezed_axes, false);
+                    Coordinate arg2_coord = reduce(output_coord, arg2_squeezed_axes, false);
                     out[output_transform.index(output_coord)] =
                         elementwise_functor(arg0[arg0_transform.index(arg0_coord)],
                                             arg1[arg1_transform.index(output_coord)],
@@ -58,7 +58,7 @@ namespace ngraph

                 for (const Coordinate& output_coord : output_transform)
                 {
-                    Coordinate input_coord = reduce(output_coord, adjusted_axes);
+                    Coordinate input_coord = reduce(output_coord, adjusted_axes, false);
                     out[output_transform.index(output_coord)] =
                         arg[input_transform.index(input_coord)];
                 }
@@ -26,33 +26,16 @@ namespace ngraph
 {
     namespace runtime
     {
-        namespace
-        {
-            Shape get_shape_no_keep_dims(const AxisSet& reduction_axes, const Shape& input_shape)
-            {
-                Shape shape_no_keep_dims;
-
-                for (size_t i = 0; i < input_shape.size(); i++)
-                {
-                    if (reduction_axes.count(i) == 0)
-                    {
-                        shape_no_keep_dims.push_back(input_shape[i]);
-                    }
-                }
-
-                return shape_no_keep_dims;
-            }
-        }
-
         namespace reference
         {
             static inline void reduce_logical_and(const char* arg,
                                                   char* out,
                                                   const Shape& input_shape,
-                                                  const AxisSet& reduction_axes)
+                                                  const AxisSet& reduction_axes,
+                                                  bool keep_dims)
             {
                 CoordinateTransform output_transform(
-                    get_shape_no_keep_dims(reduction_axes, input_shape));
+                    reduce(input_shape, reduction_axes, keep_dims));

                 for (const Coordinate& output_coord : output_transform)
                 {
@@ -63,7 +46,7 @@ namespace ngraph

                 for (const Coordinate& input_coord : input_transform)
                 {
-                    Coordinate output_coord = reduce(input_coord, reduction_axes);
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, keep_dims);
                     out[output_transform.index(output_coord)] =
                         out[output_transform.index(output_coord)] &&
                         arg[input_transform.index(input_coord)];
@@ -73,13 +56,10 @@ namespace ngraph
             static inline void reduce_logical_or(const char* arg,
                                                  char* out,
                                                  const Shape& input_shape,
-                                                 const AxisSet& reduction_axes)
+                                                 const AxisSet& reduction_axes,
+                                                 bool keep_dims)
             {
-                runtime::reference::any(arg,
-                                        out,
-                                        input_shape,
-                                        get_shape_no_keep_dims(reduction_axes, input_shape),
-                                        reduction_axes);
+                runtime::reference::any(arg, out, input_shape, reduction_axes, keep_dims);
             }
         }
     }
@@ -29,13 +29,17 @@ namespace ngraph
         namespace reference
         {
             template <typename T>
-            void max(const T* arg, T* out, const Shape& in_shape, const AxisSet& reduction_axes)
+            void max(const T* arg,
+                     T* out,
+                     const Shape& in_shape,
+                     const AxisSet& reduction_axes,
+                     bool keep_dims)
             {
                 T minval = std::numeric_limits<T>::has_infinity
                                ? T(-std::numeric_limits<T>::infinity())
                                : std::numeric_limits<T>::min();

-                auto out_shape = reduce(in_shape, reduction_axes);
+                auto out_shape = reduce(in_shape, reduction_axes, keep_dims);
                 CoordinateTransform output_transform(out_shape);

                 for (const Coordinate& output_coord : output_transform)
@@ -47,7 +51,7 @@ namespace ngraph

                 for (const Coordinate& input_coord : input_transform)
                 {
-                    Coordinate output_coord = reduce(input_coord, reduction_axes);
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, keep_dims);

                     T x = arg[input_transform.index(input_coord)];
                     T max = out[output_transform.index(output_coord)];
@@ -33,9 +33,13 @@ namespace ngraph
         namespace reference
         {
             template <typename T>
-            void mean(const T* arg, T* out, const Shape& in_shape, const AxisSet& reduction_axes)
+            void mean(const T* arg,
+                      T* out,
+                      const Shape& in_shape,
+                      const AxisSet& reduction_axes,
+                      bool keep_dims)
             {
-                auto out_shape = reduce(in_shape, reduction_axes);
+                auto out_shape = reduce(in_shape, reduction_axes, keep_dims);
                 CoordinateTransform output_transform(out_shape);
                 std::vector<T> cs(shape_size(out_shape));

@@ -50,7 +54,7 @@ namespace ngraph

                 for (const Coordinate& input_coord : input_transform)
                 {
-                    Coordinate output_coord = reduce(input_coord, reduction_axes);
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, keep_dims);

                     T x = arg[input_transform.index(input_coord)];
                     T& z = out[output_transform.index(output_coord)];
@@ -38,7 +38,7 @@ namespace ngraph
                 T minval = std::numeric_limits<T>::has_infinity ? std::numeric_limits<T>::infinity()
                                                                 : std::numeric_limits<T>::max();

-                auto out_shape = reduce(in_shape, reduction_axes);
+                auto out_shape = reduce(in_shape, reduction_axes, false);
                 CoordinateTransform output_transform(out_shape);

                 for (const Coordinate& output_coord : output_transform)
@@ -50,7 +50,7 @@ namespace ngraph

                 for (const Coordinate& input_coord : input_transform)
                 {
-                    Coordinate output_coord = reduce(input_coord, reduction_axes);
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, false);

                     T x = arg[input_transform.index(input_coord)];
                     T min = out[output_transform.index(output_coord)];
@@ -28,9 +28,13 @@ namespace ngraph
         namespace reference
         {
             template <typename T>
-            void product(const T* arg, T* out, const Shape& in_shape, const AxisSet& reduction_axes)
+            void product(const T* arg,
+                         T* out,
+                         const Shape& in_shape,
+                         const AxisSet& reduction_axes,
+                         bool keep_dims)
             {
-                auto out_shape = reduce(in_shape, reduction_axes);
+                auto out_shape = reduce(in_shape, reduction_axes, keep_dims);
                 CoordinateTransform output_transform(out_shape);

                 for (const Coordinate& output_coord : output_transform)
@@ -42,7 +46,7 @@ namespace ngraph

                 for (const Coordinate& input_coord : input_transform)
                 {
-                    Coordinate output_coord = reduce(input_coord, reduction_axes);
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, keep_dims);

                     size_t output_index = output_transform.index(output_coord);
ngraph/core/include/ngraph/runtime/reference/reduce_l1.hpp (new file, 59 lines)
@@ -0,0 +1,59 @@
+//*****************************************************************************
+// Copyright 2017-2020 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//*****************************************************************************
+
+#pragma once
+
+#include <cmath>
+
+#include "ngraph/coordinate_transform.hpp"
+#include "ngraph/shape_util.hpp"
+
+namespace ngraph
+{
+    namespace runtime
+    {
+        namespace reference
+        {
+            template <typename T>
+            void reduce_l1(const T* arg,
+                           T* out,
+                           const Shape& in_shape,
+                           const AxisSet& reduction_axes,
+                           bool keep_dims)
+            {
+                auto out_shape = reduce(in_shape, reduction_axes, keep_dims);
+                CoordinateTransform output_transform(out_shape);
+
+                for (const Coordinate& output_coord : output_transform)
+                {
+                    out[output_transform.index(output_coord)] = 0;
+                }
+
+                CoordinateTransform input_transform(in_shape);
+
+                for (const Coordinate& input_coord : input_transform)
+                {
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, keep_dims);
+
+                    size_t output_index = output_transform.index(output_coord);
+
+                    out[output_index] =
+                        out[output_index] + abs(arg[input_transform.index(input_coord)]);
+                }
+            }
+        }
+    }
+}
ngraph/core/include/ngraph/runtime/reference/reduce_l2.hpp (new file, 65 lines)
@@ -0,0 +1,65 @@
+//*****************************************************************************
+// Copyright 2017-2020 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//*****************************************************************************
+
+#pragma once
+
+#include <cmath>
+
+#include "ngraph/coordinate_transform.hpp"
+#include "ngraph/shape_util.hpp"
+
+namespace ngraph
+{
+    namespace runtime
+    {
+        namespace reference
+        {
+            template <typename T>
+            void reduce_l2(const T* arg,
+                           T* out,
+                           const Shape& in_shape,
+                           const AxisSet& reduction_axes,
+                           bool keep_dims)
+            {
+                auto out_shape = reduce(in_shape, reduction_axes, keep_dims);
+                CoordinateTransform output_transform(out_shape);
+
+                for (const Coordinate& output_coord : output_transform)
+                {
+                    out[output_transform.index(output_coord)] = 0;
+                }
+
+                CoordinateTransform input_transform(in_shape);
+
+                for (const Coordinate& input_coord : input_transform)
+                {
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, keep_dims);
+
+                    size_t output_index = output_transform.index(output_coord);
+
+                    out[output_index] = out[output_index] +
+                                        arg[input_transform.index(input_coord)] *
+                                            arg[input_transform.index(input_coord)];
+                }
+                for (const Coordinate& output_coord : output_transform)
+                {
+                    out[output_transform.index(output_coord)] =
+                        sqrt(out[output_transform.index(output_coord)]);
+                }
+            }
+        }
+    }
+}
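Both reference kernels follow the same two-phase scheme: zero the output buffer, accumulate one value per input coordinate, then (for L2 only) post-process with a square root. The same logic in NumPy terms (a sketch for illustration, not the shipped code):

import numpy as np

def reduce_l1_ref(arg, reduction_axes, keep_dims):
    # single accumulation pass over absolute values
    return np.sum(np.abs(arg), axis=tuple(reduction_axes), keepdims=keep_dims)

def reduce_l2_ref(arg, reduction_axes, keep_dims):
    # accumulate squares first, take the square root in a second pass,
    # mirroring the two loops in the C++ kernel above
    acc = np.sum(np.square(arg), axis=tuple(reduction_axes), keepdims=keep_dims)
    return np.sqrt(acc)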
@@ -31,26 +31,26 @@ namespace ngraph
             template <typename T>
             void softmax(const T* arg, T* out, const Shape& shape, const AxisSet& axes)
             {
-                auto temp_shape = reduce(shape, axes);
+                auto temp_shape = reduce(shape, axes, true);
                 auto temp_elements = shape_size(temp_shape);
                 auto temp_ptr = new T[temp_elements];

-                max(arg, temp_ptr, shape, axes);
+                max(arg, temp_ptr, shape, axes, true);

                 CoordinateTransform transform(shape);
                 CoordinateTransform temp_transform(temp_shape);
                 for (const Coordinate& coord : transform)
                 {
-                    Coordinate temp_coord = reduce(coord, axes);
+                    Coordinate temp_coord = reduce(coord, axes, true);
                     out[transform.index(coord)] = std::exp(
                         arg[transform.index(coord)] - temp_ptr[temp_transform.index(temp_coord)]);
                 }

-                sum(out, temp_ptr, shape, axes);
+                sum(out, temp_ptr, shape, axes, true);

                 for (const Coordinate& coord : transform)
                 {
-                    Coordinate temp_coord = reduce(coord, axes);
+                    Coordinate temp_coord = reduce(coord, axes, true);
                     out[transform.index(coord)] /= temp_ptr[temp_transform.index(temp_coord)];
                 }
@@ -53,9 +53,13 @@ namespace ngraph
             }

             template <typename T>
-            void sum(const T* arg, T* out, const Shape& in_shape, const AxisSet& reduction_axes)
+            void sum(const T* arg,
+                     T* out,
+                     const Shape& in_shape,
+                     const AxisSet& reduction_axes,
+                     bool keep_dims)
             {
-                auto out_shape = reduce(in_shape, reduction_axes);
+                auto out_shape = reduce(in_shape, reduction_axes, keep_dims);
                 CoordinateTransform output_transform(out_shape);
                 std::vector<T> cs(shape_size(out_shape));

@@ -69,7 +73,7 @@ namespace ngraph

                 for (const Coordinate& input_coord : input_transform)
                 {
-                    Coordinate output_coord = reduce(input_coord, reduction_axes);
+                    Coordinate output_coord = reduce(input_coord, reduction_axes, keep_dims);

                     T x = arg[input_transform.index(input_coord)];
                     T& z = out[output_transform.index(output_coord)];
@@ -41,23 +41,30 @@ namespace ngraph

     // Removes some values from a vector of axis values
     template <typename AXIS_VALUES>
-    AXIS_VALUES reduce(const AXIS_VALUES& axis_values, const AxisSet& deleted_axes)
+    AXIS_VALUES reduce(const AXIS_VALUES& axis_values, const AxisSet& deleted_axes, bool keep_dims)
     {
-        AxisSet axes;
+        AXIS_VALUES result;

         for (size_t i = 0; i < axis_values.size(); i++)
         {
             if (deleted_axes.find(i) == deleted_axes.end())
             {
-                axes.insert(i);
+                result.push_back(axis_values[i]);
             }
+            else
+            {
+                if (keep_dims)
+                    result.push_back(1);
+            }
         }

-        return project(axis_values, axes);
+        return result;
     }

     template <>
-    NGRAPH_API PartialShape reduce(const PartialShape& shape, const AxisSet& deleted_axes);
+    NGRAPH_API PartialShape reduce(const PartialShape& shape,
+                                   const AxisSet& deleted_axes,
+                                   bool keep_dims);

     // TODO: check validity, i.e. that the new axis indices are all less than
     // axis_values.size()+num_new_axes.
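The new keep_dims flag changes only the output shape, not which elements are combined: a reduced axis is either dropped or retained with extent 1. The templated reduce() above, restated in Python:

def reduce_shape(in_shape, reduction_axes, keep_dims):
    """Python equivalent of the templated C++ reduce() above."""
    out = []
    for i, dim in enumerate(in_shape):
        if i not in reduction_axes:
            out.append(dim)   # axis survives untouched
        elif keep_dims:
            out.append(1)     # reduced axis kept with extent 1
    return out

assert reduce_shape([3, 2, 2], {1}, keep_dims=False) == [3, 2]
assert reduce_shape([3, 2, 2], {1}, keep_dims=True) == [3, 1, 2]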
@@ -89,30 +89,36 @@ shared_ptr<Node> op::v0::Max::get_default_value() const
 namespace
 {
     template <element::Type_t ET>
-    bool evaluate(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate(const HostTensorPtr& arg,
+                  const HostTensorPtr& out,
+                  const AxisSet& axes,
+                  bool keep_dims)
     {
-        out->set_shape(reduce(arg->get_shape(), axes));
+        out->set_shape(reduce(arg->get_shape(), axes, false));
         runtime::reference::max(
-            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes);
+            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes, keep_dims);
         return true;
     }

-    bool evaluate_max(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate_max(const HostTensorPtr& arg,
+                      const HostTensorPtr& out,
+                      const AxisSet& axes,
+                      bool keep_dims)
     {
         bool rc = true;
         switch (arg->get_element_type())
         {
-            TYPE_CASE(i32)(arg, out, axes);
+            TYPE_CASE(i32)(arg, out, axes, keep_dims);
            break;
-            TYPE_CASE(i64)(arg, out, axes);
+            TYPE_CASE(i64)(arg, out, axes, keep_dims);
            break;
-            TYPE_CASE(u32)(arg, out, axes);
+            TYPE_CASE(u32)(arg, out, axes, keep_dims);
            break;
-            TYPE_CASE(u64)(arg, out, axes);
+            TYPE_CASE(u64)(arg, out, axes, keep_dims);
            break;
-            TYPE_CASE(f16)(arg, out, axes);
+            TYPE_CASE(f16)(arg, out, axes, keep_dims);
            break;
-            TYPE_CASE(f32)(arg, out, axes);
+            TYPE_CASE(f32)(arg, out, axes, keep_dims);
            break;
            default: rc = false; break;
        }
@@ -123,7 +129,7 @@ namespace
 bool op::v0::Max::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
 {
     OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Max::evaluate");
-    return evaluate_max(inputs[0], outputs[0], get_reduction_axes());
+    return evaluate_max(inputs[0], outputs[0], get_reduction_axes(), false);
 }

 constexpr NodeTypeInfo op::v1::ReduceMax::type_info;
@@ -146,5 +152,5 @@ bool op::v1::ReduceMax::evaluate(const HostTensorVector& outputs,
                                  const HostTensorVector& inputs) const
 {
     OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v1::ReduceMax::evaluate");
-    return evaluate_max(inputs[0], outputs[0], get_reduction_axes());
+    return evaluate_max(inputs[0], outputs[0], get_reduction_axes(), get_keep_dims());
 }
@@ -91,7 +91,7 @@ namespace
     template <element::Type_t ET>
     bool evaluate(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
     {
-        out->set_shape(reduce(arg->get_shape(), axes));
+        out->set_shape(reduce(arg->get_shape(), axes, false));
         runtime::reference::min(
             arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes);
         return true;
@@ -52,30 +52,36 @@ shared_ptr<Node> op::v0::Product::get_default_value() const
 namespace
 {
     template <element::Type_t ET>
-    bool evaluate(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate(const HostTensorPtr& arg,
+                  const HostTensorPtr& out,
+                  const AxisSet& axes,
+                  bool keep_dims)
     {
-        out->set_shape(reduce(arg->get_shape(), axes));
+        out->set_shape(reduce(arg->get_shape(), axes, keep_dims));
         runtime::reference::product(
-            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes);
+            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes, keep_dims);
         return true;
     }

-    bool evaluate_product(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate_product(const HostTensorPtr& arg,
+                          const HostTensorPtr& out,
+                          const AxisSet& axes,
+                          bool keep_dims)
     {
         bool rc = true;
         switch (arg->get_element_type())
         {
-            TYPE_CASE(i32)(arg, out, axes);
+            TYPE_CASE(i32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(i64)(arg, out, axes);
+            TYPE_CASE(i64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u32)(arg, out, axes);
+            TYPE_CASE(u32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u64)(arg, out, axes);
+            TYPE_CASE(u64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f16)(arg, out, axes);
+            TYPE_CASE(f16)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f32)(arg, out, axes);
+            TYPE_CASE(f32)(arg, out, axes, keep_dims);
             break;
             default: rc = false; break;
         }
@@ -87,5 +93,5 @@ bool op::v0::Product::evaluate(const HostTensorVector& outputs,
                                const HostTensorVector& inputs) const
 {
     OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Product::evaluate");
-    return evaluate_product(inputs[0], outputs[0], get_reduction_axes());
+    return evaluate_product(inputs[0], outputs[0], get_reduction_axes(), false);
 }
ngraph/core/src/op/reduce_l1.cpp (new file, 91 lines)
@@ -0,0 +1,91 @@
+//*****************************************************************************
+// Copyright 2017-2020 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//*****************************************************************************
+
+#include "ngraph/op/reduce_l1.hpp"
+#include "ngraph/graph_util.hpp"
+#include "ngraph/itt.hpp"
+#include "ngraph/runtime/host_tensor.hpp"
+#include "ngraph/runtime/reference/reduce_l1.hpp"
+#include "ngraph/shape_util.hpp"
+
+using namespace std;
+using namespace ngraph;
+
+constexpr NodeTypeInfo op::v4::ReduceL1::type_info;
+
+op::v4::ReduceL1::ReduceL1(const Output<Node>& arg,
+                           const Output<Node>& reduction_axes,
+                           bool keep_dims)
+    : ArithmeticReductionKeepDims(arg, reduction_axes, keep_dims)
+{
+    constructor_validate_and_infer_types();
+}
+
+shared_ptr<Node> op::v4::ReduceL1::get_default_value() const
+{
+    return ngraph::make_constant_from_string("0", get_element_type(), get_shape());
+}
+
+shared_ptr<Node> op::v4::ReduceL1::clone_with_new_inputs(const OutputVector& new_args) const
+{
+    check_new_args_count(this, new_args);
+    return make_shared<op::v4::ReduceL1>(new_args.at(0), new_args.at(1), get_keep_dims());
+}
+
+namespace
+{
+    template <element::Type_t ET>
+    bool evaluate(const HostTensorPtr& arg,
+                  const HostTensorPtr& out,
+                  const AxisSet& axes,
+                  bool keep_dims)
+    {
+        out->set_shape(reduce(arg->get_shape(), axes, keep_dims));
+        runtime::reference::reduce_l1(
+            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes, keep_dims);
+        return true;
+    }
+
+    bool evaluate_sum(const HostTensorPtr& arg,
+                      const HostTensorPtr& out,
+                      const AxisSet& axes,
+                      bool keep_dims)
+    {
+        bool rc = true;
+        switch (arg->get_element_type())
+        {
+            TYPE_CASE(i32)(arg, out, axes, keep_dims);
+            break;
+            TYPE_CASE(i64)(arg, out, axes, keep_dims);
+            break;
+            TYPE_CASE(bf16)(arg, out, axes, keep_dims);
+            break;
+            TYPE_CASE(f16)(arg, out, axes, keep_dims);
+            break;
+            TYPE_CASE(f32)(arg, out, axes, keep_dims);
+            break;
+            default: rc = false; break;
+        }
+        return rc;
+    }
+}
+
+bool op::v4::ReduceL1::evaluate(const HostTensorVector& outputs,
+                                const HostTensorVector& inputs) const
+{
+    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v4::ReduceL1::evaluate");
+    return evaluate_sum(inputs[0], outputs[0], get_reduction_axes(), get_keep_dims());
+}
ngraph/core/src/op/reduce_l2.cpp (new file, 87 lines)
@@ -0,0 +1,87 @@
+//*****************************************************************************
+// Copyright 2017-2020 Intel Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//*****************************************************************************
+
+#include "ngraph/op/reduce_l2.hpp"
+#include "ngraph/graph_util.hpp"
+#include "ngraph/itt.hpp"
+#include "ngraph/runtime/host_tensor.hpp"
+#include "ngraph/runtime/reference/reduce_l2.hpp"
+#include "ngraph/shape_util.hpp"
+
+using namespace std;
+using namespace ngraph;
+
+constexpr NodeTypeInfo op::v4::ReduceL2::type_info;
+
+op::v4::ReduceL2::ReduceL2(const Output<Node>& arg,
+                           const Output<Node>& reduction_axes,
+                           bool keep_dims)
+    : ArithmeticReductionKeepDims(arg, reduction_axes, keep_dims)
+{
+    constructor_validate_and_infer_types();
+}
+
+shared_ptr<Node> op::v4::ReduceL2::get_default_value() const
+{
+    return ngraph::make_constant_from_string("0", get_element_type(), get_shape());
+}
+
+shared_ptr<Node> op::v4::ReduceL2::clone_with_new_inputs(const OutputVector& new_args) const
+{
+    check_new_args_count(this, new_args);
+    return make_shared<op::v4::ReduceL2>(new_args.at(0), new_args.at(1), get_keep_dims());
+}
+
+namespace
+{
+    template <element::Type_t ET>
+    bool evaluate(const HostTensorPtr& arg,
+                  const HostTensorPtr& out,
+                  const AxisSet& axes,
+                  bool keep_dims)
+    {
+        out->set_shape(reduce(arg->get_shape(), axes, keep_dims));
+        runtime::reference::reduce_l2(
+            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes, keep_dims);
+        return true;
+    }
+
+    bool evaluate_reduce_l2(const HostTensorPtr& arg,
+                            const HostTensorPtr& out,
+                            const AxisSet& axes,
+                            bool keep_dims)
+    {
+        bool rc = true;
+        switch (arg->get_element_type())
+        {
+            TYPE_CASE(bf16)(arg, out, axes, keep_dims);
+            break;
+            TYPE_CASE(f16)(arg, out, axes, keep_dims);
+            break;
+            TYPE_CASE(f32)(arg, out, axes, keep_dims);
+            break;
+            default: rc = false; break;
+        }
+        return rc;
+    }
+}
+
+bool op::v4::ReduceL2::evaluate(const HostTensorVector& outputs,
+                                const HostTensorVector& inputs) const
+{
+    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v4::ReduceL2::evaluate");
+    return evaluate_reduce_l2(inputs[0], outputs[0], get_reduction_axes(), get_keep_dims());
+}
@@ -44,7 +44,8 @@ namespace
 {
     bool evaluate_reduce_logical_and(const HostTensorPtr& data,
                                      const HostTensorPtr& axes,
-                                     const HostTensorPtr& out)
+                                     const HostTensorPtr& out,
+                                     bool keep_dims)
     {
         try
         {
@@ -53,7 +54,8 @@ namespace
             runtime::reference::reduce_logical_and(data->get_data_ptr<char>(),
                                                    out->get_data_ptr<char>(),
                                                    data->get_shape(),
-                                                   reduction_axes);
+                                                   reduction_axes,
+                                                   keep_dims);

             return true;
         }
@@ -80,6 +82,6 @@ bool op::v1::ReduceLogicalAnd::evaluate(const HostTensorVector& outputs,
     }
     else
     {
-        return evaluate_reduce_logical_and(data, axes, out);
+        return evaluate_reduce_logical_and(data, axes, out, get_keep_dims());
     }
 }
@@ -44,7 +44,8 @@ namespace
 {
     bool evaluate_reduce_logical_or(const HostTensorPtr& data,
                                     const HostTensorPtr& axes,
-                                    const HostTensorPtr& out)
+                                    const HostTensorPtr& out,
+                                    bool keep_dims)
     {
         try
         {
@@ -53,7 +54,8 @@ namespace
             runtime::reference::reduce_logical_or(data->get_data_ptr<char>(),
                                                   out->get_data_ptr<char>(),
                                                   data->get_shape(),
-                                                  reduction_axes);
+                                                  reduction_axes,
+                                                  keep_dims);

             return true;
         }
@@ -80,6 +82,6 @@ bool op::v1::ReduceLogicalOr::evaluate(const HostTensorVector& outputs,
     }
     else
     {
-        return evaluate_reduce_logical_or(data, axes, out);
+        return evaluate_reduce_logical_or(data, axes, out, get_keep_dims());
     }
 }
@@ -44,30 +44,36 @@ shared_ptr<Node> op::v1::ReduceMean::clone_with_new_inputs(const OutputVector& n
 namespace
 {
     template <element::Type_t ET>
-    bool evaluate(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate(const HostTensorPtr& arg,
+                  const HostTensorPtr& out,
+                  const AxisSet& axes,
+                  bool keep_dims)
     {
-        out->set_shape(reduce(arg->get_shape(), axes));
+        out->set_shape(reduce(arg->get_shape(), axes, keep_dims));
         runtime::reference::mean(
-            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes);
+            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes, keep_dims);
         return true;
     }

-    bool evaluate_mean(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate_mean(const HostTensorPtr& arg,
+                       const HostTensorPtr& out,
+                       const AxisSet& axes,
+                       bool keep_dims)
     {
         bool rc = true;
         switch (arg->get_element_type())
         {
-            TYPE_CASE(i32)(arg, out, axes);
+            TYPE_CASE(i32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(i64)(arg, out, axes);
+            TYPE_CASE(i64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u32)(arg, out, axes);
+            TYPE_CASE(u32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u64)(arg, out, axes);
+            TYPE_CASE(u64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f16)(arg, out, axes);
+            TYPE_CASE(f16)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f32)(arg, out, axes);
+            TYPE_CASE(f32)(arg, out, axes, keep_dims);
             break;
             default: rc = false; break;
         }
@@ -79,5 +85,5 @@ bool op::v1::ReduceMean::evaluate(const HostTensorVector& outputs,
                                   const HostTensorVector& inputs) const
 {
     OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v1::ReduceMean::evaluate");
-    return evaluate_mean(inputs[0], outputs[0], get_reduction_axes());
+    return evaluate_mean(inputs[0], outputs[0], get_reduction_axes(), get_keep_dims());
 }
@@ -48,30 +48,36 @@ shared_ptr<Node> op::v1::ReduceProd::clone_with_new_inputs(const OutputVector& n
 namespace
 {
     template <element::Type_t ET>
-    bool evaluate(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate(const HostTensorPtr& arg,
+                  const HostTensorPtr& out,
+                  const AxisSet& axes,
+                  bool keep_dims)
     {
-        out->set_shape(reduce(arg->get_shape(), axes));
+        out->set_shape(reduce(arg->get_shape(), axes, keep_dims));
         runtime::reference::product(
-            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes);
+            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes, keep_dims);
         return true;
     }

-    bool evaluate_product(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate_product(const HostTensorPtr& arg,
+                          const HostTensorPtr& out,
+                          const AxisSet& axes,
+                          bool keep_dims)
     {
         bool rc = true;
         switch (arg->get_element_type())
         {
-            TYPE_CASE(i32)(arg, out, axes);
+            TYPE_CASE(i32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(i64)(arg, out, axes);
+            TYPE_CASE(i64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u32)(arg, out, axes);
+            TYPE_CASE(u32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u64)(arg, out, axes);
+            TYPE_CASE(u64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f16)(arg, out, axes);
+            TYPE_CASE(f16)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f32)(arg, out, axes);
+            TYPE_CASE(f32)(arg, out, axes, keep_dims);
             break;
             default: rc = false; break;
         }
@@ -83,5 +89,5 @@ bool op::v1::ReduceProd::evaluate(const HostTensorVector& outputs,
                                   const HostTensorVector& inputs) const
 {
     OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v1::ReduceProd::evaluate");
-    return evaluate_product(inputs[0], outputs[0], get_reduction_axes());
+    return evaluate_product(inputs[0], outputs[0], get_reduction_axes(), get_keep_dims());
 }
@@ -49,30 +49,36 @@ shared_ptr<Node> op::v1::ReduceSum::clone_with_new_inputs(const OutputVector& ne
 namespace
 {
     template <element::Type_t ET>
-    bool evaluate(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate(const HostTensorPtr& arg,
+                  const HostTensorPtr& out,
+                  const AxisSet& axes,
+                  bool keep_dims)
     {
-        out->set_shape(reduce(arg->get_shape(), axes));
+        out->set_shape(reduce(arg->get_shape(), axes, keep_dims));
         runtime::reference::sum(
-            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes);
+            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes, keep_dims);
         return true;
     }

-    bool evaluate_sum(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate_sum(const HostTensorPtr& arg,
+                      const HostTensorPtr& out,
+                      const AxisSet& axes,
+                      bool keep_dims)
     {
         bool rc = true;
         switch (arg->get_element_type())
         {
-            TYPE_CASE(i32)(arg, out, axes);
+            TYPE_CASE(i32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(i64)(arg, out, axes);
+            TYPE_CASE(i64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u32)(arg, out, axes);
+            TYPE_CASE(u32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u64)(arg, out, axes);
+            TYPE_CASE(u64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f16)(arg, out, axes);
+            TYPE_CASE(f16)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f32)(arg, out, axes);
+            TYPE_CASE(f32)(arg, out, axes, keep_dims);
             break;
             default: rc = false; break;
         }
@@ -84,5 +90,5 @@ bool op::v1::ReduceSum::evaluate(const HostTensorVector& outputs,
                                  const HostTensorVector& inputs) const
 {
     OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v1::ReduceSum::evaluate");
-    return evaluate_sum(inputs[0], outputs[0], get_reduction_axes());
+    return evaluate_sum(inputs[0], outputs[0], get_reduction_axes(), get_keep_dims());
 }
@@ -53,30 +53,36 @@ shared_ptr<Node> op::v0::Sum::get_default_value() const
 namespace
 {
     template <element::Type_t ET>
-    bool evaluate(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate(const HostTensorPtr& arg,
+                  const HostTensorPtr& out,
+                  const AxisSet& axes,
+                  bool keep_dims)
    {
-        out->set_shape(reduce(arg->get_shape(), axes));
+        out->set_shape(reduce(arg->get_shape(), axes, false));
         runtime::reference::sum(
-            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes);
+            arg->get_data_ptr<ET>(), out->get_data_ptr<ET>(), arg->get_shape(), axes, keep_dims);
         return true;
     }

-    bool evaluate_sum(const HostTensorPtr& arg, const HostTensorPtr& out, const AxisSet& axes)
+    bool evaluate_sum(const HostTensorPtr& arg,
+                      const HostTensorPtr& out,
+                      const AxisSet& axes,
+                      bool keep_dims)
     {
         bool rc = true;
         switch (arg->get_element_type())
         {
-            TYPE_CASE(i32)(arg, out, axes);
+            TYPE_CASE(i32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(i64)(arg, out, axes);
+            TYPE_CASE(i64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u32)(arg, out, axes);
+            TYPE_CASE(u32)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(u64)(arg, out, axes);
+            TYPE_CASE(u64)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f16)(arg, out, axes);
+            TYPE_CASE(f16)(arg, out, axes, keep_dims);
             break;
-            TYPE_CASE(f32)(arg, out, axes);
+            TYPE_CASE(f32)(arg, out, axes, keep_dims);
             break;
             default: rc = false; break;
         }
@@ -87,5 +93,5 @@ namespace
 bool op::v0::Sum::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
 {
     OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Sum::evaluate");
-    return evaluate_sum(inputs[0], outputs[0], get_reduction_axes());
+    return evaluate_sum(inputs[0], outputs[0], get_reduction_axes(), false);
 }
@@ -46,14 +46,16 @@ static shared_ptr<op::Constant>
         runtime::reference::max<T>(constant->get_data_ptr<T>(),
                                    data_ptr,
                                    constant->get_output_shape(0),
-                                   max->get_reduction_axes());
+                                   max->get_reduction_axes(),
+                                   false);
     }
     else if (auto reduce_max = as_type_ptr<op::v1::ReduceMax>(reduction_node))
     {
         runtime::reference::max<T>(constant->get_data_ptr<T>(),
                                    data_ptr,
                                    constant->get_output_shape(0),
-                                   reduce_max->get_reduction_axes());
+                                   reduce_max->get_reduction_axes(),
+                                   reduce_max->get_keep_dims());
     }
     else if (auto min = as_type_ptr<op::Min>(reduction_node))
     {
@@ -74,35 +76,40 @@ static shared_ptr<op::Constant>
         runtime::reference::product<T>(constant->get_data_ptr<T>(),
                                        data_ptr,
                                        constant->get_output_shape(0),
-                                       prod->get_reduction_axes());
+                                       prod->get_reduction_axes(),
+                                       false);
     }
     else if (auto reduce_prod = as_type_ptr<op::v1::ReduceProd>(reduction_node))
     {
         runtime::reference::product<T>(constant->get_data_ptr<T>(),
                                        data_ptr,
                                        constant->get_output_shape(0),
-                                       reduce_prod->get_reduction_axes());
+                                       reduce_prod->get_reduction_axes(),
+                                       reduce_prod->get_keep_dims());
     }
     else if (auto sum = as_type_ptr<op::Sum>(reduction_node))
     {
         runtime::reference::sum<T>(constant->get_data_ptr<T>(),
                                    data_ptr,
                                    constant->get_output_shape(0),
-                                   sum->get_reduction_axes());
+                                   sum->get_reduction_axes(),
+                                   false);
     }
     else if (auto reduce_sum = as_type_ptr<op::v1::ReduceSum>(reduction_node))
     {
         runtime::reference::sum<T>(constant->get_data_ptr<T>(),
                                    data_ptr,
                                    constant->get_output_shape(0),
-                                   reduce_sum->get_reduction_axes());
+                                   reduce_sum->get_reduction_axes(),
+                                   reduce_sum->get_keep_dims());
     }
     else if (auto reduce_mean = as_type_ptr<op::v1::ReduceMean>(reduction_node))
    {
         runtime::reference::mean<T>(constant->get_data_ptr<T>(),
                                     data_ptr,
                                     constant->get_output_shape(0),
-                                    reduce_mean->get_reduction_axes());
+                                    reduce_mean->get_reduction_axes(),
+                                    reduce_mean->get_keep_dims());
     }
     else
     {
@@ -34,9 +34,9 @@ static shared_ptr<op::Constant> fold_constant_logical_reduction(shared_ptr<op::C
     {
         runtime::reference::any(constant->get_data_ptr<char>(),
                                 data_ptr,
-                                constant->get_output_shape(0),
-                                reduction_node->get_shape(),
-                                any->get_reduction_axes());
+                                reduction_node->get_input_shape(0),
+                                any->get_reduction_axes(),
+                                false);
     }
     else if (auto reduce_and = as_type_ptr<::ngraph::op::v1::ReduceLogicalAnd>(reduction_node))
     {
@@ -44,7 +44,8 @@ static shared_ptr<op::Constant> fold_constant_logical_reduction(shared_ptr<op::C
         const auto input_shape = reduce_and->get_input_shape(0);
         const char* arg = constant->get_data_ptr<char>();

-        runtime::reference::reduce_logical_and(arg, data_ptr, input_shape, reduction_axes);
+        runtime::reference::reduce_logical_and(
+            arg, data_ptr, input_shape, reduction_axes, reduce_and->get_keep_dims());
     }
     else if (auto reduce_or = as_type_ptr<::ngraph::op::v1::ReduceLogicalOr>(reduction_node))
     {
@@ -52,7 +53,8 @@ static shared_ptr<op::Constant> fold_constant_logical_reduction(shared_ptr<op::C
         const auto input_shape = reduce_or->get_input_shape(0);
         const char* arg = constant->get_data_ptr<char>();

-        runtime::reference::reduce_logical_or(arg, data_ptr, input_shape, reduction_axes);
+        runtime::reference::reduce_logical_or(
+            arg, data_ptr, input_shape, reduction_axes, reduce_or->get_keep_dims());
     }
     else
     {
@ -44,7 +44,7 @@ PartialShape ngraph::project(const PartialShape& shape, const AxisSet& axes)
}

template <>
PartialShape ngraph::reduce(const PartialShape& shape, const AxisSet& deleted_axes)
PartialShape ngraph::reduce(const PartialShape& shape, const AxisSet& deleted_axes, bool keep_dims)
{
if (shape.rank().is_dynamic())
{
@ -52,17 +52,22 @@ PartialShape ngraph::reduce(const PartialShape& shape, const AxisSet& deleted_ax
}
else
{
    AxisSet axes;
    std::vector<Dimension> result_dims;

    for (size_t i = 0; i < shape.rank().get_length(); i++)
    {
        if (deleted_axes.find(i) == deleted_axes.end())
        {
            axes.insert(i);
            result_dims.push_back(shape[i]);
        }
        else
        {
            if (keep_dims)
                result_dims.push_back(1);
        }
    }

    return project(shape, axes);
    return result_dims;
}
}
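For illustration, the keep_dims shape rule that `ngraph::reduce` implements above can be sketched in a few lines of Python; `reduced_shape` is a hypothetical helper written for this note, not part of the nGraph API, and axes are assumed to be normalized (non-negative):

# Each reduced axis is replaced by 1 when keep_dims is set, otherwise dropped.
def reduced_shape(shape, axes, keep_dims):
    return [1 if i in axes else d
            for i, d in enumerate(shape)
            if keep_dims or i not in axes]

assert reduced_shape([3, 4, 5], {1, 2}, True) == [3, 1, 1]
assert reduced_shape([3, 4, 5], {1, 2}, False) == [3]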
@ -119,6 +119,8 @@ from ngraph.opset4 import psroi_pooling
from ngraph.opset4 import proposal
from ngraph.opset4 import range
from ngraph.opset4 import read_value
from ngraph.opset4 import reduce_l1
from ngraph.opset4 import reduce_l2
from ngraph.opset4 import reduce_logical_and
from ngraph.opset4 import reduce_logical_or
from ngraph.opset4 import reduce_max
@ -107,6 +107,8 @@ from ngraph.opset1.ops import psroi_pooling
from ngraph.opset4.ops import proposal
from ngraph.opset1.ops import range
from ngraph.opset3.ops import read_value
from ngraph.opset4.ops import reduce_l1
from ngraph.opset4.ops import reduce_l2
from ngraph.opset1.ops import reduce_logical_and
from ngraph.opset1.ops import reduce_logical_or
from ngraph.opset1.ops import reduce_max
@ -313,3 +313,37 @@ def proposal(
    return _get_node_factory_opset4().create(
        "Proposal", [class_probs, bbox_deltas, as_node(image_shape)], attrs
    )


@nameable_op
def reduce_l1(
    node: NodeInput, reduction_axes: NodeInput, keep_dims: bool = False, name: Optional[str] = None
) -> Node:
    """L1-reduction operation on input tensor, eliminating the specified reduction axes.

    :param node: The tensor we want to L1-reduce.
    :param reduction_axes: The axes to eliminate through L1 reduction (sum of absolute values).
    :param keep_dims: If set to True, reduced axes are kept as dimensions of size one.
    :param name: Optional name for output node.
    :return: The new node performing L1-reduction operation.
    """
    return _get_node_factory_opset4().create(
        "ReduceL1", as_nodes(node, reduction_axes), {"keep_dims": keep_dims}
    )


@nameable_op
def reduce_l2(
    node: NodeInput, reduction_axes: NodeInput, keep_dims: bool = False, name: Optional[str] = None
) -> Node:
    """L2-reduction operation on input tensor, eliminating the specified reduction axes.

    :param node: The tensor we want to L2-reduce.
    :param reduction_axes: The axes to eliminate through L2 reduction (square root of sum of squares).
    :param keep_dims: If set to True, reduced axes are kept as dimensions of size one.
    :param name: Optional name for output node.
    :return: The new node performing L2-reduction operation.
    """
    return _get_node_factory_opset4().create(
        "ReduceL2", as_nodes(node, reduction_axes), {"keep_dims": keep_dims}
    )
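A minimal usage sketch for the two new helpers, assuming a working ngraph Python package; the parameter name, shapes, and the list-to-Constant conversion for reduction_axes are illustrative:

import numpy as np
import ngraph as ng

data = ng.parameter([3, 2, 2], name="data", dtype=np.float32)
l1 = ng.reduce_l1(data, reduction_axes=[2], keep_dims=True)   # output shape [3, 2, 1]
l2 = ng.reduce_l2(data, reduction_axes=[2], keep_dims=False)  # output shape [3, 2]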
@ -73,6 +73,8 @@ set(SRC
op_eval/matmul.cpp
op_eval/mish.cpp
op_eval/non_zero.cpp
op_eval/reduce_l1.cpp
op_eval/reduce_l2.cpp
op_eval/split.cpp
op_eval/strided_slice.cpp
op_eval/variadic_split.cpp
@ -145,6 +147,8 @@ set(SRC
type_prop/quantized_dot.cpp
type_prop/range.cpp
type_prop/read_value.cpp
type_prop/reduce_l1.cpp
type_prop/reduce_l2.cpp
type_prop/replace_slice.cpp
type_prop/reshape.cpp
type_prop/reverse.cpp
ngraph/test/op_eval/reduce_l1.cpp (new file, 71 lines)
@ -0,0 +1,71 @@
//*****************************************************************************
// Copyright 2017-2020 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#include <string>
#include <vector>

#include "gtest/gtest.h"

#include "ngraph/opsets/opset4.hpp"
#include "ngraph/runtime/host_tensor.hpp"
#include "ngraph/validation_util.hpp"
#include "runtime/backend.hpp"
#include "util/test_tools.hpp"

using namespace std;
using namespace ngraph;

TEST(op_eval, reduce_l1_one_axis_keep_dims)
{
    auto data = make_shared<opset4::Parameter>(element::f32, Shape{3, 2, 2});
    auto axes = opset4::Constant::create(element::i32, Shape{1}, {2});
    auto reduce = make_shared<opset4::ReduceL1>(data, axes, true);
    auto fun = make_shared<Function>(OutputVector{reduce}, ParameterVector{data});

    std::vector<float> inputs{1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0};
    std::vector<float> expected_result{3.0, 7.0, 11.0, 15.0, 19.0, 23.0};

    auto result = make_shared<HostTensor>();
    ASSERT_TRUE(fun->evaluate({result},
                              {make_host_tensor<element::Type_t::f32>(Shape{3, 2, 2}, inputs),
                               make_host_tensor<element::Type_t::i32>(Shape{1}, {2})}));
    EXPECT_EQ(result->get_element_type(), element::f32);
    EXPECT_EQ(result->get_shape(), Shape{std::vector<size_t>({3, 2, 1})});
    auto result_data = read_vector<float>(result);
    for (size_t i = 0; i < expected_result.size(); i++)
        EXPECT_NEAR(result_data[i], expected_result[i], 0.000001);
}

TEST(op_eval, reduce_l1_one_axis_do_not_keep_dims)
{
    auto data = make_shared<opset4::Parameter>(element::f32, Shape{3, 2, 2});
    auto axes = opset4::Constant::create(element::i32, Shape{1}, {2});
    auto reduce = make_shared<opset4::ReduceL1>(data, axes, false);
    auto fun = make_shared<Function>(OutputVector{reduce}, ParameterVector{data});

    std::vector<float> inputs{1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0};
    std::vector<float> expected_result{3.0, 7.0, 11.0, 15.0, 19.0, 23.0};

    auto result = make_shared<HostTensor>();
    ASSERT_TRUE(fun->evaluate({result},
                              {make_host_tensor<element::Type_t::f32>(Shape{3, 2, 2}, inputs),
                               make_host_tensor<element::Type_t::i32>(Shape{1}, {2})}));
    EXPECT_EQ(result->get_element_type(), element::f32);
    EXPECT_EQ(result->get_shape(), Shape{std::vector<size_t>({3, 2})});
    auto result_data = read_vector<float>(result);
    for (size_t i = 0; i < expected_result.size(); i++)
        EXPECT_NEAR(result_data[i], expected_result[i], 0.000001);
}
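The expected values in these L1 tests are per-slice sums of absolute values along axis 2; a quick NumPy cross-check, for illustration only:

import numpy as np

x = np.arange(1.0, 13.0, dtype=np.float32).reshape(3, 2, 2)
print(np.abs(x).sum(axis=2).ravel())  # [ 3.  7. 11. 15. 19. 23.]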
ngraph/test/op_eval/reduce_l2.cpp (new file, 73 lines)
@ -0,0 +1,73 @@
//*****************************************************************************
// Copyright 2017-2020 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#include <string>
#include <vector>

#include "gtest/gtest.h"

#include "ngraph/opsets/opset4.hpp"
#include "ngraph/runtime/host_tensor.hpp"
#include "ngraph/validation_util.hpp"
#include "runtime/backend.hpp"
#include "util/test_tools.hpp"

using namespace std;
using namespace ngraph;

TEST(op_eval, reduce_l2_one_axis_keep_dims)
{
    auto data = make_shared<opset4::Parameter>(element::f32, Shape{3, 2, 2});
    auto axes = opset4::Constant::create(element::i32, Shape{1}, {2});
    auto reduce = make_shared<op::v4::ReduceL2>(data, axes, true);
    auto fun = make_shared<Function>(OutputVector{reduce}, ParameterVector{data});

    std::vector<float> inputs{1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0};
    std::vector<float> expected_result{
        2.23606798, 5.0, 7.81024968, 10.63014581, 13.45362405, 16.2788206};

    auto result = make_shared<HostTensor>();
    ASSERT_TRUE(fun->evaluate({result},
                              {make_host_tensor<element::Type_t::f32>(Shape{3, 2, 2}, inputs),
                               make_host_tensor<element::Type_t::i32>(Shape{1}, {2})}));
    EXPECT_EQ(result->get_element_type(), element::f32);
    EXPECT_EQ(result->get_shape(), Shape{std::vector<size_t>({3, 2, 1})});
    auto result_data = read_vector<float>(result);
    for (size_t i = 0; i < expected_result.size(); i++)
        EXPECT_NEAR(result_data[i], expected_result[i], 0.000001);
}

TEST(op_eval, reduce_l2_one_axis_do_not_keep_dims)
{
    auto data = make_shared<opset4::Parameter>(element::f32, Shape{3, 2, 2});
    auto axes = opset4::Constant::create(element::i32, Shape{1}, {2});
    auto reduce = make_shared<op::v4::ReduceL2>(data, axes, false);
    auto fun = make_shared<Function>(OutputVector{reduce}, ParameterVector{data});

    std::vector<float> inputs{1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0};
    std::vector<float> expected_result{
        2.23606798, 5.0, 7.81024968, 10.63014581, 13.45362405, 16.2788206};

    auto result = make_shared<HostTensor>();
    ASSERT_TRUE(fun->evaluate({result},
                              {make_host_tensor<element::Type_t::f32>(Shape{3, 2, 2}, inputs),
                               make_host_tensor<element::Type_t::i32>(Shape{1}, {2})}));
    EXPECT_EQ(result->get_element_type(), element::f32);
    EXPECT_EQ(result->get_shape(), Shape{std::vector<size_t>({3, 2})});
    auto result_data = read_vector<float>(result);
    for (size_t i = 0; i < expected_result.size(); i++)
        EXPECT_NEAR(result_data[i], expected_result[i], 0.000001);
}
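Likewise, the expected L2 values are square roots of per-slice sums of squares along axis 2; a quick NumPy cross-check, for illustration only:

import numpy as np

x = np.arange(1.0, 13.0, dtype=np.float32).reshape(3, 2, 2)
print(np.sqrt((x * x).sum(axis=2)).ravel())
# [ 2.236068  5.  7.81025  10.630146  13.453624  16.278821]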
@ -783,7 +783,7 @@ TEST(partial_shape, partial_shape_project_rank_static_dynamic)
TEST(partial_shape, partial_shape_reduce_rank_dynamic)
{
PartialShape s1{PartialShape::dynamic()};
PartialShape s2 = reduce(s1, AxisSet{284, 0, 103});
PartialShape s2 = reduce(s1, AxisSet{284, 0, 103}, false);

ASSERT_TRUE(s2.rank().is_dynamic());
}
@ -791,7 +791,7 @@ TEST(partial_shape, partial_shape_reduce_rank_dynamic)
TEST(partial_shape, partial_shape_reduce_rank_static_dynamic)
{
PartialShape s1{Dimension::dynamic(), 2, Dimension::dynamic(), 3};
PartialShape s2 = reduce(s1, AxisSet{0, 3});
PartialShape s2 = reduce(s1, AxisSet{0, 3}, false);

ASSERT_TRUE(s2.same_scheme(PartialShape{2, Dimension::dynamic()}));
}
@ -204,8 +204,8 @@ protected:
reference::any(args[0]->get_data_ptr<const char>(),
               out[0]->get_data_ptr<char>(),
               node.get_input_shape(0),
               node.get_output_shape(0),
               any->get_reduction_axes());
               any->get_reduction_axes(),
               false);
break;
}
case OP_TYPEID::Asin:
ngraph/test/type_prop/reduce_l1.cpp (new file, 60 lines)
@ -0,0 +1,60 @@
//*****************************************************************************
// Copyright 2017-2020 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#include "gtest/gtest.h"
#include "ngraph/ngraph.hpp"
#include "util/type_prop.hpp"

using namespace std;
using namespace ngraph;

TEST(type_prop, reduce_l1_v4_axis_out_of_range)
{
    auto arg = make_shared<op::Parameter>(element::f32, Shape{1, 2, 3});
    auto axes = make_shared<op::Constant>(element::i64, Shape{2}, vector<int64_t>{2, 3});
    try
    {
        auto reduce_l1 = make_shared<op::v4::ReduceL1>(arg, axes);
        // Should have thrown, so fail if it didn't
        FAIL() << "Incorrect axes values exception not thrown";
    }
    catch (const NodeValidationFailure& error)
    {
        EXPECT_HAS_SUBSTRING(error.what(), std::string("Reduction axis ("));
    }
    catch (...)
    {
        FAIL() << "Deduced type check failed for unexpected reason";
    }
}

TEST(type_prop, reduce_l1_v4_shape_if_keep_dims)
{
    auto arg = make_shared<op::Parameter>(element::f32, Shape{3, 4, 5});
    auto axes = make_shared<op::Constant>(element::i64, Shape{2}, vector<int64_t>{1, 2});
    auto keep_dims = true;
    auto reduce_l1 = make_shared<op::v4::ReduceL1>(arg, axes, keep_dims);
    ASSERT_TRUE(reduce_l1->get_output_partial_shape(0).compatible(PartialShape{3, 1, 1}));
}

TEST(type_prop, reduce_l1_v4_shape_if_not_keep_dims)
{
    auto arg = make_shared<op::Parameter>(element::f32, Shape{3, 4, 5});
    auto axes = make_shared<op::Constant>(element::i64, Shape{2}, vector<int64_t>{1, 2});
    auto keep_dims = false;
    auto reduce_l1 = make_shared<op::v4::ReduceL1>(arg, axes, keep_dims);
    ASSERT_TRUE(reduce_l1->get_output_partial_shape(0).compatible(PartialShape{3}));
}
ngraph/test/type_prop/reduce_l2.cpp (new file, 60 lines)
@ -0,0 +1,60 @@
//*****************************************************************************
// Copyright 2017-2020 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#include "gtest/gtest.h"
#include "ngraph/ngraph.hpp"
#include "util/type_prop.hpp"

using namespace std;
using namespace ngraph;

TEST(type_prop, reduce_l2_v4_axis_out_of_range)
{
    auto arg = make_shared<op::Parameter>(element::f32, Shape{1, 2, 3});
    auto axes = make_shared<op::Constant>(element::i64, Shape{2}, vector<int64_t>{2, 3});
    try
    {
        auto reduce_l2 = make_shared<op::v4::ReduceL2>(arg, axes);
        // Should have thrown, so fail if it didn't
        FAIL() << "Incorrect axes values exception not thrown";
    }
    catch (const NodeValidationFailure& error)
    {
        EXPECT_HAS_SUBSTRING(error.what(), std::string("Reduction axis ("));
    }
    catch (...)
    {
        FAIL() << "Deduced type check failed for unexpected reason";
    }
}

TEST(type_prop, reduce_l2_v4_shape_if_keep_dims)
{
    auto arg = make_shared<op::Parameter>(element::f32, Shape{3, 4, 5});
    auto axes = make_shared<op::Constant>(element::i64, Shape{2}, vector<int64_t>{1, 2});
    auto keep_dims = true;
    auto reduce_l2 = make_shared<op::v4::ReduceL2>(arg, axes, keep_dims);
    ASSERT_TRUE(reduce_l2->get_output_partial_shape(0).compatible(PartialShape{3, 1, 1}));
}

TEST(type_prop, reduce_l2_v4_shape_if_not_keep_dims)
{
    auto arg = make_shared<op::Parameter>(element::f32, Shape{3, 4, 5});
    auto axes = make_shared<op::Constant>(element::i64, Shape{2}, vector<int64_t>{1, 2});
    auto keep_dims = false;
    auto reduce_l2 = make_shared<op::v4::ReduceL2>(arg, axes, keep_dims);
    ASSERT_TRUE(reduce_l2->get_output_partial_shape(0).compatible(PartialShape{3}));
}