diff --git a/.ci/azure/linux.yml b/.ci/azure/linux.yml
index 22673819b1c..4ed0a79a285 100644
--- a/.ci/azure/linux.yml
+++ b/.ci/azure/linux.yml
@@ -88,7 +88,7 @@ jobs:
rm -rf $(BUILD_SAMPLES_DIR) ; mkdir $(BUILD_SAMPLES_DIR)
sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
sudo mkdir -p $(SHARE_DIR)
- sudo apt --assume-yes install nfs-common
+ sudo apt --assume-yes update && sudo apt --assume-yes install nfs-common
sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(SHARE_DIR) -o vers=4,minorversion=1,sec=sys
mkdir -p $(CCACHE_DIR)
displayName: 'Make dir'
diff --git a/docs/nGraph_DG/nGraphTransformation.md b/docs/nGraph_DG/nGraphTransformation.md
index 03777180ad8..524dbf59c6e 100644
--- a/docs/nGraph_DG/nGraphTransformation.md
+++ b/docs/nGraph_DG/nGraphTransformation.md
@@ -47,7 +47,7 @@ For examples of how to build an nGraph function, see the [Build nGraph Function]
## Transformation types
-nGraph has three main transformation types:
+nGraph has three main transformation types:
* `ngraph::pass::FunctionPass` - straightforward way to work with `ngraph::Function` directly
* `ngraph::pass::MatcherPass` - pattern-based transformation approach
@@ -81,7 +81,7 @@ Template for MatcherPass transformation class
To use `ngraph::pass::MatcherPass`, you need to complete these steps (a minimal sketch combining them follows the list):
1. Create a pattern
-2. Implement a callback
+2. Implement a callback
3. Register the pattern and Matcher
4. Execute MatcherPass
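
Putting the four steps together, the sketch below previews how a complete MatcherPass might look. This is a minimal sketch assuming the 2021-era nGraph API; the pass name `DecomposeDivide` and the Divide -> Multiply/Power rewrite are illustrative, not taken from this document. Each step is detailed in the following sections.

```cpp
#include <memory>

#include <ngraph/opsets/opset3.hpp>
#include <ngraph/pass/graph_rewrite.hpp>
#include <ngraph/pattern/op/wrap_type.hpp>

class DecomposeDivide : public ngraph::pass::MatcherPass {
public:
    DecomposeDivide() {
        using namespace ngraph;
        // Step 1: create a pattern (Divide with two arbitrary inputs).
        auto div_pattern = pattern::wrap_type<opset3::Divide>();

        // Step 2: implement a callback that performs the replacement.
        matcher_pass_callback callback = [](pattern::Matcher& m) {
            auto div = std::dynamic_pointer_cast<opset3::Divide>(m.get_match_root());
            if (!div)
                return false;
            // Divide(a, b) -> Multiply(a, Power(b, -1))
            auto pow = std::make_shared<opset3::Power>(
                div->input_value(1),
                opset3::Constant::create(div->get_input_element_type(1), Shape{}, {-1}));
            auto mul = std::make_shared<opset3::Multiply>(div->input_value(0), pow);
            // Preserve the friendly name and runtime info (see the guidelines below).
            mul->set_friendly_name(div->get_friendly_name());
            copy_runtime_info(div, {pow, mul});
            replace_node(div, mul);
            return true;  // the root node was replaced
        };

        // Step 3: register the pattern and Matcher.
        auto m = std::make_shared<pattern::Matcher>(div_pattern, "DecomposeDivide");
        register_matcher(m, callback);
    }
};
// Step 4: execution is usually delegated to pass::Manager or GraphRewrite,
// as shown in the "Using pass manager" section below.
```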
@@ -90,7 +90,7 @@ So let's go through each of these steps.
### Create a pattern
A pattern is a single-root `ngraph::Function`. The only difference is that you do not need to create a function object; you just create and connect opset or special pattern operations.
Then you take the last created operation and put it as the root of the pattern. This root node will be used as the root node in pattern matching.
-> **NOTE**: Any nodes in a pattern that have no consumers and are not registered as root will not be used in pattern matching.
+> **NOTE**: Any nodes in a pattern that have no consumers and are not registered as root will not be used in pattern matching.
@snippet example_ngraph_utils.cpp pattern:simple_example
@@ -105,7 +105,7 @@ Callback is an action applied to every pattern entrance. In general, callback is
The example above shows the callback structure and how Matcher can be used for accessing nodes detected by the pattern.
The callback returns `true` if the root node was replaced and another pattern cannot be applied to the same root node; otherwise, it returns `false`.
-> **NOTE**: It is not recommended to manipulate with nodes that are under root node. This may affect GraphRewrite execution as it is expected that all nodes that come after root node in topological order are valid and can be used in pattern matching.
+> **NOTE**: It is not recommended to manipulate nodes located below the root node. This may affect GraphRewrite execution, as it expects that all nodes coming after the root node in topological order are valid and can be used in pattern matching.
MatcherPass also provides functionality for reporting newly created nodes, which can then take part in additional pattern matching.
If the MatcherPass was registered in `pass::Manager` or `pass::GraphRewrite`, these registered nodes will be added for additional pattern matching.
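
As a hedged sketch (assuming the callback is defined inside a MatcherPass constructor, so `this` refers to the pass itself; the `Relu` replacement is illustrative), reporting a newly created node could look like this:

```cpp
ngraph::matcher_pass_callback callback = [this](ngraph::pattern::Matcher& m) {
    auto root = m.get_match_root();
    // register_new_node returns the created node and also queues it so that
    // GraphRewrite can run additional pattern matching on it later.
    auto relu = register_new_node<ngraph::opset3::Relu>(root->input_value(0));
    relu->set_friendly_name(root->get_friendly_name());
    ngraph::copy_runtime_info(root, relu);
    ngraph::replace_node(root, relu);
    return true;
};
```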
@@ -144,7 +144,7 @@ Example:
In addition, GraphRewrite handles nodes that were registered by MatcherPasses during their execution. These nodes will be added to the beginning of the sequence of nodes used for pattern matching.
-> **NOTE**: when using `pass::Manager` temporary GraphRewrite is used to execute single MatcherPass.
+> **NOTE**: when using `pass::Manager`, a temporary GraphRewrite is used to execute a single MatcherPass.
GraphRewrite has two algorithms for MatcherPasses execution. The first algorithm is straightforward: it applies each MatcherPass in registration order to the current node.
@@ -153,7 +153,7 @@ GraphRewrite has two algorithms for MatcherPasses execution. First algorithm is
But it is not really efficient when you have a lot of registered passes. So first of all, GraphRewrite checks that all MatcherPass patterns have a type-based root node (meaning that the type of this node is not hidden inside a predicate).
It then creates a map from the registered MatcherPasses, which helps to avoid the additional cost of applying each MatcherPass to each node.
-![graph_rewrite_efficient_search]
+![graph_rewrite_efficient_search]
> **NOTE**: GraphRewrite execution algorithm cannot be set manually and depends only on root nodes registered inside MatcherPasses.
@@ -161,7 +161,7 @@ And then creates map from registered MatcherPasses. That helps to avoid addition
Sometimes patterns cannot be expressed via regular nGraph operations, or doing so is too complicated.
For example, you may want to detect a Convolution->Add sub-graph without specifying a particular input type for the Convolution operation, or to create a pattern where some operations can have different types.
-And for these cases nGraph provides additional helpers to construct patterns for GraphRewrite transformations.
+For these cases, nGraph provides additional helpers to construct patterns for GraphRewrite transformations.
There are two main helpers:
1. `ngraph::pattern::any_input` - helps to express inputs if their types are undefined.
@@ -172,7 +172,7 @@ Let's go through the example to have better understanding of how it works:
> **NOTE**: Node attributes do not participate in pattern matching and are needed only for operation creation. Only operation types participate in pattern matching.
The example below shows basic usage of `pattern::any_input`.
-Here we construct Multiply pattern with arbitrary first input and Constant as a second input.
+Here we construct a Multiply pattern with an arbitrary first input and a Constant as the second input.
Also, as Multiply is a commutative operation, it does not matter in which order we set the inputs (any_input/Constant or Constant/any_input): both cases will be matched.
@snippet example_ngraph_utils.cpp pattern:label_example
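
Condensed, the pattern described above can be sketched as follows (assuming the `ngraph::pattern` helpers and opset3; the matcher name is illustrative):

```cpp
auto input = ngraph::pattern::any_input();
auto constant = ngraph::pattern::wrap_type<ngraph::opset3::Constant>();
auto mul_pattern = ngraph::pattern::wrap_type<ngraph::opset3::Multiply>({input, constant});
auto matcher = std::make_shared<ngraph::pattern::Matcher>(mul_pattern, "MultiplyByConstant");
```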
@@ -196,7 +196,7 @@ In this chapter we will review nGraph API that allows us to manipulate with `ngr
First of all, let's talk about `ngraph::Node` input/output ports. Each nGraph operation has input and output ports, except for operations of `Result`, `Parameter`, or `Constant` type.
Every port belongs to its node, so via a port we can access the parent node, get the shape and type of a particular input/output, get all consumers in the case of an output port, and get the producer node in the case of an input port.
-With output port we can set inputs for newly created operations.
+With an output port, we can set inputs for newly created operations.
Let's look at a code example.
@@ -208,8 +208,8 @@ std::shared_ptr neg_const = opset1::Constant::create(sub->get_input_elemen
Output<Node> data = node->input_value(0);
auto neg = std::make_shared<opset3::Multiply>(data, neg_const);
```
-In this example, the `opset3::Multiply` operation takes `Output<Node>` and `std::shared_ptr<Node>` as inputs. But the constructor takes both as `Output<Node>`.
-In this case, `std::shared_ptr<Node>` will be automatically converted to `Output<Node>` if node has exactly one output port; otherwise, conversion raises an exception.
+In this example, the `opset3::Multiply` operation takes `Output<Node>` and `std::shared_ptr<Node>` as inputs. But the constructor takes both as `Output<Node>`.
+In this case, `std::shared_ptr<Node>` will be automatically converted to `Output<Node>` if the node has exactly one output port; otherwise, conversion raises an exception.
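
For reference, a short sketch of port navigation, assuming `node` is a `std::shared_ptr<ngraph::Node>` with at least one input and one output:

```cpp
ngraph::Output<ngraph::Node> out = node->output(0);            // output port 0
auto out_shape = out.get_partial_shape();                      // shape of this output
auto out_type = out.get_element_type();                        // element type of this output
for (const auto& consumer : out.get_target_inputs()) {         // all consumers of the output
    ngraph::Node* consumer_node = consumer.get_node();
}
ngraph::Input<ngraph::Node> in = node->input(0);               // input port 0
auto producer = in.get_source_output().get_node_shared_ptr();  // producer of this input
```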
### ngraph::Node replacement
@@ -251,9 +251,9 @@ To eliminate operation, nGraph has special method that considers all limitations
@snippet example_ngraph_utils.cpp ngraph:eliminate_node
In case of successful replacement, `replace_output_update_name` automatically preserves the friendly name and runtime info.
-
-## Transformation conditional compilation
+
+## Transformation conditional compilation
The transformation library has two internal macros to support the conditional compilation feature.
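
The macros themselves are not shown in this excerpt. As an assumption based on the OpenVINO transformation library, the sketch below uses `MATCHER_SCOPE` (for a MatcherPass) and `RUN_ON_FUNCTION_SCOPE` (for a FunctionPass); when a transformation is disabled at build time, they turn the enclosing body into a no-op:

```cpp
// Hedged sketch: MATCHER_SCOPE / RUN_ON_FUNCTION_SCOPE are assumptions based
// on the OpenVINO sources, not definitions from this document.
MyMatcherPass::MyMatcherPass() {
    MATCHER_SCOPE(MyMatcherPass);  // early exit if the pass is compiled out;
                                   // also declares `matcher_name` for this scope
    auto relu_pattern = ngraph::pattern::wrap_type<ngraph::opset3::Relu>();
    ngraph::matcher_pass_callback callback = [](ngraph::pattern::Matcher& m) {
        return false;  // illustrative no-op callback
    };
    register_matcher(std::make_shared<ngraph::pattern::Matcher>(relu_pattern, matcher_name), callback);
}

bool MyFunctionPass::run_on_function(std::shared_ptr<ngraph::Function> f) {
    RUN_ON_FUNCTION_SCOPE(MyFunctionPass);  // no-op / early exit when compiled out
    return false;
}
```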
@@ -272,14 +272,14 @@ Use the latest version of OpSet in your transformation. An exception is op_conve
### 2. Dynamic Shape and Rank
-nGraph has two types for shape representation:
+nGraph has two types for shape representation:
`ngraph::Shape` - represents a static shape.
`ngraph::PartialShape` - represents a dynamic shape, meaning that the rank or some of the dimensions are dynamic (undefined).
`ngraph::PartialShape` can be converted to `ngraph::Shape` using the `get_shape()` method if all dimensions are static; otherwise, conversion raises an exception.
@snippet example_ngraph_utils.cpp ngraph:shape
-But in most cases before getting static shape using `get_shape()` method, you need to check that shape is static.
+But in most cases, before getting a static shape using the `get_shape()` method, you need to check that the shape is static.
Also, if your transformation requires only the input shape rank or a particular dimension value, please do not use the `get_shape()` method. See the example below demonstrating how to avoid using `get_shape()`.
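
A minimal sketch of such a check (the `node` variable and the 4D/channel requirements are illustrative):

```cpp
const auto& pshape = node->get_output_partial_shape(0);
// Require only what the transformation actually needs: a static rank of 4
// and a static channel dimension; other dimensions may stay dynamic.
if (pshape.rank().is_static() && pshape.rank().get_length() == 4 && pshape[1].is_static()) {
    const auto channels = pshape[1].get_length();
    // ... transformation logic that needs only `channels` ...
}
```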
@@ -289,7 +289,7 @@ Not using `get_shape()` method makes your transformation more flexible and appli
### 3. Friendly Names
-Each `ngraph::Node` has a unique name (used for nGraph internals) and a friendly name. In transformations we care only about friendly name because it represents the name from intermediate representation (IR).
+Each `ngraph::Node` has a unique name (used for nGraph internals) and a friendly name. In transformations, we care only about the friendly name because it represents the name from the intermediate representation (IR).
The friendly name is also used as the output tensor name (until we have another way to represent the output tensor name), and user code requests intermediate outputs based on these names.
To avoid losing the friendly name when replacing a node with another node or a subgraph, set the original friendly name on the last node of the replacing subgraph. See the example below.
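
As a short sketch (reusing the illustrative `div` -> {`pow`, `mul`} replacement from the MatcherPass example above), the last node of the new sub-graph takes the original name:

```cpp
mul->set_friendly_name(div->get_friendly_name());
ngraph::replace_node(div, mul);
```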
@@ -306,7 +306,7 @@ In more advanced cases, when replaced operation has several outputs and we add a
### 4. Runtime Info
-Runtime info is a map `std::map<std::string, std::shared_ptr<Variant>>` located inside `ngraph::Node` class. It represents additional attributes in `ngraph::Node`.
+Runtime info is a map `std::map<std::string, ov::Any>` located inside the `ngraph::Node` class. It represents additional attributes of `ngraph::Node`.
These attributes can be set by users or by plugins, and when executing a transformation that changes `ngraph::Function`, we need to preserve these attributes, as they will not be automatically propagated.
In most cases, transformations have the following types: 1:1 (replace node with another node), 1:N (replace node with a sub-graph), N:1 (fuse sub-graph into a single node), N:M (any other transformation).
Currently, there is no mechanism that automatically detects transformation types, so we need to propagate this runtime information manually. See the examples below.
@@ -331,7 +331,7 @@ ngraph::copy_runtime_info({conv, bias}, {conv_ie});
ngraph::copy_runtime_info({a, b, c}, {e, f});
```
-When transformation has multiple fusions or decompositions, `ngraph::copy_runtime_info` must be called multiple times for each case.
+When a transformation has multiple fusions or decompositions, `ngraph::copy_runtime_info` must be called for each case.
> **Note**: `copy_runtime_info` removes rt_info from destination nodes. If you want to keep it, you need to include those nodes in the source nodes, like this: `copy_runtime_info({a, b, c}, {a, b})`.
@@ -341,12 +341,12 @@ If your transformation inserts constant sub-graphs that need to be folded, do no
The example below shows how a constant subgraph can be constructed.
```cpp
-// After ConstantFolding pass Power will be replaced with Constant
+// After ConstantFolding pass Power will be replaced with Constant
auto pow = std::make_shared<opset3::Power>(
    opset3::Constant::create(element::f32, Shape{1}, {2}),
    opset3::Constant::create(element::f32, Shape{1}, {3}));
auto mul = std::make_shared<opset3::Multiply>(input /* not constant input */, pow);
-```
+```
Manual constant folding is preferable to `ngraph::pass::ConstantFolding()` because it is much faster.
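
A minimal sketch of manual folding, assuming the `pow` and `input` nodes from the snippet above (`Node::constant_fold` writes the folded outputs into the provided vector and returns whether folding succeeded):

```cpp
ngraph::OutputVector folded(pow->get_output_size());
if (pow->constant_fold(folded, pow->input_values())) {
    // folded[0] now holds a Constant; use it instead of the Power node.
    auto mul = std::make_shared<opset3::Multiply>(input /* not constant input */, folded[0]);
}
```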
@@ -358,18 +358,18 @@ Below you can find an example of manual constant folding:
In the transformation development process:
-* Do not use deprecated nGraph API. Deprecated methods has the `NGRAPH_DEPRECATED` macros in its definition.
+* Do not use deprecated nGraph API. Deprecated methods have the `NGRAPH_DEPRECATED` macro in their definition.
* Do not pass `shared_ptr<Node>` as an input to another node if the node type is unknown or if it has multiple outputs. Use an explicit output port.
* If you replace a node with another node that produces a different shape, remember that the new shape will not be propagated until the first `validate_nodes_and_infer_types` call for `ngraph::Function`. If you are using `pass::Manager`, it will automatically call this method after each transformation execution.
* Do not forget to call the `ngraph::ConstantFolding` pass if your transformation creates constant subgraphs.
* Use the latest OpSet if you are not developing a downgrade transformation pass.
-* When developing a callback for `ngraph::pass::MatcherPass`, do not change nodes that come after the root node in topological order.
+* When developing a callback for `ngraph::pass::MatcherPass`, do not change nodes that come after the root node in topological order.
## Using pass manager
`ngraph::pass::Manager` is a container class that can store a list of transformations and execute them. The main idea of this class is to have a high-level representation for a grouped list of transformations.
It can register and apply any [transformation types](#transformations_types) on a function.
-In addition, `ngraph::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how_to_debug_transformations) section).
+In addition, `ngraph::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how_to_debug_transformations) section).
The example below shows basic usage of `ngraph::pass::Manager`.
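
As a hedged sketch (reusing the illustrative `DecomposeDivide` pass from above, with `f` being a `std::shared_ptr<ngraph::Function>`):

```cpp
ngraph::pass::Manager manager;
manager.register_pass<DecomposeDivide>();               // MatcherPass
manager.register_pass<ngraph::pass::ConstantFolding>(); // FunctionPass
manager.run_passes(f);
```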
diff --git a/docs/snippets/InferenceEngine_Caching3.cpp b/docs/snippets/InferenceEngine_Caching3.cpp
index 282d07b1dc9..db6cd89e5c6 100644
--- a/docs/snippets/InferenceEngine_Caching3.cpp
+++ b/docs/snippets/InferenceEngine_Caching3.cpp
@@ -14,7 +14,7 @@ using namespace InferenceEngine;
auto it = std::find(keys.begin(), keys.end(), METRIC_KEY(IMPORT_EXPORT_SUPPORT));
// If metric 'IMPORT_EXPORT_SUPPORT' exists, check its value
- bool cachingSupported = (it != keys.end()) && ie.GetMetric(deviceName, METRIC_KEY(IMPORT_EXPORT_SUPPORT)).as<bool>();
+ auto cachingSupported = (it != keys.end()) && ie.GetMetric(deviceName, METRIC_KEY(IMPORT_EXPORT_SUPPORT)).as<bool>();
//! [part3]
return 0;
}
diff --git a/docs/template_plugin/tests/functional/op_reference/detection_output.cpp b/docs/template_plugin/tests/functional/op_reference/detection_output.cpp
index fa05c887089..cc0a6a2a7f4 100644
--- a/docs/template_plugin/tests/functional/op_reference/detection_output.cpp
+++ b/docs/template_plugin/tests/functional/op_reference/detection_output.cpp
@@ -45,21 +45,21 @@ struct DetectionOutputParams {
refData(CreateTensor(iType, oValues)),
testcaseName(test_name) {
attrs.num_classes = num_classes;
- attrs.background_label_id = background_label_id;
- attrs.top_k = top_k;
- attrs.variance_encoded_in_target = variance_encoded_in_target;
- attrs.keep_top_k = keep_top_k;
- attrs.code_type = code_type;
- attrs.share_location = share_location;
- attrs.nms_threshold = nms_threshold;
- attrs.confidence_threshold = confidence_threshold;
- attrs.clip_after_nms = clip_after_nms;
- attrs.clip_before_nms = clip_before_nms;
- attrs.decrease_label_id = decrease_label_id;
- attrs.normalized = normalized;
- attrs.input_height = input_height;
- attrs.input_width = input_width;
- attrs.objectness_score = objectness_score;
+ attrs_v8.background_label_id = attrs.background_label_id = background_label_id;
+ attrs_v8.top_k = attrs.top_k = top_k;
+ attrs_v8.variance_encoded_in_target = attrs.variance_encoded_in_target = variance_encoded_in_target;
+ attrs_v8.keep_top_k = attrs.keep_top_k = keep_top_k;
+ attrs_v8.code_type = attrs.code_type = code_type;
+ attrs_v8.share_location = attrs.share_location = share_location;
+ attrs_v8.nms_threshold = attrs.nms_threshold = nms_threshold;
+ attrs_v8.confidence_threshold = attrs.confidence_threshold = confidence_threshold;
+ attrs_v8.clip_after_nms = attrs.clip_after_nms = clip_after_nms;
+ attrs_v8.clip_before_nms = attrs.clip_before_nms = clip_before_nms;
+ attrs_v8.decrease_label_id = attrs.decrease_label_id = decrease_label_id;
+ attrs_v8.normalized = attrs.normalized = normalized;
+ attrs_v8.input_height = attrs.input_height = input_height;
+ attrs_v8.input_width = attrs.input_width = input_width;
+ attrs_v8.objectness_score = attrs.objectness_score = objectness_score;
size_t num_loc_classes = attrs.share_location ? 1 : attrs.num_classes;
size_t prior_box_size = attrs.normalized ? 4 : 5;
@@ -107,21 +107,21 @@ template
auxConfData(CreateTensor(iType, auxConfValues)),
testcaseName(test_name) {
attrs.num_classes = num_classes;
- attrs.background_label_id = background_label_id;
- attrs.top_k = top_k;
- attrs.variance_encoded_in_target = variance_encoded_in_target;
- attrs.keep_top_k = keep_top_k;
- attrs.code_type = code_type;
- attrs.share_location = share_location;
- attrs.nms_threshold = nms_threshold;
- attrs.confidence_threshold = confidence_threshold;
- attrs.clip_after_nms = clip_after_nms;
- attrs.clip_before_nms = clip_before_nms;
- attrs.decrease_label_id = decrease_label_id;
- attrs.normalized = normalized;
- attrs.input_height = input_height;
- attrs.input_width = input_width;
- attrs.objectness_score = objectness_score;
+ attrs_v8.background_label_id = attrs.background_label_id = background_label_id;
+ attrs_v8.top_k = attrs.top_k = top_k;
+ attrs_v8.variance_encoded_in_target = attrs.variance_encoded_in_target = variance_encoded_in_target;
+ attrs_v8.keep_top_k = attrs.keep_top_k = keep_top_k;
+ attrs_v8.code_type = attrs.code_type = code_type;
+ attrs_v8.share_location = attrs.share_location = share_location;
+ attrs_v8.nms_threshold = attrs.nms_threshold = nms_threshold;
+ attrs_v8.confidence_threshold = attrs.confidence_threshold = confidence_threshold;
+ attrs_v8.clip_after_nms = attrs.clip_after_nms = clip_after_nms;
+ attrs_v8.clip_before_nms = attrs.clip_before_nms = clip_before_nms;
+ attrs_v8.decrease_label_id = attrs.decrease_label_id = decrease_label_id;
+ attrs_v8.normalized = attrs.normalized = normalized;
+ attrs_v8.input_height = attrs.input_height = input_height;
+ attrs_v8.input_width = attrs.input_width = input_width;
+ attrs_v8.objectness_score = attrs.objectness_score = objectness_score;
size_t num_loc_classes = attrs.share_location ? 1 : attrs.num_classes;
size_t prior_box_size = attrs.normalized ? 4 : 5;
@@ -135,6 +135,7 @@ template
}
ov::op::v0::DetectionOutput::Attributes attrs;
+ ov::op::v8::DetectionOutput::Attributes attrs_v8;
ov::PartialShape locShape;
ov::PartialShape confShape;
ov::PartialShape priorBoxesShape;
@@ -194,10 +195,61 @@ private:
}
};
+class ReferenceDetectionOutputV8LayerTest : public testing::TestWithParam<DetectionOutputParams>,
+ public CommonReferenceTest {
+public:
+ void SetUp() override {
+ auto params = GetParam();
+ function = CreateFunction(params);
+ if ((params.auxLocShape.size() != 0) && (params.auxConfShape.size() != 0))
+ inputData = {params.locData, params.confData, params.priorBoxesData, params.auxConfData, params.auxLocData};
+ else
+ inputData = {params.locData, params.confData, params.priorBoxesData};
+ refOutData = {params.refData};
+ }
+ static std::string getTestCaseName(const testing::TestParamInfo<DetectionOutputParams>& obj) {
+ auto param = obj.param;
+ std::ostringstream result;
+ result << "locShape=" << param.locShape << "_";
+ result << "confShape=" << param.confShape << "_";
+ result << "priorBoxesShape=" << param.priorBoxesShape << "_";
+ if ((param.auxLocShape.size() != 0) && (param.auxConfShape.size() != 0)) {
+ result << "auxLocShape=" << param.locShape << "_";
+ result << "auxConfShape=" << param.confShape << "_";
+ }
+ result << "iType=" << param.inType;
+ if (param.testcaseName != "")
+ result << "_" << param.testcaseName;
+ return result.str();
+ }
+
+private:
+ static std::shared_ptr CreateFunction(const DetectionOutputParams& params) {
+ const auto loc = std::make_shared(params.inType, params.locShape);
+ const auto conf = std::make_shared(params.inType, params.confShape);
+ const auto priorBoxes = std::make_shared(params.inType, params.priorBoxesShape);
+ if ((params.auxLocShape.size() != 0) && (params.auxConfShape.size() != 0)) {
+ const auto auxConf = std::make_shared(params.inType, params.auxConfShape);
+ const auto auxLoc = std::make_shared(params.inType, params.auxLocShape);
+ const auto DetectionOutput =
+ std::make_shared<op::v8::DetectionOutput>(loc, conf, priorBoxes, auxConf, auxLoc, params.attrs_v8);
+ return std::make_shared(NodeVector{DetectionOutput},
+ ParameterVector{loc, conf, priorBoxes, auxConf, auxLoc});
+ } else {
+ const auto DetectionOutput = std::make_shared<op::v8::DetectionOutput>(loc, conf, priorBoxes, params.attrs_v8);
+ return std::make_shared(NodeVector{DetectionOutput}, ParameterVector{loc, conf, priorBoxes});
+ }
+ }
+};
+
TEST_P(ReferenceDetectionOutputLayerTest, CompareWithRefs) {
Exec();
}
+TEST_P(ReferenceDetectionOutputV8LayerTest, CompareWithRefs) {
+ Exec();
+}
+
template <element::Type_t IN_ET>
std::vector<DetectionOutputParams> generateDetectionOutputFloatParams() {
using T = typename element_type_traits<IN_ET>::value_type;
@@ -517,4 +569,9 @@ std::vector generateDetectionOutputCombinedParams() {
INSTANTIATE_TEST_SUITE_P(smoke_DetectionOutput_With_Hardcoded_Refs, ReferenceDetectionOutputLayerTest,
testing::ValuesIn(generateDetectionOutputCombinedParams()), ReferenceDetectionOutputLayerTest::getTestCaseName);
+INSTANTIATE_TEST_SUITE_P(smoke_DetectionOutput_With_Hardcoded_Refs,
+ ReferenceDetectionOutputV8LayerTest,
+ testing::ValuesIn(generateDetectionOutputCombinedParams()),
+ ReferenceDetectionOutputV8LayerTest::getTestCaseName);
+
} // namespace
\ No newline at end of file
diff --git a/docs/template_plugin/tests/functional/shared_tests_instances/behavior/plugin/core_integration.cpp b/docs/template_plugin/tests/functional/shared_tests_instances/behavior/plugin/core_integration.cpp
index 78cf8bfaecf..83701ec9030 100644
--- a/docs/template_plugin/tests/functional/shared_tests_instances/behavior/plugin/core_integration.cpp
+++ b/docs/template_plugin/tests/functional/shared_tests_instances/behavior/plugin/core_integration.cpp
@@ -130,10 +130,10 @@ TEST_F(IEClassGetConfigTestTEMPLATE, smoke_GetConfigNoThrow) {
std::string defaultDeviceID = ie.GetConfig(deviceName, CONFIG_KEY(DEVICE_ID));
std::cout << CONFIG_KEY(DEVICE_ID) << " : " << defaultDeviceID << std::endl;
} else if (CONFIG_KEY(PERF_COUNT) == confKey) {
- bool defaultPerfCount = ie.GetConfig(deviceName, CONFIG_KEY(PERF_COUNT)).as<bool>();
+ auto defaultPerfCount = ie.GetConfig(deviceName, CONFIG_KEY(PERF_COUNT)).as<bool>();
std::cout << CONFIG_KEY(PERF_COUNT) << " : " << defaultPerfCount << std::endl;
} else if (CONFIG_KEY(EXCLUSIVE_ASYNC_REQUESTS) == confKey) {
- bool defaultExclusive = ie.GetConfig(deviceName, CONFIG_KEY(EXCLUSIVE_ASYNC_REQUESTS)).as<bool>();
+ auto defaultExclusive = ie.GetConfig(deviceName, CONFIG_KEY(EXCLUSIVE_ASYNC_REQUESTS)).as<bool>();
std::cout << CONFIG_KEY(EXCLUSIVE_ASYNC_REQUESTS) << " : " << defaultExclusive << std::endl;
}
}
diff --git a/inference-engine/src/mkldnn_plugin/mkldnn_descriptor.cpp b/inference-engine/src/mkldnn_plugin/mkldnn_descriptor.cpp
index d4daf57fd38..a08c640e4c1 100644
--- a/inference-engine/src/mkldnn_plugin/mkldnn_descriptor.cpp
+++ b/inference-engine/src/mkldnn_plugin/mkldnn_descriptor.cpp
@@ -94,12 +94,12 @@ MKLDNNDescriptor::operator std::shared_ptr() {
return typeDesc->getPtr();
}
-MKLDNNDescriptor::MKLDNNDescriptor(std::shared_ptr<mkldnn::pooling_forward::desc> desc) {
- this->desc.reset(new DescFwdImpl<mkldnn::pooling_forward::desc>(desc));
+MKLDNNDescriptor::MKLDNNDescriptor(std::shared_ptr<mkldnn::pooling_v2_forward::desc> desc) {
+ this->desc.reset(new DescFwdImpl<mkldnn::pooling_v2_forward::desc>(desc));
}
-MKLDNNDescriptor::operator std::shared_ptr<mkldnn::pooling_forward::desc>() {
- auto typeDesc = std::dynamic_pointer_cast<DescFwdImpl<mkldnn::pooling_forward::desc>>(desc);
+MKLDNNDescriptor::operator std::shared_ptr<mkldnn::pooling_v2_forward::desc>() {
+ auto typeDesc = std::dynamic_pointer_cast<DescFwdImpl<mkldnn::pooling_v2_forward::desc>>(desc);
if (typeDesc == nullptr) {
IE_THROW() << "Cannot cast descriptor!";
}
diff --git a/inference-engine/src/mkldnn_plugin/mkldnn_descriptor.h b/inference-engine/src/mkldnn_plugin/mkldnn_descriptor.h
index d02f9c3da70..d85d447ca6c 100644
--- a/inference-engine/src/mkldnn_plugin/mkldnn_descriptor.h
+++ b/inference-engine/src/mkldnn_plugin/mkldnn_descriptor.h
@@ -28,8 +28,8 @@ public:
explicit MKLDNNDescriptor(std::shared_ptr desc);
operator std::shared_ptr();
- explicit MKLDNNDescriptor(std::shared_ptr<mkldnn::pooling_forward::desc> desc);
- operator std::shared_ptr<mkldnn::pooling_forward::desc>();
+ explicit MKLDNNDescriptor(std::shared_ptr<mkldnn::pooling_v2_forward::desc> desc);
+ operator std::shared_ptr<mkldnn::pooling_v2_forward::desc>();
explicit MKLDNNDescriptor(std::shared_ptr desc);
operator std::shared_ptr();
diff --git a/inference-engine/src/mkldnn_plugin/mkldnn_graph_dumper.cpp b/inference-engine/src/mkldnn_plugin/mkldnn_graph_dumper.cpp
index 4ffa1845f6c..aae2495bb2a 100644
--- a/inference-engine/src/mkldnn_plugin/mkldnn_graph_dumper.cpp
+++ b/inference-engine/src/mkldnn_plugin/mkldnn_graph_dumper.cpp
@@ -186,7 +186,7 @@ std::shared_ptr dump_graph_as_ie_ngraph_net(const MKLDNNGraph
}
for (auto && kvp : meta_data)
- return_node->get_rt_info()[kvp.first] = std::make_shared<::ngraph::VariantWrapper<std::string>>(kvp.second);
+ return_node->get_rt_info()[kvp.first] = std::make_shared<::ov::RuntimeAttributeWrapper<std::string>>(kvp.second);
return_node->set_friendly_name(node->getName());
return return_node;
diff --git a/inference-engine/src/mkldnn_plugin/mkldnn_infer_request.cpp b/inference-engine/src/mkldnn_plugin/mkldnn_infer_request.cpp
index 72efcfcfe37..ccd9c96e675 100644
--- a/inference-engine/src/mkldnn_plugin/mkldnn_infer_request.cpp
+++ b/inference-engine/src/mkldnn_plugin/mkldnn_infer_request.cpp
@@ -526,9 +526,12 @@ void MKLDNNPlugin::MKLDNNInferRequest::changeDefaultPtr() {
break;
}
- if (child->getType() == Concatenation && dynamic_cast<MKLDNNConcatNode*>(child.get())->isOptimized()) {
- canBeInPlace = false;
- break;
+ if (child->getType() == Concatenation) {
+ auto concat = dynamic_cast<MKLDNNConcatNode*>(child.get());
+ if (concat && concat->isOptimized()) {
+ canBeInPlace = false;
+ break;
+ }
}
// Cannot be in-place before split because split is using different ptrs without offsets
diff --git a/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp b/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
index 11fe2d4006b..03b2156e183 100644
--- a/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
@@ -137,7 +137,7 @@ MKLDNNNode::MKLDNNNode(const std::shared_ptr& op, const mkldnn::en
}
if (op != nullptr) {
- std::string inputMemoryFormats = ngraph::getMLKDNNInputMemoryFormats(op);
+ std::string inputMemoryFormats = ngraph::getMKLDNNInputMemoryFormats(op);
if (!inputMemoryFormats.empty()) {
std::istringstream stream(inputMemoryFormats);
std::string str;
@@ -148,7 +148,7 @@ MKLDNNNode::MKLDNNNode(const std::shared_ptr& op, const mkldnn::en
}
}
- std::string outputMemoryFormats = ngraph::getMLKDNNOutputMemoryFormats(op);
+ std::string outputMemoryFormats = ngraph::getMKLDNNOutputMemoryFormats(op);
if (!outputMemoryFormats.empty()) {
std::istringstream stream(outputMemoryFormats);
std::string str;
@@ -162,7 +162,7 @@ MKLDNNNode::MKLDNNNode(const std::shared_ptr& op, const mkldnn::en
const auto it = rtInfo.find("enforceBF16evenForGraphTail");
if (it != rtInfo.end()) {
- if (const auto value = std::dynamic_pointer_cast>(it->second))
+ if (const auto value = std::dynamic_pointer_cast>(it->second))
enforceBF16evenForGraphTail = value->get();
}
}
diff --git a/inference-engine/src/mkldnn_plugin/ngraph_transformations/move_eltwise_up_data_movement.cpp b/inference-engine/src/mkldnn_plugin/ngraph_transformations/move_eltwise_up_data_movement.cpp
index 948a33d3951..ec76db9361f 100644
--- a/inference-engine/src/mkldnn_plugin/ngraph_transformations/move_eltwise_up_data_movement.cpp
+++ b/inference-engine/src/mkldnn_plugin/ngraph_transformations/move_eltwise_up_data_movement.cpp
@@ -85,7 +85,7 @@ MKLDNNPlugin::MoveEltwiseUpThroughDataMov::MoveEltwiseUpThroughDataMov() {
}
// eltwise constant shape should match new input shape
- if (is_binary_op && current->get_output_shape(0).size() != eltwise->get_input_shape(1).size()) {
+ if (is_binary_op && current->get_output_partial_shape(0).rank().get_length() != eltwise->get_input_partial_shape(1).rank().get_length()) {
auto old_eltwise_const = std::dynamic_pointer_cast(eltwise->get_input_node_shared_ptr(1));
auto new_constant = std::make_shared(*old_eltwise_const.get(), ngraph::Shape{});
ngraph::replace_node(old_eltwise_const, new_constant);
diff --git a/inference-engine/src/mkldnn_plugin/ngraph_transformations/rnn_sequences_optimization.cpp b/inference-engine/src/mkldnn_plugin/ngraph_transformations/rnn_sequences_optimization.cpp
index 0b81fdcc81e..8a6c9fbbe8e 100644
--- a/inference-engine/src/mkldnn_plugin/ngraph_transformations/rnn_sequences_optimization.cpp
+++ b/inference-engine/src/mkldnn_plugin/ngraph_transformations/rnn_sequences_optimization.cpp
@@ -67,7 +67,7 @@ namespace {
ngraph::replace_node(transposeAfter, {reshape2->output(0)});
}
- sequenceOp->get_rt_info()["seqAxis"] = std::make_shared<ngraph::VariantWrapper<int64_t>>(seqAxis);
+ sequenceOp->get_rt_info()["seqAxis"] = std::make_shared<ov::RuntimeAttributeWrapper<int64_t>>(seqAxis);
return true;
}
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_embedding_bag_offset_sum_node.h b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_embedding_bag_offset_sum_node.h
index a7b3cac7a2e..146003c0b41 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_embedding_bag_offset_sum_node.h
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_embedding_bag_offset_sum_node.h
@@ -39,8 +39,8 @@ private:
const int* offsetsData_ = nullptr;
const int* defaultIndices_ = nullptr;
- size_t _indicesLen;
- size_t _offsetsLen;
+ size_t _indicesLen = 0;
+ size_t _offsetsLen = 0;
};
} // namespace MKLDNNPlugin
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_detection_output_node.cpp b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_detection_output_node.cpp
index 708c6d91921..96d8c48be47 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_detection_output_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_detection_output_node.cpp
@@ -215,12 +215,16 @@ static void nms_cf(const float* conf_data,
detections = (post_nms_topn == -1 ? detections : (std::min)(post_nms_topn, detections));
}
+bool MKLDNNExperimentalDetectronDetectionOutputNode::needShapeInfer() const {
+ return false;
+}
+
+bool MKLDNNExperimentalDetectronDetectionOutputNode::needPrepareParams() const {
+ return false;
+}
+
bool MKLDNNExperimentalDetectronDetectionOutputNode::isSupportedOperation(const std::shared_ptr& op, std::string& errorMessage) noexcept {
try {
- if (isDynamicNgraphNode(op)) {
- errorMessage = "Doesn't support op with dynamic shapes";
- return false;
- }
const auto doOp = ngraph::as_type_ptr(op);
if (!doOp) {
errorMessage = "Node is not an instance of the ExperimentalDetectronDetectionOutput from the operations set v6.";
@@ -268,6 +272,12 @@ void MKLDNNExperimentalDetectronDetectionOutputNode::initSupportedPrimitiveDescr
impl_desc_type::ref_any);
}
+void MKLDNNExperimentalDetectronDetectionOutputNode::createPrimitive() {
+ if (inputShapesDefined()) {
+ updateLastInputDims();
+ }
+}
+
void MKLDNNExperimentalDetectronDetectionOutputNode::execute(mkldnn::stream strm) {
const int rois_num = getParentEdgeAt(INPUT_ROIS)->getMemory().getStaticDims()[0];
assert(classes_num_ == static_cast(getParentEdgeAt(INPUT_SCORES)->getMemory().getStaticDims()[1]));
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_detection_output_node.h b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_detection_output_node.h
index aac589b058f..3c73bd036bb 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_detection_output_node.h
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_detection_output_node.h
@@ -15,10 +15,13 @@ public:
void getSupportedDescriptors() override {};
void initSupportedPrimitiveDescriptors() override;
- void createPrimitive() override {};
+ void createPrimitive() override;
void execute(mkldnn::stream strm) override;
bool created() const override;
+ bool needShapeInfer() const override;
+ bool needPrepareParams() const override;
+ void executeDynamicImpl(mkldnn::stream strm) override { execute(strm); }
static bool isSupportedOperation(const std::shared_ptr& op, std::string& errorMessage) noexcept;
private:
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_generate_proposals_single_image_node.cpp b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_generate_proposals_single_image_node.cpp
index 977493ed5be..fc36c163484 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_generate_proposals_single_image_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_generate_proposals_single_image_node.cpp
@@ -275,10 +275,6 @@ void fill_output_blobs(const float* proposals, const int* roi_indices,
bool MKLDNNExperimentalDetectronGenerateProposalsSingleImageNode::isSupportedOperation
(const std::shared_ptr& op, std::string& errorMessage) noexcept {
try {
- if (isDynamicNgraphNode(op)) {
- errorMessage = "Doesn't support op with dynamic shapes";
- return false;
- }
const auto proposalOp = ngraph::as_type_ptr(op);
if (!proposalOp) {
errorMessage = "Node is not an instance of the Proposal from the operations set v0.";
@@ -324,6 +320,12 @@ void MKLDNNExperimentalDetectronGenerateProposalsSingleImageNode::initSupportedP
impl_desc_type::ref_any);
}
+void MKLDNNExperimentalDetectronGenerateProposalsSingleImageNode::createPrimitive() {
+ if (inputShapesDefined()) {
+ updateLastInputDims();
+ }
+}
+
void MKLDNNExperimentalDetectronGenerateProposalsSingleImageNode::execute(mkldnn::stream strm) {
try {
if (inputShapes.size() != 4 || outputShapes.size() != 2) {
@@ -431,4 +433,12 @@ bool MKLDNNExperimentalDetectronGenerateProposalsSingleImageNode::created() cons
return getType() == ExperimentalDetectronGenerateProposalsSingleImage;
}
+bool MKLDNNExperimentalDetectronGenerateProposalsSingleImageNode::needShapeInfer() const {
+ return false;
+}
+
+bool MKLDNNExperimentalDetectronGenerateProposalsSingleImageNode::needPrepareParams() const {
+ return false;
+}
+
REG_MKLDNN_PRIM_FOR(MKLDNNExperimentalDetectronGenerateProposalsSingleImageNode, ExperimentalDetectronGenerateProposalsSingleImage)
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_generate_proposals_single_image_node.h b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_generate_proposals_single_image_node.h
index 3caf61e168b..a18f41e5a94 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_generate_proposals_single_image_node.h
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_generate_proposals_single_image_node.h
@@ -16,10 +16,13 @@ public:
void getSupportedDescriptors() override {};
void initSupportedPrimitiveDescriptors() override;
- void createPrimitive() override {};
+ void createPrimitive() override;
void execute(mkldnn::stream strm) override;
bool created() const override;
+ bool needShapeInfer() const override;
+ bool needPrepareParams() const override;
+ void executeDynamicImpl(mkldnn::stream strm) override { execute(strm); }
static bool isSupportedOperation(const std::shared_ptr& op, std::string& errorMessage) noexcept;
private:
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_priorgridgenerator_node.cpp b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_priorgridgenerator_node.cpp
index 10359d50949..a30031d9b84 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_priorgridgenerator_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_priorgridgenerator_node.cpp
@@ -14,10 +14,6 @@ using namespace InferenceEngine;
bool MKLDNNExperimentalDetectronPriorGridGeneratorNode::isSupportedOperation(const std::shared_ptr& op,
std::string& errorMessage) noexcept {
try {
- if (isDynamicNgraphNode(op)) {
- errorMessage = "Doesn't support op with dynamic shapes";
- return false;
- }
const auto priorGridGen = std::dynamic_pointer_cast(op);
if (!priorGridGen) {
errorMessage = "Only opset6 ExperimentalDetectronPriorGridGenerator operation is supported";
@@ -42,11 +38,6 @@ MKLDNNExperimentalDetectronPriorGridGeneratorNode::MKLDNNExperimentalDetectronPr
if (getOriginalInputsNumber() != 3 || getOriginalOutputsNumber() != 1)
IE_THROW() << errorPrefix << " has incorrect number of input/output edges!";
- if (op->get_input_shape(INPUT_PRIORS).size() != 2 ||
- op->get_input_shape(INPUT_FEATUREMAP).size() != 4 ||
- op->get_input_shape(INPUT_IMAGE).size() != 4)
- IE_THROW() << errorPrefix << " has unsupported input shape";
-
const auto &attr = priorGridGen->get_attrs();
grid_w_ = attr.w;
grid_h_ = attr.h;
@@ -65,6 +56,12 @@ void MKLDNNExperimentalDetectronPriorGridGeneratorNode::initSupportedPrimitiveDe
impl_desc_type::ref_any);
}
+void MKLDNNExperimentalDetectronPriorGridGeneratorNode::createPrimitive() {
+ if (inputShapesDefined()) {
+ updateLastInputDims();
+ }
+}
+
void MKLDNNExperimentalDetectronPriorGridGeneratorNode::execute(mkldnn::stream strm) {
const int num_priors_ = getParentEdgeAt(INPUT_PRIORS)->getMemory().getStaticDims()[0];
assert(getParentEdgeAt(INPUT_PRIORS)->getMemory().getStaticDims()[1] == 4);
@@ -95,4 +92,8 @@ bool MKLDNNExperimentalDetectronPriorGridGeneratorNode::created() const {
return getType() == ExperimentalDetectronPriorGridGenerator;
}
+bool MKLDNNExperimentalDetectronPriorGridGeneratorNode::needPrepareParams() const {
+ return false;
+}
+
REG_MKLDNN_PRIM_FOR(MKLDNNExperimentalDetectronPriorGridGeneratorNode, ExperimentalDetectronPriorGridGenerator)
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_priorgridgenerator_node.h b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_priorgridgenerator_node.h
index 2f7e224e63c..c908add3223 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_priorgridgenerator_node.h
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_experimental_detectron_priorgridgenerator_node.h
@@ -15,10 +15,12 @@ public:
void getSupportedDescriptors() override {};
void initSupportedPrimitiveDescriptors() override;
- void createPrimitive() override {};
+ void createPrimitive() override;
void execute(mkldnn::stream strm) override;
bool created() const override;
+ bool needPrepareParams() const override;
+ void executeDynamicImpl(mkldnn::stream strm) override { execute(strm); }
static bool isSupportedOperation(const std::shared_ptr& op, std::string& errorMessage) noexcept;
private:
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_generic_node.cpp b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_generic_node.cpp
index 45ca5d7cf8e..930a73bc4ec 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_generic_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_generic_node.cpp
@@ -163,25 +163,10 @@ void MKLDNNGenericNode::execLayer() {
// TODO: use ngraph-based extension mechnism if needed to recompute shape
isDynBatch = false;
- // TODO: uncomment after using ngraph-based extension mechnism
- // if (isDynBatch) {
- // for (size_t i = 0; i < inputs.size(); i++) {
- // auto td = inputs[i]->getTensorDesc();
- // td.setDims(inputDescs[i].getDims());
- // inputs[i] = make_blob_with_precision(td, getParentEdgeAt(i)->getMemory().GetData());
- // }
- // }
std::vector outputs;
for (size_t i = 0; i < outputShapes.size(); i++) {
- if (isDynBatch) {
- auto out_edge = getChildEdgesAtPort(i)[0];
- auto td = MemoryDescUtils::convertToTensorDesc(out_edge->getMemory().getDesc());
- td.setDims(execOutputShapes[i]);
- outputs.push_back(make_blob_with_precision(td, out_edge->getMemory().GetData()));
- } else {
- outputs.push_back(MemoryDescUtils::interpretAsBlob(getChildEdgesAtPort(i)[0]->getMemory()));
- }
+ outputs.push_back(MemoryDescUtils::interpretAsBlob(getChildEdgesAtPort(i)[0]->getMemory()));
}
InferenceEngine::ResponseDesc resp;
InferenceEngine::StatusCode rc = impls[0]->execute(inputs, outputs, &resp);
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_matrix_nms_node.h b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_matrix_nms_node.h
index c1f272bd2b2..338247cf103 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_matrix_nms_node.h
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_matrix_nms_node.h
@@ -48,10 +48,10 @@ private:
static const size_t NMS_SELECTED_INDICES = 1;
static const size_t NMS_VALID_OUTPUTS = 2;
- size_t m_numBatches;
- size_t m_numBoxes;
- size_t m_numClasses;
- size_t m_maxBoxesPerBatch;
+ size_t m_numBatches = 0;
+ size_t m_numBoxes = 0;
+ size_t m_numClasses = 0;
+ size_t m_maxBoxesPerBatch = 0;
MatrixNmsSortResultType m_sortResultType;
bool m_sortResultAcrossBatch;
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_pooling_node.cpp b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_pooling_node.cpp
index 32b0fd6aaee..379253233ee 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_pooling_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_pooling_node.cpp
@@ -20,10 +20,15 @@ using namespace mkldnn;
using namespace MKLDNNPlugin;
using namespace InferenceEngine;
-bool MKLDNNPoolingNode::isSupportedOperation(const std::shared_ptr<const ngraph::Node>& op, std::string& errorMessage) noexcept {
+bool MKLDNNPoolingNode::isSupportedOperation(const std::shared_ptr<const ov::Node>& op, std::string& errorMessage) noexcept {
try {
- if (!ngraph::as_type_ptr<const ngraph::opset1::MaxPool>(op) && !ngraph::as_type_ptr<const ngraph::opset1::AvgPool>(op)) {
- errorMessage = "Only opset1 MaxPool and AvgPool operations are supported";
+ if (ov::is_type<const ov::op::v8::MaxPool>(op)) {
+ if (!op->get_output_target_inputs(1).empty()) {
+ errorMessage = "MaxPool from opset8 is supported only with one output";
+ return false;
+ }
+ } else if (!ov::is_type<const ov::op::v1::MaxPool>(op) && !ov::is_type<const ov::op::v1::AvgPool>(op)) {
+ errorMessage = "MaxPool and AvgPool from opset1 and MaxPool from opset8 are supported";
return false;
}
} catch (...) {
@@ -32,48 +37,52 @@ bool MKLDNNPoolingNode::isSupportedOperation(const std::shared_ptr& op, const mkldnn::engine& eng, MKLDNNWeightsSharing::Ptr &cache)
+MKLDNNPoolingNode::MKLDNNPoolingNode(const std::shared_ptr<ov::Node>& op, const mkldnn::engine& eng, MKLDNNWeightsSharing::Ptr &cache)
: MKLDNNNode(op, eng, cache) {
std::string errorMessage;
if (!isSupportedOperation(op, errorMessage)) {
IE_THROW(NotImplemented) << errorMessage;
}
- auto maxPoolOp = ngraph::as_type_ptr(op);
- auto avgPoolOp = ngraph::as_type_ptr(op);
- if (maxPoolOp) {
+ auto get_attributes = [](std::vector<ptrdiff_t>& internal_attribute, const std::vector<size_t> external_attribute) {
+ for (size_t i = 0; i < external_attribute.size(); i++) {
+ internal_attribute.push_back(static_cast<ptrdiff_t>(external_attribute[i]));
+ }
+ };
+
+ if (auto maxPoolOp_v8 = ov::as_type_ptr<ov::op::v8::MaxPool>(op)) {
+ isMaxPool8 = true;
algorithm = PoolingMax;
exclude_pad = false;
- for (int i = 0; i < maxPoolOp->get_strides().size(); i++) {
- stride.push_back(static_cast<ptrdiff_t>(maxPoolOp->get_strides()[i]));
- }
- for (int i = 0; i < maxPoolOp->get_kernel().size(); i++) {
- kernel.push_back(static_cast<ptrdiff_t>(maxPoolOp->get_kernel()[i]));
- }
- for (int i = 0; i < maxPoolOp->get_pads_begin().size(); i++) {
- data_pad_begin.push_back(static_cast<ptrdiff_t>(maxPoolOp->get_pads_begin()[i]));
- }
- for (int i = 0; i < maxPoolOp->get_pads_end().size(); i++) {
- data_pad_end.push_back(static_cast<ptrdiff_t>(maxPoolOp->get_pads_end()[i]));
- }
- auto_pad = (maxPoolOp->get_auto_pad() == ov::op::PadType::SAME_LOWER || maxPoolOp->get_auto_pad() == ov::op::PadType::SAME_UPPER);
- } else if (avgPoolOp) {
+ get_attributes(dilation, maxPoolOp_v8->get_dilations());
+ get_attributes(stride, maxPoolOp_v8->get_strides());
+ get_attributes(kernel, maxPoolOp_v8->get_kernel());
+ get_attributes(data_pad_begin, maxPoolOp_v8->get_pads_begin());
+ get_attributes(data_pad_end, maxPoolOp_v8->get_pads_end());
+
+ auto_pad = (maxPoolOp_v8->get_auto_pad() == ov::op::PadType::SAME_LOWER || maxPoolOp_v8->get_auto_pad() == ov::op::PadType::SAME_UPPER);
+ } else if (auto maxPoolOp_v1 = ov::as_type_ptr<ov::op::v1::MaxPool>(op)) {
+ algorithm = PoolingMax;
+ exclude_pad = false;
+
+ get_attributes(stride, maxPoolOp_v1->get_strides());
+ get_attributes(kernel, maxPoolOp_v1->get_kernel());
+ get_attributes(data_pad_begin, maxPoolOp_v1->get_pads_begin());
+ get_attributes(data_pad_end, maxPoolOp_v1->get_pads_end());
+ dilation.resize(kernel.size(), 1);
+
+ auto_pad = (maxPoolOp_v1->get_auto_pad() == ov::op::PadType::SAME_LOWER || maxPoolOp_v1->get_auto_pad() == ov::op::PadType::SAME_UPPER);
+ } else if (auto avgPoolOp = ov::as_type_ptr<ov::op::v1::AvgPool>(op)) {
algorithm = PoolingAvg;
exclude_pad = avgPoolOp->get_exclude_pad();
- for (int i = 0; i < avgPoolOp->get_strides().size(); i++) {
- stride.push_back(static_cast<ptrdiff_t>(avgPoolOp->get_strides()[i]));
- }
- for (int i = 0; i < avgPoolOp->get_kernel().size(); i++) {
- kernel.push_back(static_cast<ptrdiff_t>(avgPoolOp->get_kernel()[i]));
- }
- for (int i = 0; i < avgPoolOp->get_pads_begin().size(); i++) {
- data_pad_begin.push_back(static_cast<ptrdiff_t>(avgPoolOp->get_pads_begin()[i]));
- }
- for (int i = 0; i < avgPoolOp->get_pads_end().size(); i++) {
- data_pad_end.push_back(static_cast<ptrdiff_t>(avgPoolOp->get_pads_end()[i]));
- }
+ get_attributes(stride, avgPoolOp->get_strides());
+ get_attributes(kernel, avgPoolOp->get_kernel());
+ get_attributes(data_pad_begin, avgPoolOp->get_pads_begin());
+ get_attributes(data_pad_end, avgPoolOp->get_pads_end());
+ dilation.resize(kernel.size(), 1);
+
auto_pad = (avgPoolOp->get_auto_pad() == ov::op::PadType::SAME_LOWER || avgPoolOp->get_auto_pad() == ov::op::PadType::SAME_UPPER);
}
}
@@ -94,20 +103,23 @@ std::vector MKLDNNPoolingNode::getAvailableFormatsForDims(co
return {memory::format_tag::any};
}
-void MKLDNNPoolingNode::initEffectivePad(const Shape &inShape, const Shape &outShape) {
+void MKLDNNPoolingNode::initEffectiveAttributes(const Shape &inShape, const Shape &outShape) {
effective_pad_begin = data_pad_begin;
effective_pad_end.resize(data_pad_end.size());
+ effective_dilation.resize(dilation.size(), 0);
const auto &inDims = inShape.getStaticDims();
const auto &outDims = outShape.getStaticDims();
for (int i = 0; i < effective_pad_end.size(); i++) {
int krn = kernel[i];
+ int dil = dilation[i];
int src = inDims[2 + i];
int dst = outDims[2 + i];
- int calc_dst = (src - krn + data_pad_begin[i]) / stride[i] + 1;
+ int calc_dst = (src - (1 + (krn - 1) * dil) + data_pad_begin[i]) / stride[i] + 1;
effective_pad_end[i] = (dst - calc_dst) * stride[i];
+ effective_dilation[i] = dil - 1;
}
}
@@ -120,8 +132,8 @@ void MKLDNNPoolingNode::getSupportedDescriptors() {
if (getChildEdges().empty())
IE_THROW() << "Incorrect number of output edges for layer " << getName();
- inputPrecision = getOriginalInputPrecisionAtPort(0);
- outputPrecision = getOriginalOutputPrecisionAtPort(0);
+ InferenceEngine::Precision inputPrecision = getOriginalInputPrecisionAtPort(0);
+ InferenceEngine::Precision outputPrecision = getOriginalOutputPrecisionAtPort(0);
// WA: LPT transformation has WA which allows average pooling has I8/U8 output precision instead of FP32,
// so we explicitly set output precision as FP32
@@ -151,8 +163,8 @@ void MKLDNNPoolingNode::getSupportedDescriptors() {
if ((inputRank < 3) || (inputRank > 5))
IE_THROW() << "Pooling layer. Unsupported mode. Only 3D, 4D and 5D blobs are supported as input.";
- initEffectivePad(MemoryDescUtils::makeDummyShape(parentShape),
- MemoryDescUtils::makeDummyShape(childShape));
+ initEffectiveAttributes(MemoryDescUtils::makeDummyShape(parentShape),
+ MemoryDescUtils::makeDummyShape(childShape));
if (inputPrecision == Precision::I8 || inputPrecision == Precision::U8) {
// We have to extend i8i8_pooling_fwd_t from oneDNN to support BF16 output data type
@@ -185,7 +197,7 @@ void MKLDNNPoolingNode::getSupportedDescriptors() {
}
}
-std::pair<std::vector<ptrdiff_t>, std::vector<ptrdiff_t>> MKLDNNPoolingNode::getPaddingFromNode(std::shared_ptr<ngraph::Node> node) const {
+std::pair<std::vector<ptrdiff_t>, std::vector<ptrdiff_t>> MKLDNNPoolingNode::getPaddingFromNode(std::shared_ptr<ov::Node> node) const {
const auto convertPadding = [](const VectorDims &newPads) {
std::vector<ptrdiff_t> pads(newPads.size());
for (int i = 0; i < newPads.size(); i++) {
@@ -195,12 +207,16 @@ std::pair, std::vector> MKLDNNPoolingNode::get
};
VectorDims padsBegin, padsEnd;
- if (getAlgorithm() == PoolingMax) {
- const auto pool = ngraph::as_type_ptr<ngraph::opset1::MaxPool>(opToShapeInfer);
+ if (isMaxPool8) {
+ const auto pool = ov::as_type_ptr<ov::op::v8::MaxPool>(opToShapeInfer);
+ padsBegin = pool->get_pads_begin();
+ padsEnd = pool->get_pads_end();
+ } else if (getAlgorithm() == PoolingMax) {
+ const auto pool = ov::as_type_ptr<ov::op::v1::MaxPool>(opToShapeInfer);
padsBegin = pool->get_pads_begin();
padsEnd = pool->get_pads_end();
} else if (getAlgorithm() == PoolingAvg) {
- const auto pool = ngraph::as_type_ptr<ngraph::opset1::AvgPool>(opToShapeInfer);
+ const auto pool = ov::as_type_ptr<ov::op::v1::AvgPool>(opToShapeInfer);
padsBegin = pool->get_pads_begin();
padsEnd = pool->get_pads_end();
}
@@ -231,15 +247,15 @@ void MKLDNNPoolingNode::prepareParams() {
if (auto_pad) {
std::tie(data_pad_begin, data_pad_end) = getPaddingFromNode(opToShapeInfer);
}
- initEffectivePad(inDesc->getShape(), outDesc->getShape());
+ initEffectiveAttributes(inDesc->getShape(), outDesc->getShape());
}
mkldnn::algorithm alg = getPoolingAlgorithm();
MKLDNNDescriptor desc{createDescriptorInternal(in_candidate, out_candidate, alg)};
- pooling_forward::primitive_desc prim_desc;
+ pooling_v2_forward::primitive_desc prim_desc;
primitive_desc_iterator itpd = desc.createPrimitiveDescriptorIterator(getEngine(), *attr);
- while (static_cast(itpd)) {
+ while (static_cast(itpd)) {
impl_desc_type impl_type = parse_impl_name(itpd.impl_info_str());
if (impl_type == selected_pd->getImplementationType()) {
@@ -250,7 +266,7 @@ void MKLDNNPoolingNode::prepareParams() {
IE_THROW() << "Primitive descriptor was not found for node " << getName() << ".";
}
- prim.reset(new pooling_forward(prim_desc));
+ prim.reset(new pooling_v2_forward(prim_desc));
auto src = getParentEdgesAtPort(0)[0]->getMemoryPtr()->GetPrimitive();
auto dst = getChildEdgesAtPort(0)[0]->getMemoryPtr()->GetPrimitive();
@@ -296,9 +312,9 @@ mkldnn::algorithm MKLDNNPoolingNode::getPoolingAlgorithm() const {
}
}
-std::shared_ptr<pooling_forward::desc> MKLDNNPoolingNode::createDescriptorInternal(const mkldnn::memory::desc& in_candidate,
- const mkldnn::memory::desc& out_candidate,
- const mkldnn::algorithm alg) const {
+std::shared_ptr<pooling_v2_forward::desc> MKLDNNPoolingNode::createDescriptorInternal(const mkldnn::memory::desc& in_candidate,
+ const mkldnn::memory::desc& out_candidate,
+ const mkldnn::algorithm alg) const {
if (alg == mkldnn::algorithm::undef) {
IE_THROW() << "Unsupported pooling type";
}
@@ -306,13 +322,14 @@ std::shared_ptr MKLDNNPoolingNode::createDescriptorIntern
auto convert = [] (std::vector<ptrdiff_t> orig_dims) {
return memory::dims(orig_dims.begin(), orig_dims.end());
};
- std::shared_ptr<pooling_forward::desc> desc_ptr(
- new pooling_forward::desc(prop_kind::forward_scoring, alg,
- in_candidate, out_candidate,
- convert(stride),
- convert(kernel),
- convert(effective_pad_begin),
- convert(effective_pad_end)));
+ std::shared_ptr<pooling_v2_forward::desc> desc_ptr(
+ new pooling_v2_forward::desc(prop_kind::forward_scoring, alg,
+ in_candidate, out_candidate,
+ convert(stride),
+ convert(kernel),
+ convert(effective_dilation),
+ convert(effective_pad_begin),
+ convert(effective_pad_end)));
if (alg == mkldnn::algorithm::pooling_avg_include_padding) {
// In case of AVG including paddings the norm coeff should be calculated
@@ -343,14 +360,12 @@ void MKLDNNPoolingNode::createDescriptor(const std::vector &input
if (auto_pad) {
std::tie(data_pad_begin, data_pad_end) = getPaddingFromNode(opToShapeInfer);
}
- initEffectivePad(inDesc->getShape(), outDesc->getShape());
+ initEffectiveAttributes(inDesc->getShape(), outDesc->getShape());
}
auto dnnlOutDesc = MemoryDescUtils::convertToDnnlBlockedMemoryDesc(*outDesc);
auto out_candidate = dnnlOutDesc.getDnnlDesc();
- mkldnn::algorithm alg = getPoolingAlgorithm();
- auto desc_ptr = createDescriptorInternal(in_candidate, out_candidate, alg);
-
+ auto desc_ptr = createDescriptorInternal(in_candidate, out_candidate, getPoolingAlgorithm());
descs.emplace_back(desc_ptr);
}
@@ -383,6 +398,18 @@ void MKLDNNPoolingNode::initSupportedPrimitiveDescriptors() {
config.outConfs.push_back(dataConfig);
}
+
+ // CPU plugin doesn't support second output of MaxPool-8, but anyway we should have out config for second port as stub
+ if (isMaxPool8) {
+ auto& creatorsMap = BlockedDescCreator::getCommonCreators();
+ PortConfig dataConfig;
+ dataConfig.inPlace = -1;
+ dataConfig.constant = false;
+ dataConfig.desc = creatorsMap.at(LayoutType::ncsp)->createSharedDesc(config.outConfs.front().desc->getPrecision(), getOutputShapeAtPort(1));
+
+ config.outConfs.push_back(dataConfig);
+ }
+
impl_desc_type impl_type = parse_impl_name(itpd.impl_info_str());
supportedPrimitiveDescriptors.emplace_back(config, impl_type);
@@ -434,6 +461,18 @@ void MKLDNNPoolingNode::initDescriptor(const NodeConfig& config) {
dataConfig.desc = getDstMemDesc(itpd, i);
cfg.outConfs.push_back(dataConfig);
}
+
+ // CPU plugin doesn't support second output of MaxPool-8, but anyway we should have out config for second port as stub
+ if (isMaxPool8) {
+ auto& creatorsMap = BlockedDescCreator::getCommonCreators();
+ PortConfig dataConfig;
+ dataConfig.inPlace = -1;
+ dataConfig.constant = false;
+ dataConfig.desc = creatorsMap.at(LayoutType::ncsp)->createSharedDesc(cfg.outConfs.front().desc->getPrecision(), getOutputShapeAtPort(1));
+
+ cfg.outConfs.push_back(dataConfig);
+ }
+
impl_desc_type impl_type = parse_impl_name(itpd.impl_info_str());
if (selected_count == selectedPrimitiveDescriptorIndex) {
if (impl_type != selectedPD->getImplementationType()) {
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_pooling_node.h b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_pooling_node.h
index 1d91199f95a..f3a6fc781cc 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_pooling_node.h
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_pooling_node.h
@@ -14,7 +14,7 @@ namespace MKLDNNPlugin {
class MKLDNNPoolingNode : public MKLDNNNode {
public:
- MKLDNNPoolingNode(const std::shared_ptr<ngraph::Node>& op, const mkldnn::engine& eng, MKLDNNWeightsSharing::Ptr &cache);
+ MKLDNNPoolingNode(const std::shared_ptr<ov::Node>& op, const mkldnn::engine& eng, MKLDNNWeightsSharing::Ptr &cache);
void createDescriptor(const std::vector& inputDesc,
const std::vector& outputDesc) override;
@@ -31,7 +31,7 @@ public:
void prepareParams() override;
void executeDynamicImpl(mkldnn::stream strm) override { execute(strm); }
- static bool isSupportedOperation(const std::shared_ptr<const ngraph::Node>& op, std::string& errorMessage) noexcept;
+ static bool isSupportedOperation(const std::shared_ptr<const ov::Node>& op, std::string& errorMessage) noexcept;
protected:
AttrPtr initPrimitiveAttr() const override;
@@ -39,17 +39,19 @@ protected:
private:
void setPostOps(mkldnn::primitive_attr &attr, bool initWeights = false) const;
- std::pair<std::vector<ptrdiff_t>, std::vector<ptrdiff_t>> getPaddingFromNode(std::shared_ptr<ngraph::Node> node) const;
- void initEffectivePad(const Shape &inDims, const Shape &outDims);
+ std::pair<std::vector<ptrdiff_t>, std::vector<ptrdiff_t>> getPaddingFromNode(std::shared_ptr<ov::Node> node) const;
+ void initEffectiveAttributes(const Shape &inDims, const Shape &outDims);
mkldnn::algorithm getPoolingAlgorithm() const;
- std::shared_ptr<mkldnn::pooling_forward::desc> createDescriptorInternal(const mkldnn::memory::desc& in_candidate,
- const mkldnn::memory::desc& out_candidate,
- const mkldnn::algorithm alg) const;
+ std::shared_ptr<mkldnn::pooling_v2_forward::desc> createDescriptorInternal(const mkldnn::memory::desc& in_candidate,
+ const mkldnn::memory::desc& out_candidate,
+ const mkldnn::algorithm alg) const;
AttrPtr pAttr;
+ bool isMaxPool8 = false;
bool auto_pad = false;
bool exclude_pad = false;
+ std::vector<ptrdiff_t> dilation;
std::vector<ptrdiff_t> stride;
std::vector<ptrdiff_t> kernel;
@@ -59,15 +61,16 @@ private:
std::vector<ptrdiff_t> effective_pad_begin;
std::vector<ptrdiff_t> effective_pad_end;
+ /// Effective dilation. Used to define correct dilation for OneDNN.
+ /// For OneDNN default dilation is vector of zero
+ std::vector<ptrdiff_t> effective_dilation;
+
/// Effective pad value. Describe how much zero element added to input
/// data tensor. May be less than "Effective padding" values.
/// If pooling window is out of this padding, the region of averaging
/// is decreased.
std::vector<ptrdiff_t> data_pad_begin;
std::vector<ptrdiff_t> data_pad_end;
-
- InferenceEngine::Precision inputPrecision = InferenceEngine::Precision::FP32;
- InferenceEngine::Precision outputPrecision = InferenceEngine::Precision::FP32;
};
} // namespace MKLDNNPlugin
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_reduce_node.cpp b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_reduce_node.cpp
index 3ecc41eee7e..4ccaf709471 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_reduce_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_reduce_node.cpp
@@ -138,7 +138,7 @@ private:
using Vmm = typename conditional3<isa == cpu::x64::sse41, Xbyak::Xmm, isa == cpu::x64::avx2, Xbyak::Ymm, Xbyak::Zmm>::type;
size_t vlen = cpu_isa_traits<isa>::vlen;
- bool planar_layout;
+ bool planar_layout = false;
Xbyak::Address table_val(int index) { return ptr[reg_table + index * vlen]; }
@@ -1136,7 +1136,7 @@ private:
using Vmm = typename conditional3<isa == cpu::x64::sse41, Xbyak::Xmm, isa == cpu::x64::avx2, Xbyak::Ymm, Xbyak::Zmm>::type;
size_t vlen = cpu_isa_traits<isa>::vlen;
- bool planar_layout;
+ bool planar_layout = false;
Xbyak::Reg64 reg_dst = r8;
Xbyak::Reg64 reg_work_amount = r9;
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_rnn.cpp b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_rnn.cpp
index 7c3e3eb2c20..76e20434d1e 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_rnn.cpp
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_rnn.cpp
@@ -367,7 +367,7 @@ void MKLDNNRNN::initSeq(const std::shared_ptr<ngraph::Node>& op) {
const auto rtInfo = op->get_rt_info();
if (rtInfo.count("seqAxis")) {
- nativeOrder = std::dynamic_pointer_cast<ngraph::VariantWrapper<int64_t>>(rtInfo.at("seqAxis"))->get() == 0;
+ nativeOrder = std::dynamic_pointer_cast<ngraph::VariantWrapper<int64_t>>(rtInfo.at("seqAxis"))->get() == 0;
}
out_data_dims.erase(out_data_dims.begin() + 1);
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_scatter_update_node.cpp b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_scatter_update_node.cpp
index 566fe7fb00b..412d3853f1a 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_scatter_update_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_scatter_update_node.cpp
@@ -114,13 +114,16 @@ void MKLDNNScatterUpdateNode::initSupportedPrimitiveDescriptors() {
<< "which should be smaller than or equal to input tensor rank";
}
- SizeVector expectUpdateShape = {};
size_t tupleRank = indicesRank - 1;
+ SizeVector expectUpdateShape(tupleRank + srcRank - k, 0);
+ int updateAxisIter = 0;
for (size_t ri = 0; ri < tupleRank; ri++) {
- expectUpdateShape.push_back(indicesDim[ri]);
+ expectUpdateShape[updateAxisIter] = indicesDim[ri];
+ updateAxisIter++;
}
for (size_t rd = k; rd < srcRank; rd++) {
- expectUpdateShape.push_back(srcDataDim[rd]);
+ expectUpdateShape[updateAxisIter] = srcDataDim[rd];
+ updateAxisIter++;
}
if (expectUpdateShape.size() != updateRank) {
IE_THROW() << errorPrefix << " do not have matched tensor rank relationship for input, indices and update";
@@ -315,13 +318,16 @@ void MKLDNNScatterUpdateNode::execute(mkldnn::stream strm) {
SizeVector updateDim = getParentEdgeAt(UPDATE_ID)->getMemory().getStaticDims();
size_t indicesRank = indicesDim.size();
size_t updateRank = updateDim.size();
- SizeVector expectUpdateShape = {};
+ SizeVector expectUpdateShape(srcRank + indicesRank - 1, 0);
+ int axisIter = 0;
for (size_t rs = 0; rs < srcRank; rs++) {
if (rs != axis) {
- expectUpdateShape.push_back(srcDataDim[rs]);
+ expectUpdateShape[axisIter] = srcDataDim[rs];
+ axisIter++;
} else {
for (size_t ri = 0; ri < indicesRank; ri++) {
- expectUpdateShape.push_back(indicesDim[ri]);
+ expectUpdateShape[axisIter] = indicesDim[ri];
+ axisIter++;
}
}
}
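The preallocation above implements the ScatterUpdate shape rule: the update tensor's shape is the data shape with the axis dimension replaced by the whole indices shape. A standalone restatement of that rule (illustrative, not plugin code):

```cpp
#include <cstddef>
#include <vector>

using SizeVector = std::vector<std::size_t>;

// expected update shape = data.shape[0:axis] ++ indices.shape ++ data.shape[axis+1:]
SizeVector expectedUpdateShape(const SizeVector& dataDims,
                               const SizeVector& indicesDims,
                               std::size_t axis) {
    SizeVector expected;
    expected.reserve(dataDims.size() + indicesDims.size() - 1);
    for (std::size_t d = 0; d < dataDims.size(); ++d) {
        if (d == axis)
            expected.insert(expected.end(), indicesDims.begin(), indicesDims.end());
        else
            expected.push_back(dataDims[d]);
    }
    return expected;
}
// Example: data {3, 4, 5}, indices {2, 2}, axis = 1 -> expected {3, 2, 2, 5}.
```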
diff --git a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_topk_node.h b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_topk_node.h
index 83773eb7eec..bd2a72824cc 100644
--- a/inference-engine/src/mkldnn_plugin/nodes/mkldnn_topk_node.h
+++ b/inference-engine/src/mkldnn_plugin/nodes/mkldnn_topk_node.h
@@ -92,7 +92,8 @@ private:
bool sort_value = false;
bool mode_max = true;
- int dim, before_num;
+ int dim = 0;
+ int before_num = 0;
std::string errorPrefix;
diff --git a/inference-engine/src/mkldnn_plugin/utils/ngraph_utils.hpp b/inference-engine/src/mkldnn_plugin/utils/ngraph_utils.hpp
index 08ae121a7db..fe27efb885d 100644
--- a/inference-engine/src/mkldnn_plugin/utils/ngraph_utils.hpp
+++ b/inference-engine/src/mkldnn_plugin/utils/ngraph_utils.hpp
@@ -13,7 +13,7 @@ namespace MKLDNNPlugin {
inline std::string getRTInfoValue(const std::map<std::string, ov::Any>& rtInfo, std::string paramName) {
auto it = rtInfo.find(paramName);
if (it != rtInfo.end()) {
- auto value = std::dynamic_pointer_cast<ngraph::VariantWrapper<std::string>>(it->second);
+ auto value = std::dynamic_pointer_cast<ngraph::VariantWrapper<std::string>>(it->second);
return value->get();
} else {
return "";
@@ -23,10 +23,13 @@ inline std::string getRTInfoValue(const std::map<std::string, ov::Any>& rtInfo,
inline std::string getPrimitivesPriorityValue(const std::shared_ptr<ngraph::Node> &node) {
const auto &rtInfo = node->get_rt_info();
- if (!rtInfo.count(ov::PrimitivesPriority::get_type_info_static())) return "";
+ auto it_info = rtInfo.find(ov::PrimitivesPriority::get_type_info_static());
- const auto &attr = rtInfo.at(ov::PrimitivesPriority::get_type_info_static());
- return ngraph::as_type_ptr<ov::PrimitivesPriority>(attr)->get();
+ if (it_info == rtInfo.end()) {
+ return {};
+ }
+
+ return it_info->second.as<ov::PrimitivesPriority>().value;
}
template <typename T>
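The rewritten getPrimitivesPriorityValue shows the general shape of the new rt_info access pattern: look the attribute up by its static type info and read it by value with ov::Any::as<T>(), rather than dynamic_pointer_cast on a shared_ptr<Variant>. A generic sketch under the same assumptions the tests in this patch make (DiscreteTypeInfo converts to the string key; the attribute exposes a public value member); the function name and header path are illustrative:

```cpp
#include <map>
#include <string>
#include <openvino/core/any.hpp>  // assumed header location for ov::Any

template <typename AttrT>
std::string readStringAttr(const std::map<std::string, ov::Any>& rtInfo) {
    auto it = rtInfo.find(AttrT::get_type_info_static());
    if (it == rtInfo.end())
        return {};                         // attribute not present
    return it->second.as<AttrT>().value;   // read by value, no shared_ptr cast
}
```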
diff --git a/inference-engine/src/mkldnn_plugin/utils/rt_info/memory_formats_attribute.cpp b/inference-engine/src/mkldnn_plugin/utils/rt_info/memory_formats_attribute.cpp
index 5dca9699645..9217816bca7 100644
--- a/inference-engine/src/mkldnn_plugin/utils/rt_info/memory_formats_attribute.cpp
+++ b/inference-engine/src/mkldnn_plugin/utils/rt_info/memory_formats_attribute.cpp
@@ -11,25 +11,25 @@
using namespace ngraph;
using namespace ov;
-MLKDNNInputMemoryFormats::~MLKDNNInputMemoryFormats() = default;
+MKLDNNInputMemoryFormats::~MKLDNNInputMemoryFormats() = default;
-std::string ngraph::getMLKDNNInputMemoryFormats(const std::shared_ptr<ngraph::Node>& node) {
- auto it_info = node->get_rt_info().find(MLKDNNInputMemoryFormatsAttr);
+std::string ngraph::getMKLDNNInputMemoryFormats(const std::shared_ptr<ngraph::Node>& node) {
+ auto it_info = node->get_rt_info().find(MKLDNNInputMemoryFormats::get_type_info_static());
if (it_info != node->get_rt_info().end()) {
- if (auto ptr = it_info->second.as<std::shared_ptr<MLKDNNInputMemoryFormats>>()) {
- return ptr->getMemoryFormats();
+ if (it_info->second.is<MKLDNNInputMemoryFormats>()) {
+ return it_info->second.as<MKLDNNInputMemoryFormats>().getMemoryFormats();
}
}
return {};
}
-MLKDNNOutputMemoryFormats::~MLKDNNOutputMemoryFormats() = default;
+MKLDNNOutputMemoryFormats::~MKLDNNOutputMemoryFormats() = default;
-std::string ngraph::getMLKDNNOutputMemoryFormats(const std::shared_ptr<ngraph::Node>& node) {
- auto it_info = node->get_rt_info().find(MLKDNNOutputMemoryFormatsAttr);
+std::string ngraph::getMKLDNNOutputMemoryFormats(const std::shared_ptr<ngraph::Node>& node) {
+ auto it_info = node->get_rt_info().find(MKLDNNOutputMemoryFormats::get_type_info_static());
if (it_info != node->get_rt_info().end()) {
- if (auto ptr = it_info->second.as<std::shared_ptr<MLKDNNOutputMemoryFormats>>()) {
- return ptr->getMemoryFormats();
+ if (it_info->second.is<MKLDNNOutputMemoryFormats>()) {
+ return it_info->second.as<MKLDNNOutputMemoryFormats>().getMemoryFormats();
}
}
return {};
diff --git a/inference-engine/src/mkldnn_plugin/utils/rt_info/memory_formats_attribute.hpp b/inference-engine/src/mkldnn_plugin/utils/rt_info/memory_formats_attribute.hpp
index c2e7498bb58..ea5611558b6 100644
--- a/inference-engine/src/mkldnn_plugin/utils/rt_info/memory_formats_attribute.hpp
+++ b/inference-engine/src/mkldnn_plugin/utils/rt_info/memory_formats_attribute.hpp
@@ -12,30 +12,28 @@
namespace ngraph {
-constexpr const char *MLKDNNInputMemoryFormatsAttr = "MLKDNNInputMemoryFormats";
-constexpr const char *MLKDNNOutputMemoryFormatsAttr = "MLKDNNOutputMemoryFormats";
+constexpr const char *MKLDNNInputMemoryFormatsAttr = "MKLDNNInputMemoryFormats";
+constexpr const char *MKLDNNOutputMemoryFormatsAttr = "MKLDNNOutputMemoryFormats";
template <typename MemoryFormat>
-class MLKDNNMemoryFormats : public Variant {
+class MKLDNNMemoryFormats : public ov::RuntimeAttribute {
protected:
std::string memory_format;
public:
- MLKDNNMemoryFormats() = default;
- explicit MLKDNNMemoryFormats(const std::string &_memory_format) : memory_format(_memory_format) {}
+ MKLDNNMemoryFormats() = default;
+ explicit MKLDNNMemoryFormats(const std::string &_memory_format) : memory_format(_memory_format) {}
std::string getMemoryFormats() const { return memory_format; }
- ov::Any merge(const ngraph::NodeVector & nodes) override {
+ ov::Any merge(const ngraph::NodeVector & nodes) const override {
std::set unique_mem_format;
for (auto &node : nodes) {
- auto it_info = node->get_rt_info().find(MemoryFormat::get_type_info_static().name);
+ auto it_info = node->get_rt_info().find(MemoryFormat::get_type_info_static());
if (it_info != node->get_rt_info().end()) {
- if (auto ptr = it_info->second.template as<std::shared_ptr<MemoryFormat>>()) {
- std::string mem_format = ptr->getMemoryFormats();
- if (!mem_format.empty()) {
- unique_mem_format.insert(mem_format);
- }
+ std::string mem_format = it_info->second.template as<MemoryFormat>().getMemoryFormats();
+ if (!mem_format.empty()) {
+ unique_mem_format.insert(mem_format);
}
}
}
@@ -50,28 +48,28 @@ public:
if (unique_mem_format.size() == 1) {
final_mem_format = *unique_mem_format.begin();
}
- return std::make_shared<MemoryFormat>(final_mem_format);
+ return MemoryFormat{final_mem_format};
}
};
-class MLKDNNInputMemoryFormats : public MLKDNNMemoryFormats<MLKDNNInputMemoryFormats> {
+class MKLDNNInputMemoryFormats : public MKLDNNMemoryFormats<MKLDNNInputMemoryFormats> {
public:
- OPENVINO_RTTI(MLKDNNInputMemoryFormatsAttr);
- MLKDNNInputMemoryFormats() = default;
- explicit MLKDNNInputMemoryFormats(const std::string &_memory_format) : MLKDNNMemoryFormats(_memory_format) {}
- ~MLKDNNInputMemoryFormats() override;
+ OPENVINO_RTTI(MKLDNNInputMemoryFormatsAttr);
+ MKLDNNInputMemoryFormats() = default;
+ explicit MKLDNNInputMemoryFormats(const std::string &_memory_format) : MKLDNNMemoryFormats(_memory_format) {}
+ ~MKLDNNInputMemoryFormats() override;
};
-std::string getMLKDNNInputMemoryFormats(const std::shared_ptr<ngraph::Node>& node);
+std::string getMKLDNNInputMemoryFormats(const std::shared_ptr<ngraph::Node>& node);
-class MLKDNNOutputMemoryFormats : public MLKDNNMemoryFormats<MLKDNNOutputMemoryFormats> {
+class MKLDNNOutputMemoryFormats : public MKLDNNMemoryFormats<MKLDNNOutputMemoryFormats> {
public:
- OPENVINO_RTTI(MLKDNNOutputMemoryFormatsAttr);
- MLKDNNOutputMemoryFormats() = default;
- explicit MLKDNNOutputMemoryFormats(const std::string &_memory_format) : MLKDNNMemoryFormats(_memory_format) {}
- ~MLKDNNOutputMemoryFormats() override;
+ OPENVINO_RTTI(MKLDNNOutputMemoryFormatsAttr);
+ MKLDNNOutputMemoryFormats() = default;
+ explicit MKLDNNOutputMemoryFormats(const std::string &_memory_format) : MKLDNNMemoryFormats(_memory_format) {}
+ ~MKLDNNOutputMemoryFormats() override;
};
-std::string getMLKDNNOutputMemoryFormats(const std::shared_ptr<ngraph::Node>& node);
+std::string getMKLDNNOutputMemoryFormats(const std::shared_ptr<ngraph::Node>& node);
} // namespace ngraph
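The merge() override implements a simple agreement rule for node fusion: collect the distinct non-empty memory-format strings of the fused nodes and keep a format only if they all agree. A standalone restatement (illustrative helper, not plugin code):

```cpp
#include <set>
#include <string>
#include <vector>

// Keep the common format if all fused nodes agree; otherwise leave it unset.
std::string mergeMemoryFormats(const std::vector<std::string>& formats) {
    std::set<std::string> unique;
    for (const auto& f : formats)
        if (!f.empty())
            unique.insert(f);
    return unique.size() == 1 ? *unique.begin() : std::string{};
}
// Example: {"nChw8c", "nChw8c", ""} -> "nChw8c"; {"nChw8c", "nchw"} -> "".
```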
diff --git a/inference-engine/src/offline_transformations/include/mask_attribute.hpp b/inference-engine/src/offline_transformations/include/mask_attribute.hpp
index 4c74a64c074..0d57808949b 100644
--- a/inference-engine/src/offline_transformations/include/mask_attribute.hpp
+++ b/inference-engine/src/offline_transformations/include/mask_attribute.hpp
@@ -16,7 +16,6 @@
#include
#include
-#include <ngraph/variant.hpp>
namespace ngraph {
@@ -28,6 +27,11 @@ namespace ngraph {
class Mask : public std::vector<std::set<uint64_t>>,
public std::enable_shared_from_this<Mask> {
public:
+ static const ::ov::DiscreteTypeInfo& get_type_info_static() {
+ static const ::ov::DiscreteTypeInfo type_info{"Mask", 0, "0"};
+ return type_info;
+ }
+
using Ptr = std::shared_ptr<Mask>;
Mask() = default;
@@ -180,6 +184,7 @@ public:
item.clear();
}
}
+
private:
bool m_is_shape_like{false};
@@ -199,22 +204,3 @@ Mask::Ptr getMask(const Output<const Node> & output);
void setMask(Output<Node> output, const Mask::Ptr & mask);
} // namespace ngraph
-
-namespace ov {
-
-extern template class VariantImpl<ngraph::Mask::Ptr>;
-
-template<>
-class VariantWrapper<ngraph::Mask::Ptr> : public VariantImpl<ngraph::Mask::Ptr> {
-public:
- OPENVINO_RTTI("VariantWrapper");
- BWDCMP_RTTI_DECLARATION;
-
- static std::shared_ptr<VariantWrapper<value_type>> create(const value_type & value) {
- return std::make_shared<VariantWrapper<value_type>>(value);
- }
-
- explicit VariantWrapper(const value_type &value) : VariantImpl(value) {}
-};
-
-} // namespace ov
diff --git a/inference-engine/src/offline_transformations/src/pruning/mask_attribute.cpp b/inference-engine/src/offline_transformations/src/pruning/mask_attribute.cpp
index 9ee1c023137..90f79e049f9 100644
--- a/inference-engine/src/offline_transformations/src/pruning/mask_attribute.cpp
+++ b/inference-engine/src/offline_transformations/src/pruning/mask_attribute.cpp
@@ -14,28 +14,22 @@ namespace ngraph {
Mask::Ptr getMask(const Output<const Node> & output) {
auto &rtInfo = output.get_rt_info();
- using MaskWrapper = VariantWrapper<Mask::Ptr>;
+ if (!rtInfo.count(Mask::get_type_info_static())) return nullptr;
- if (!rtInfo.count(MaskWrapper::get_type_info_static().name)) return nullptr;
-
- const auto &attr = rtInfo.at(MaskWrapper::get_type_info_static().name);
- return ov::as_type_ptr<MaskWrapper>(attr)->get();
+ const auto &attr = rtInfo.at(Mask::get_type_info_static());
+ return attr.as<Mask::Ptr>();
}
Mask::Ptr getMask(const Output<Node> & output) {
auto &rtInfo = output.get_rt_info();
- using MaskWrapper = VariantWrapper<Mask::Ptr>;
-
- if (!rtInfo.count(MaskWrapper::get_type_info_static().name)) return nullptr;
-
- const auto &attr = rtInfo.at(MaskWrapper::get_type_info_static().name);
- return ov::as_type_ptr<MaskWrapper>(attr)->get();
+ if (!rtInfo.count(Mask::get_type_info_static())) return nullptr;
+ const auto &attr = rtInfo.at(Mask::get_type_info_static());
+ return attr.as<Mask::Ptr>();
}
void setMask(Output<Node> output, const Mask::Ptr & mask) {
auto &rtInfo = output.get_rt_info();
- using MaskWrapper = VariantWrapper;
- rtInfo[MaskWrapper::get_type_info_static().name] = MaskWrapper::create(mask);
+ rtInfo[Mask::get_type_info_static()] = mask;
}
std::ostream & operator<< (std::ostream & out, const Mask & mask) {
@@ -54,11 +48,3 @@ std::ostream & operator<< (std::ostream & out, const Mask & mask) {
}
} // namespace ngraph
-
-namespace ov {
-
-template class ngraph::VariantImpl<ngraph::Mask::Ptr>;
-
-BWDCMP_RTTI_DEFINITION(VariantWrapper<ngraph::Mask::Ptr>);
-
-} // namespace ov
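With the VariantWrapper boilerplate gone, a Mask::Ptr now lives in rt_info directly as an ov::Any payload keyed by Mask's type info. A usage sketch of the simplified API (the function name is an assumption):

```cpp
#include "mask_attribute.hpp"  // path as in this patch

void tagAndReadMask(ngraph::Output<ngraph::Node> output) {
    auto mask = std::make_shared<ngraph::Mask>();
    ngraph::setMask(output, mask);                         // stores the Mask::Ptr in rt_info
    ngraph::Mask::Ptr restored = ngraph::getMask(output);  // reads it back via Any::as
    // restored is the same shared_ptr that was stored above.
}
```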
diff --git a/inference-engine/tests/functional/inference_engine/ir_serialization/rt_info_deserialization.cpp b/inference-engine/tests/functional/inference_engine/ir_serialization/rt_info_deserialization.cpp
index 2ba736b3090..c8c14c7b7d7 100644
--- a/inference-engine/tests/functional/inference_engine/ir_serialization/rt_info_deserialization.cpp
+++ b/inference-engine/tests/functional/inference_engine/ir_serialization/rt_info_deserialization.cpp
@@ -40,7 +40,7 @@ protected:
ov::frontend::FrontEnd::Ptr FE;
ov::frontend::InputModel::Ptr inputModel;
- ov::VariantVector params{ov::make_variant(&modelStream)};
+ ov::RuntimeAttributeVector params{&modelStream};
FE = manager.load_by_model(params);
if (FE)
@@ -119,7 +119,7 @@ TEST_F(RTInfoDeserialization, NodeV10) {
ASSERT_NE(nullptr, f);
auto check_rt_info = [](const RTMap& info) {
- const std::string& key = VariantWrapper<ngraph::FusedNames>::get_type_info_static();
+ const std::string& key = ngraph::FusedNames::get_type_info_static();
EXPECT_FALSE(info.count(key));
const std::string& key_old_api_order = ov::OldApiMapOrder::get_type_info_static();
@@ -278,7 +278,7 @@ TEST_F(RTInfoDeserialization, InputAndOutputV10) {
ASSERT_NE(nullptr, f);
auto check_rt_info = [](const RTMap& info) {
- const std::string& key = VariantWrapper<ngraph::FusedNames>::get_type_info_static();
+ const std::string& key = ngraph::FusedNames::get_type_info_static();
ASSERT_FALSE(info.count(key));
};
@@ -421,27 +421,22 @@ TEST_F(RTInfoDeserialization, NodeV11) {
ASSERT_NE(nullptr, f);
auto check_fused_names = [](const RTMap& info, const std::string& names) {
- const std::string& key = VariantWrapper<ngraph::FusedNames>::get_type_info_static();
+ const std::string& key = ngraph::FusedNames::get_type_info_static();
ASSERT_TRUE(info.count(key));
- auto fused_names_attr = std::dynamic_pointer_cast<VariantWrapper<ngraph::FusedNames>>(info.at(key));
- ASSERT_TRUE(fused_names_attr);
- EXPECT_EQ(fused_names_attr->get().getNames(), names);
+ auto fused_names_attr = info.at(key).as<ngraph::FusedNames>();
+ EXPECT_EQ(fused_names_attr.getNames(), names);
};
auto check_old_api_map_order = [](const RTMap & info, const std::vector<uint64_t> & order) {
const std::string & old_api_map_key = ov::OldApiMapOrder::get_type_info_static();
ASSERT_TRUE(info.count(old_api_map_key));
- auto old_api_map_attr = std::dynamic_pointer_cast<ov::OldApiMapOrder>(info.at(old_api_map_key));
- ASSERT_TRUE(old_api_map_attr);
- auto old_api_map_attr_val = old_api_map_attr->get();
+ auto old_api_map_attr_val = info.at(old_api_map_key).as<ov::OldApiMapOrder>().value;
EXPECT_EQ(old_api_map_attr_val, order);
};
auto check_old_api_map_type = [](const RTMap & info, const ngraph::element::Type& type) {
const std::string & old_api_map_key = ov::OldApiMapElementType::get_type_info_static();
ASSERT_TRUE(info.count(old_api_map_key));
- auto old_api_map_attr = std::dynamic_pointer_cast<ov::OldApiMapElementType>(info.at(old_api_map_key));
- ASSERT_TRUE(old_api_map_attr);
- auto old_api_map_attr_val = old_api_map_attr->get();
+ auto old_api_map_attr_val = info.at(old_api_map_key).as<ov::OldApiMapElementType>().value;
EXPECT_EQ(old_api_map_attr_val, type);
};
@@ -501,8 +496,7 @@ TEST_F(RTInfoDeserialization, NodeV11) {
auto round = std::make_shared<ngraph::opset8::Round>(convert_param,
ngraph::opset8::Round::RoundMode::HALF_TO_EVEN);
// TODO: runtime information should migrate as well?
- round->get_rt_info()[VariantWrapper<ngraph::FusedNames>::get_type_info_static()] =
- std::make_shared<VariantWrapper<ngraph::FusedNames>>(ngraph::FusedNames("Round1,Round2"));
+ round->get_rt_info()[ngraph::FusedNames::get_type_info_static()] = ngraph::FusedNames("Round1,Round2");
// TODO: No guarantee that exactly 'convert, then transpose' will be added by implicit post-processing
auto constant_result = std::make_shared<ngraph::opset8::Constant>(ngraph::element::u64,
@@ -722,20 +716,20 @@ TEST_F(RTInfoDeserialization, InputAndOutputV11) {
check_version(f, 11);
auto check_fused_names = [](const RTMap& info, const std::string& names) {
- const std::string& key = VariantWrapper<ngraph::FusedNames>::get_type_info_static();
+ const std::string& key = ngraph::FusedNames::get_type_info_static();
ASSERT_TRUE(info.count(key));
- auto fused_names_attr = std::dynamic_pointer_cast<VariantWrapper<ngraph::FusedNames>>(info.at(key));
- ASSERT_TRUE(fused_names_attr);
- ASSERT_EQ(fused_names_attr->get().getNames(), names);
+ auto fused_names_attr = info.at(key).as<ngraph::FusedNames>();
+ ASSERT_EQ(fused_names_attr.getNames(), names);
};
auto param = f->get_parameters()[0];
check_fused_names(param->output(0).get_rt_info(), "test1,test2");
EXPECT_EQ(param->get_layout(), "NCHW");
- auto var0 = std::dynamic_pointer_cast<ov::preprocess::TensorInfoMemoryType>(
- f->input(0).get_rt_info()[ov::preprocess::TensorInfoMemoryType::get_type_info_static()]);
- EXPECT_EQ(var0->get(), "test_memory_type");
+ auto var0 = f->input(0).get_rt_info()
+ .at(ov::preprocess::TensorInfoMemoryType::get_type_info_static())
+ .as<ov::preprocess::TensorInfoMemoryType>().value;
+ EXPECT_EQ(var0, "test_memory_type");
auto result = f->get_result();
check_fused_names(result->input(0).get_rt_info(), "test5,test6");
diff --git a/inference-engine/tests/functional/inference_engine/ir_serialization/rt_info_serialization.cpp b/inference-engine/tests/functional/inference_engine/ir_serialization/rt_info_serialization.cpp
index 8f32c89d1ec..e3cf7db8ae9 100644
--- a/inference-engine/tests/functional/inference_engine/ir_serialization/rt_info_serialization.cpp
+++ b/inference-engine/tests/functional/inference_engine/ir_serialization/rt_info_serialization.cpp
@@ -34,7 +34,7 @@ protected:
ov::frontend::FrontEnd::Ptr FE;
ov::frontend::InputModel::Ptr inputModel;
- ov::VariantVector params{ov::make_variant(model_path), ov::make_variant(weights_path)};
+ ov::RuntimeAttributeVector params{model_path, weights_path};
FE = manager.load_by_model(params);
if (FE)
@@ -51,16 +51,12 @@ private:
};
TEST_F(RTInfoSerializationTest, all_attributes_latest) {
- auto init_info = [](RTMap& info) {
- info[VariantWrapper<ngraph::FusedNames>::get_type_info_static()] =
- std::make_shared<VariantWrapper<ngraph::FusedNames>>(ngraph::FusedNames("add"));
- info[ov::PrimitivesPriority::get_type_info_static()] =
- std::make_shared<ov::PrimitivesPriority>("priority");
- info[ov::OldApiMapOrder::get_type_info_static()] =
- std::make_shared<ov::OldApiMapOrder>(std::vector<uint64_t>{0, 2, 3, 1});
- info[ov::OldApiMapElementType::get_type_info_static()] = std::make_shared<ov::OldApiMapElementType>(
- ngraph::element::Type_t::f32);
- info[ov::Decompression::get_type_info_static()] = std::make_shared<ov::Decompression>();
+ auto init_info = [](RTMap & info) {
+ info[ngraph::FusedNames::get_type_info_static()] = ngraph::FusedNames("add");
+ info[ov::PrimitivesPriority::get_type_info_static()] = ov::PrimitivesPriority("priority");
+ info[ov::OldApiMapOrder::get_type_info_static()] = ov::OldApiMapOrder(std::vector<uint64_t>{0, 2, 3, 1});
+ info[ov::OldApiMapElementType::get_type_info_static()] = ov::OldApiMapElementType(ngraph::element::Type_t::f32);
+ info[ov::Decompression::get_type_info_static()] = ov::Decompression{};
};
std::shared_ptr function;
@@ -85,36 +81,29 @@ TEST_F(RTInfoSerializationTest, all_attributes_latest) {
ASSERT_NE(nullptr, f);
auto check_info = [](const RTMap & info) {
- const std::string & key = VariantWrapper<ngraph::FusedNames>::get_type_info_static();
+ const std::string & key = ngraph::FusedNames::get_type_info_static();
ASSERT_TRUE(info.count(key));
- auto fused_names_attr = std::dynamic_pointer_cast<VariantWrapper<ngraph::FusedNames>>(info.at(key));
- ASSERT_TRUE(fused_names_attr);
- ASSERT_EQ(fused_names_attr->get().getNames(), "add");
+ auto fused_names_attr = info.at(key).as<ngraph::FusedNames>();
+ ASSERT_EQ(fused_names_attr.getNames(), "add");
const std::string & pkey = ov::PrimitivesPriority::get_type_info_static();
ASSERT_TRUE(info.count(pkey));
- auto primitives_priority_attr = std::dynamic_pointer_cast<ov::PrimitivesPriority>(info.at(pkey));
- ASSERT_TRUE(primitives_priority_attr);
- ASSERT_EQ(primitives_priority_attr->get(), "priority");
+ auto primitives_priority_attr = info.at(pkey).as<ov::PrimitivesPriority>().value;
+ ASSERT_EQ(primitives_priority_attr, "priority");
const std::string & old_api_map_key_order = ov::OldApiMapOrder::get_type_info_static();
ASSERT_TRUE(info.count(old_api_map_key_order));
- auto old_api_map_attr = std::dynamic_pointer_cast<ov::OldApiMapOrder>(info.at(old_api_map_key_order));
- ASSERT_TRUE(old_api_map_attr);
- auto old_api_map_attr_val = old_api_map_attr->get();
+ auto old_api_map_attr_val = info.at(old_api_map_key_order).as