checking the network batch-ability (internal helper func on top of bat… (#10446)

* checking the network batch-ability (an internal helper func on top of batch tracking) before falling back to HETERO

* more general logic with respect to batch-ability of the network

* a dynamism check owed from PR #10560

* using the DetectionOutput-detached mechanism for an early HETERO exit; also fixed this flag in the Batching plugin (minor, as the DetectionOutput is removed by HETERO anyway)

* adding the dimension-tracking logic depending on whether auto-batching is enabled implicitly or explicitly

* changed the DetectionOutput affinity markup to go over the results and also accommodate Convert, so that only 2 subgraphs are created by HETERO
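The decision flow described above (batch tracking first, HETERO fallback second) can be sketched with simplified stand-in types. None of the names below are the actual OpenVINO API; this is only an illustration of the selection logic, assuming "batch-able" means every input has its tracked batch dim as the 0th dimension:

```cpp
#include <vector>

// Hypothetical stand-in for per-input info gathered by batch tracking
// (ov::pass::FindBatch in the real plugin).
struct InputInfo {
    bool batch_dim_is_first;  // N is the 0th dimension of this input
    bool batch_dim_labelled;  // batch tracking labelled this dim as the batch
};

enum class Mode { DirectBatching, HeteroFallback, NoBatching };

// If every input's batch dim is the 0th dim and was labelled by tracking,
// the whole network can be re-batched directly; otherwise fall back to
// HETERO, which detaches the unbatchable tail (e.g. DetectionOutput) into
// its own subgraph. If nothing is batch-able at all, exit early.
Mode select_batching_mode(const std::vector<InputInfo>& inputs,
                          bool any_batchable_op) {
    bool all_tracked = !inputs.empty();
    for (const auto& in : inputs)
        all_tracked = all_tracked && in.batch_dim_is_first && in.batch_dim_labelled;
    if (all_tracked)
        return Mode::DirectBatching;
    return any_batchable_op ? Mode::HeteroFallback : Mode::NoBatching;
}
```

The early-exit branch corresponds to the "DO-detached mechanism for early hetero exit" mentioned above: when no op is batch-able there is no point building HETERO subgraphs.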
This commit is contained in:
Maxim Shevtsov
2022-02-22 17:19:23 +03:00
committed by GitHub
parent e59739ce88
commit dab1a34aa2
4 changed files with 120 additions and 30 deletions


@@ -841,7 +841,7 @@ InferenceEngine::IExecutableNetworkInternal::Ptr AutoBatchInferencePlugin::LoadN
// find the batch dim
ov::pass::Manager m;
m.register_pass<ngraph::pass::InitNodeInfo>();
-m.register_pass<ov::pass::FindBatch>(true, check_dims);
+m.register_pass<ov::pass::FindBatch>(false, check_dims);
m.run_passes(function);
// do not reshape/re-batch originally batched networks and when there are no inputs with the N* layouts
// input(s) should have the batch dim as the first dim (current limitation of the auto-batching impl)
@@ -871,6 +871,8 @@ InferenceEngine::IExecutableNetworkInternal::Ptr AutoBatchInferencePlugin::LoadN
for (size_t output_id = 0; output_id < results.size(); output_id++) {
const auto& output = results[output_id];
const auto& shape = output->get_output_partial_shape(0);
+if (shape.is_dynamic())
+    IE_THROW(NotImplemented) << "Auto-batching does not support dynamic networks!";
// check the batch dim: either 0th (and the original batch size of 1) or none
if (shape.size() && ov::DimensionTracker::get_label(shape[0])) {
if (shape[0] != 1)
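The added output check can be restated as a standalone sketch. The types below are simplified stand-ins (a `-1` dim plays the role of a dynamic dimension, and `batch_labelled` stands in for `ov::DimensionTracker::get_label` on the 0th dim), not the real OpenVINO classes:

```cpp
#include <stdexcept>
#include <vector>

// Stand-in for a result's partial shape: -1 marks a dynamic dimension,
// batch_labelled says whether dimension tracking labelled dim 0 as the batch.
struct OutputShape {
    std::vector<long> dims;
    bool batch_labelled = false;
};

// Mirrors the checks in the hunk above: dynamic networks are rejected
// outright, and if the 0th output dim is tracked as the batch it must
// equal 1, i.e. the network must not be batched already.
bool outputs_allow_auto_batching(const std::vector<OutputShape>& results) {
    for (const auto& shape : results) {
        for (long d : shape.dims)
            if (d == -1)
                throw std::runtime_error(
                    "Auto-batching does not support dynamic networks!");
        if (!shape.dims.empty() && shape.batch_labelled && shape.dims[0] != 1)
            return false;  // originally batched network: do not re-batch it
    }
    return true;
}
```

Returning `false` here corresponds to the comment in the first hunk: originally batched networks are left as-is rather than reshaped.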