Delete the deprecated LowLatency (version1) transformation (#17965)

* Delete the deprecated LowLatency (version1) transformation

* delete LowLatency refs from the docs
This commit is contained in:
Ivan Tikhonov
2023-06-10 12:24:43 +04:00
committed by GitHub
parent cff083f83d
commit 74100670ac
16 changed files with 4 additions and 913 deletions


@@ -15,47 +15,6 @@
namespace InferenceEngine {
/**
* @deprecated Use InferenceEngine::lowLatency2 instead. This transformation will be removed in 2023.1.
* @brief The transformation finds all TensorIterator layers in the network, processes all back
* edges that describe a connection between Result and Parameter of the TensorIterator body,
* and inserts ReadValue layer between Parameter and the next layers after this Parameter,
* and Assign layer after the layers before the Result layer.
* Supported platforms: CPU, GNA.
*
* The example below describes the changes to the inner part (body, back edges) of the TensorIterator layer.
* [] - TensorIterator body
* () - new layer
*
* before applying the transformation:
* back_edge_1 -> [Parameter -> some layers ... -> Result ] -> back_edge_1
*
* after applying the transformation:
* back_edge_1 -> [Parameter -> (ReadValue layer) -> some layers ... -> (Assign layer) ]
* \
* -> Result ] -> back_edge_1
*
* It is recommended to use this transformation in conjunction with the Reshape feature to set sequence
* dimension to 1 and with the UnrollTensorIterator transformation.
* For convenience, we have already enabled the unconditional execution of the UnrollTensorIterator
* transformation when using the LowLatency transformation for the CPU and GNA plugins; no action is required here.
* After applying both of these transformations, the resulting network can be inferred step by
* step, and the states will be stored between inferences.
*
* An illustrative example, not real API:
*
* network->reshape(...) // Set sequence dimension to 1, recalculating shapes. Optional, depends on the network.
* LowLatency(network) // Applying LowLatency and UnrollTensorIterator transformations.
* network->infer (...) // Calculating new values for states.
* // All states are stored between inferences via Assign, ReadValue layers.
* network->infer (...) // Using stored states, calculating new values for states.
*
* @param network A network to apply LowLatency transformation
*/
INFERENCE_ENGINE_DEPRECATED("This transformation will be removed in 2023.1. "
"Use InferenceEngine::lowLatency2 instead.")
INFERENCE_ENGINE_API_CPP(void) LowLatency(InferenceEngine::CNNNetwork& network);
/**
* @brief The transformation finds all TensorIterator/Loop layers in the network,
* processes all back edges that describe a connection between Result and Parameter


@@ -9,15 +9,6 @@
using namespace InferenceEngine;
void InferenceEngine::LowLatency(InferenceEngine::CNNNetwork& network) {
auto function = network.getFunction();
ngraph::pass::Manager manager;
NGRAPH_SUPPRESS_DEPRECATED_START
manager.register_pass<ngraph::pass::LowLatency>();
NGRAPH_SUPPRESS_DEPRECATED_END
manager.run_passes(function);
}
void InferenceEngine::lowLatency2(InferenceEngine::CNNNetwork& network, bool use_const_initializer) {
auto function = network.getFunction();
ngraph::pass::Manager manager;
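After this change, only the `lowLatency2` entry point survives. A minimal migration sketch, assuming the 2022.x-era InferenceEngine API that this diff touches (the model path, device name, and surrounding setup are illustrative, not part of this commit):

```cpp
// Sketch: replace the removed InferenceEngine::LowLatency(network) call
// with InferenceEngine::lowLatency2(network, use_const_initializer).
// Requires linking against OpenVINO / Inference Engine.
#include <inference_engine.hpp>
#include <ie_transformations.hpp>  // declares InferenceEngine::lowLatency2

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // illustrative path

    // Optional: set the sequence dimension to 1 first, as the removed
    // doc comment recommends (depends on the network).
    // network.reshape(...);

    // Apply the surviving LowLatency2 transformation.
    InferenceEngine::lowLatency2(network, /*use_const_initializer=*/true);

    auto executable = core.LoadNetwork(network, "CPU");
    auto request = executable.CreateInferRequest();
    request.Infer();  // states persist between calls via ReadValue/Assign
    request.Infer();
    return 0;
}
```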