// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <threading/ie_itask_executor.hpp>
#include <cpp_interfaces/impl/ie_infer_async_request_thread_safe_default.hpp>

#include <memory>

using namespace InferenceEngine;
// Synchronous infer request that exposes the five accelerator pipeline stages.
class AcceleratorSyncRequest : public IInferRequestInternal {
public:
    using Ptr = std::shared_ptr<AcceleratorSyncRequest>;

    void preprocess();
    void write_to_device();
    void run_on_device();
    void read_from_device();
    void post_process();
};
// ! [async_infer_request:define_pipeline]
// Inherits from AsyncInferRequestThreadSafeDefault
class AcceleratorAsyncInferRequest : public AsyncInferRequestThreadSafeDefault {
public:
    // Store the pointer to the synchronous request and five executors
    AcceleratorAsyncInferRequest(const AcceleratorSyncRequest::Ptr& syncRequest,
                                 const ITaskExecutor::Ptr& preprocessExecutor,
                                 const ITaskExecutor::Ptr& writeToDeviceExecutor,
                                 const ITaskExecutor::Ptr& runOnDeviceExecutor,
                                 const ITaskExecutor::Ptr& readFromDeviceExecutor,
                                 const ITaskExecutor::Ptr& postProcessExecutor) :
        AsyncInferRequestThreadSafeDefault(syncRequest, nullptr, nullptr),
        _accSyncRequest{syncRequest},
        _preprocessExecutor{preprocessExecutor},
        _writeToDeviceExecutor{writeToDeviceExecutor},
        _runOnDeviceExecutor{runOnDeviceExecutor},
        _readFromDeviceExecutor{readFromDeviceExecutor},
        _postProcessExecutor{postProcessExecutor}
    {
        // The five pipeline stages of the synchronous infer request are run by different executors
        _pipeline = {
            { _preprocessExecutor , [this] {
                _accSyncRequest->preprocess();
            }},
            { _writeToDeviceExecutor , [this] {
                _accSyncRequest->write_to_device();
            }},
            { _runOnDeviceExecutor , [this] {
                _accSyncRequest->run_on_device();
            }},
            { _readFromDeviceExecutor , [this] {
                _accSyncRequest->read_from_device();
            }},
            { _postProcessExecutor , [this] {
                _accSyncRequest->post_process();
            }},
        };
    }

    // Every stage uses the _accSyncRequest member, so wait for all stage tasks to finish
    // before the destructor destroys it.
    ~AcceleratorAsyncInferRequest() {
        StopAndWait();
    }

private:
    AcceleratorSyncRequest::Ptr _accSyncRequest;
    ITaskExecutor::Ptr _preprocessExecutor, _writeToDeviceExecutor, _runOnDeviceExecutor, _readFromDeviceExecutor, _postProcessExecutor;
};
// ! [async_infer_request:define_pipeline]
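
// Example usage (hypothetical sketch, not part of the original snippet): a plugin would
// normally create the asynchronous request from its executable network, pairing one
// synchronous request with five task executors. The helper below is illustrative only;
// in a real plugin the executors are typically obtained from the Inference Engine
// executor manager / streams executor configuration rather than passed in ad hoc.
inline std::shared_ptr<AcceleratorAsyncInferRequest> make_accelerator_async_request(
        const AcceleratorSyncRequest::Ptr& syncRequest,
        const ITaskExecutor::Ptr& preprocessExecutor,
        const ITaskExecutor::Ptr& writeToDeviceExecutor,
        const ITaskExecutor::Ptr& runOnDeviceExecutor,
        const ITaskExecutor::Ptr& readFromDeviceExecutor,
        const ITaskExecutor::Ptr& postProcessExecutor) {
    // Once created, each stage of an asynchronous inference run is scheduled
    // on its corresponding executor by the base class pipeline machinery.
    return std::make_shared<AcceleratorAsyncInferRequest>(syncRequest,
                                                          preprocessExecutor,
                                                          writeToDeviceExecutor,
                                                          runOnDeviceExecutor,
                                                          readFromDeviceExecutor,
                                                          postProcessExecutor);
}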