// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <array>
#include <chrono>
#include <cpp_interfaces/interface/ie_iinfer_request_internal.hpp>
#include <executable.hpp>
#include <ie_input_info.hpp>
#include <map>
#include <memory>
#include <ngraph/runtime/tensor.hpp>
#include <openvino/itt.hpp>
#include <string>
#include <vector>
namespace TemplatePlugin {

// forward declaration
class ExecutableNetwork;
// ! [infer_request:header]
class TemplateInferRequest : public InferenceEngine::IInferRequestInternal {
public:
    typedef std::shared_ptr<TemplateInferRequest> Ptr;
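    // Two construction paths: the first constructor takes the legacy
    // InferenceEngine input/output info maps, the second takes the
    // ov::Node-based inputs and outputs of the newer API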
    TemplateInferRequest(const InferenceEngine::InputsDataMap& networkInputs,
                         const InferenceEngine::OutputsDataMap& networkOutputs,
                         const std::shared_ptr<ExecutableNetwork>& executableNetwork);
    TemplateInferRequest(const std::vector<std::shared_ptr<const ov::Node>>& inputs,
                         const std::vector<std::shared_ptr<const ov::Node>>& outputs,
                         const std::shared_ptr<ExecutableNetwork>& executableNetwork);
    ~TemplateInferRequest();
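    // Synchronous inference: runs the pipeline stages below one after another
    // on the calling thread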
    void InferImpl() override;
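    // Reports per-stage execution times accumulated in _durations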
    std::map<std::string, InferenceEngine::InferenceEngineProfileInfo> GetPerformanceCounts() const override;
    // Pipeline stages used by the asynchronous infer request implementation;
    // each stage is assigned to a particular executor
    void inferPreprocess();
    void startPipeline();
    void waitPipeline();
    void inferPostprocess();
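    // Blob accessors addressed by input/output tensor name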
    InferenceEngine::Blob::Ptr GetBlob(const std::string& name) override;
    void SetBlob(const std::string& name, const InferenceEngine::Blob::Ptr& userBlob) override;
private:
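    // One-time setup helpers used during construction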
    void createInferRequest();
    void allocateDeviceBuffers();
    void allocateBlobs();
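    // Stage indices into _profilingTask and _durations; numOfStages sizes both arrays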
    enum { Preprocess, Postprocess, StartPipeline, WaitPipeline, numOfStages };
    std::shared_ptr<ExecutableNetwork> _executableNetwork;
    std::array<openvino::itt::handle_t, numOfStages> _profilingTask;
    // for performance counters
    std::array<std::chrono::duration<float, std::micro>, numOfStages> _durations;
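    // Presumably holds output blobs in the network's own precision until
    // postprocessing converts them into the user-visible blobs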
    InferenceEngine::BlobMap _networkOutputBlobs;
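    // Backend tensors bound to the executable's inputs and outputs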
    std::vector<std::shared_ptr<ngraph::runtime::Tensor>> _inputTensors;
    std::vector<std::shared_ptr<ngraph::runtime::Tensor>> _outputTensors;
    std::shared_ptr<ngraph::runtime::Executable> _executable;
};
// ! [infer_request:header]
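// A minimal sketch (not part of this header) of how an asynchronous infer
// request might wire the stage methods above to task executors; the executor
// and member names here are illustrative, see the plugin's async infer request
// implementation for the actual wiring:
//
//   _pipeline = {{cpuTaskExecutor, [this] {
//                     _inferRequest->inferPreprocess();
//                     _inferRequest->startPipeline();
//                 }},
//                {waitExecutor, [this] { _inferRequest->waitPipeline(); }},
//                {cpuTaskExecutor, [this] { _inferRequest->inferPostprocess(); }}};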
} // namespace TemplatePlugin