* update Auto docs
* update python snippets
* remove VPU; fix a mistake in python code
* update MYRIAD device full name
* update API names: the old API uses the name "Inference Engine API"; the new API uses the name "OpenVINO Runtime API 2.0"
* update tab name and code format
* fix AUTO4 format issue
* update set_property code
* auto draft
* move code into .cpp and .py; modify the device list part according to the review
* remove priority list in code and document; modify the beginning of the document; remove performance data; remove old API, use compile_model instead of set_property; add an image about CPU acceleration
* fix misprint and code that did not match the document
* try to fix doc build issue
* fix snippets code compile issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
31 lines
1.1 KiB
C++
#include <ie_core.hpp>

int main() {
    {
        //! [part1]
        // Inference Engine API
        InferenceEngine::Core ie;

        // Read a network in IR, PaddlePaddle, or ONNX format:
        InferenceEngine::CNNNetwork network = ie.ReadNetwork("sample.xml");

        // Load a network to AUTO using the default list of device candidates.
        // The following lines are equivalent:
        InferenceEngine::ExecutableNetwork exec0 = ie.LoadNetwork(network);
        InferenceEngine::ExecutableNetwork exec1 = ie.LoadNetwork(network, "AUTO");
        InferenceEngine::ExecutableNetwork exec2 = ie.LoadNetwork(network, "AUTO", {});

        // Optional
        // You can also specify the devices to be used by AUTO in its selection process.
        // The following lines are equivalent:
        InferenceEngine::ExecutableNetwork exec3 = ie.LoadNetwork(network, "AUTO:GPU,CPU");
        InferenceEngine::ExecutableNetwork exec4 = ie.LoadNetwork(network, "AUTO", {{"MULTI_DEVICE_PRIORITIES", "GPU,CPU"}});

        // Optional
        // The AUTO plugin can also be pre-configured (globally) with the explicit option:
        ie.SetConfig({{"MULTI_DEVICE_PRIORITIES", "GPU,CPU"}}, "AUTO");
        //! [part1]
    }
    return 0;
}
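The "AUTO:GPU,CPU" string passed to LoadNetwork above is simply the AUTO plugin name followed by a colon and a comma-separated device priority list, matching the "MULTI_DEVICE_PRIORITIES" config value. A minimal, library-free sketch of composing such a string; the helper name `make_auto_device_name` is hypothetical and not part of the Inference Engine API:

```cpp
#include <string>
#include <vector>

// Hypothetical helper: joins a device priority list into the
// "AUTO:<dev1>,<dev2>,..." form accepted by LoadNetwork.
// An empty list yields plain "AUTO" (default device candidates).
std::string make_auto_device_name(const std::vector<std::string>& priorities) {
    std::string name = "AUTO";
    for (size_t i = 0; i < priorities.size(); ++i) {
        name += (i == 0 ? ":" : ",");
        name += priorities[i];
    }
    return name;
}
```

For example, `make_auto_device_name({"GPU", "CPU"})` yields "AUTO:GPU,CPU", the same literal used for `exec3` above, while an empty list yields "AUTO", matching the default-selection calls.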