Files
openvino/docs/snippets/AUTO4.cpp
Yuan Hu 72e8661157 [Auto PLUGIN] update Auto docs (#10889)
* update Auto docs

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove vpu, fix a mistake in the python code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update MYRIAD device full name

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update API name

old API used the name Inference Engine API,
new API uses the name OpenVINO Runtime API 2.0 (see the sketch after this note)

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
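
For context, a minimal side-by-side sketch of what this renaming refers to (illustrative only, not code from this commit; the standard class and method names of the two API generations are assumed):

#include <inference_engine.hpp>    // Inference Engine API (old naming)
#include <openvino/openvino.hpp>   // OpenVINO Runtime API 2.0 (new naming)

// Old naming: ReadNetwork / LoadNetwork return CNNNetwork / ExecutableNetwork.
void inference_engine_api() {
    InferenceEngine::Core ie;
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("sample.xml");
    InferenceEngine::ExecutableNetwork exec_network = ie.LoadNetwork(network, "AUTO");
}

// New naming: read_model / compile_model return ov::Model / ov::CompiledModel.
void openvino_runtime_api() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
    ov::CompiledModel compiled_model = core.compile_model(model, "AUTO");
}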

* update tab name, and code format

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix AUTO4 format issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update set_property code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* auto draft

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* move code into .cpp and .py

modify the device list part according to the review

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove priority list in code and document

modify the beginning of the document
remove performance data
remove old API
use compile_model instead of set_property (see the sketch after this note)
add an image about CPU acceleration

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
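
To illustrate that change (a sketch assuming both forms target the AUTO device; not code from this commit): the model priority can be set globally through ov::Core::set_property, or passed per model to compile_model, which is the form AUTO4.cpp now uses.

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("sample.xml");

    // Earlier approach: configure the AUTO device globally, then compile.
    core.set_property("AUTO", ov::hint::model_priority(ov::hint::Priority::HIGH));
    ov::CompiledModel compiled_a = core.compile_model(model, "AUTO");

    // Current approach: pass the property directly to compile_model,
    // so each compiled model can carry its own priority.
    ov::CompiledModel compiled_b = core.compile_model(model, "AUTO",
        ov::hint::model_priority(ov::hint::Priority::MEDIUM));
    return 0;
}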

* fix misprints and code that does not match the document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix doc build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix snippets code compile issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-19 18:25:35 +03:00

37 lines
1.4 KiB
C++

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Read a network in IR, PaddlePaddle, or ONNX format:
    std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
    {
        //! [part4]
        // Example 1
        ov::CompiledModel compiled_model0 = core.compile_model(model, "AUTO",
            ov::hint::model_priority(ov::hint::Priority::HIGH));
        ov::CompiledModel compiled_model1 = core.compile_model(model, "AUTO",
            ov::hint::model_priority(ov::hint::Priority::MEDIUM));
        ov::CompiledModel compiled_model2 = core.compile_model(model, "AUTO",
            ov::hint::model_priority(ov::hint::Priority::LOW));
        /************
          Assume that all the devices (CPU, GPU, and MYRIAD) can support all the networks.
          Result: compiled_model0 will use GPU, compiled_model1 will use MYRIAD, compiled_model2 will use CPU.
        ************/

        // Example 2
        ov::CompiledModel compiled_model3 = core.compile_model(model, "AUTO",
            ov::hint::model_priority(ov::hint::Priority::LOW));
        ov::CompiledModel compiled_model4 = core.compile_model(model, "AUTO",
            ov::hint::model_priority(ov::hint::Priority::MEDIUM));
        ov::CompiledModel compiled_model5 = core.compile_model(model, "AUTO",
            ov::hint::model_priority(ov::hint::Priority::LOW));
        /************
          Assume that all the devices (CPU, GPU, and MYRIAD) can support all the networks.
          Result: compiled_model3 will use GPU, compiled_model4 will use GPU, compiled_model5 will use MYRIAD.
        ************/
        //! [part4]
    }
    return 0;
}
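
The snippet above only compiles the models; a minimal usage sketch for one of them (not part of AUTO4.cpp, and it assumes a single-input, single-output model) would be:

ov::InferRequest request = compiled_model0.create_infer_request();
ov::Tensor input = request.get_input_tensor();    // throws if the model has more than one input
// ... fill `input` with data ...
request.infer();                                  // AUTO dispatches to the device it selected for this model
ov::Tensor output = request.get_output_tensor();  // throws if the model has more than one output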