Integrate OpenVINO™ with Your Application
@sphinxdirective
.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_OV_Runtime_UG_Model_Representation
   openvino_docs_OV_Runtime_UG_Infer_request
@endsphinxdirective
Note
: Before you start using OpenVINO™ Runtime, make sure you set all environment variables during the installation. If you did not, follow the instructions from the Set the Environment Variables section in the installation guides:
- For Windows* 10
- For Linux*
- For macOS*
- To build an open source version, use the OpenVINO™ Runtime Build Instructions.
Use OpenVINO™ Runtime API to Implement Inference Pipeline
This section provides step-by-step instructions to implement a typical inference pipeline with the OpenVINO™ Runtime C++ API:
Step 1. Create OpenVINO™ Runtime Core
Include the following files to work with OpenVINO™ Runtime:
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/src/main.cpp
      :language: cpp
      :fragment: [include]

.. tab:: Python

   .. doxygensnippet:: docs/snippets/src/main.py
      :language: python
      :fragment: [import]
@endsphinxdirective
Use the following code to create an OpenVINO™ Core object to manage available devices and read model objects:
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/src/main.cpp
      :language: cpp
      :fragment: [part1]

.. tab:: Python

   .. doxygensnippet:: docs/snippets/src/main.py
      :language: python
      :fragment: [part1]
@endsphinxdirective
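The snippets above are included from docs/snippets/src/main.cpp and docs/snippets/src/main.py. For orientation only, a minimal C++ sketch of this step might look as follows (ov::Core and the openvino.hpp header are the public OpenVINO™ 2.0 API; the rest is illustrative):

```cpp
#include <openvino/openvino.hpp>

int main() {
    // ov::Core manages the available devices and reads model objects
    ov::Core core;
    return 0;
}
```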
Step 2. Compile the Model
The ov::CompiledModel class represents a device-specific compiled model. ov::CompiledModel allows you to get information about input and output ports by a tensor name or index.
Compile the model for a specific device using ov::Core::compile_model():
@sphinxdirective
.. tab:: C++

   .. tab:: IR

      .. doxygensnippet:: docs/snippets/src/main.cpp
         :language: cpp
         :fragment: [part2_1]

   .. tab:: ONNX

      .. doxygensnippet:: docs/snippets/src/main.cpp
         :language: cpp
         :fragment: [part2_2]

   .. tab:: PaddlePaddle

      .. doxygensnippet:: docs/snippets/src/main.cpp
         :language: cpp
         :fragment: [part2_3]

   .. tab:: ov::Model

      .. doxygensnippet:: docs/snippets/src/main.cpp
         :language: cpp
         :fragment: [part2_4]

.. tab:: Python

   .. tab:: IR

      .. doxygensnippet:: docs/snippets/src/main.py
         :language: python
         :fragment: [part2_1]

   .. tab:: ONNX

      .. doxygensnippet:: docs/snippets/src/main.py
         :language: python
         :fragment: [part2_2]

   .. tab:: PaddlePaddle

      .. doxygensnippet:: docs/snippets/src/main.py
         :language: python
         :fragment: [part2_3]

   .. tab:: ov::Model

      .. doxygensnippet:: docs/snippets/src/main.py
         :language: python
         :fragment: [part2_4]
@endsphinxdirective
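As an illustrative sketch of this step (the file name is a placeholder), reading a model and compiling it for a device in C++ typically looks like this:

```cpp
// Read a model from disk; "model.xml" is a placeholder IR path
// (ONNX and PaddlePaddle files can be passed to read_model the same way)
std::shared_ptr<ov::Model> model = core.read_model("model.xml");

// Compile the model for a specific device, e.g. CPU
ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
```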
The ov::Model object represents a model inside OpenVINO™ Runtime.
For more details, read the OpenVINO™ Model Representation article.
The code above creates a compiled model associated with a single hardware device from the model object. You can create as many compiled models as needed and use them simultaneously (within the limits of the hardware resources). To learn how to change the device configuration, read the Query device properties article.
Step 3. Create an Inference Request
The ov::InferRequest class provides methods for model inference in OpenVINO™ Runtime.
This section demonstrates a simple pipeline; to get more information about other use cases, read the dedicated InferRequest article.
Create an infer request using the following code:
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/src/main.cpp
      :language: cpp
      :fragment: [part3]

.. tab:: Python

   .. doxygensnippet:: docs/snippets/src/main.py
      :language: python
      :fragment: [part3]
@endsphinxdirective
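For reference, a one-line C++ sketch of this step:

```cpp
// Create an inference request from the compiled model
ov::InferRequest infer_request = compiled_model.create_infer_request();
```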
Step 4. Set Inputs
You can use external memory to create an ov::Tensor and use the ov::InferRequest::set_input_tensor method to put this tensor on the device:
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/src/main.cpp
      :language: cpp
      :fragment: [part4]

.. tab:: Python

   .. doxygensnippet:: docs/snippets/src/main.py
      :language: python
      :fragment: [part4]
@endsphinxdirective
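As a sketch, assuming a hypothetical single f32 input of shape {1, 3, 224, 224} (the shape and fill logic depend on your model):

```cpp
// Host buffer holding the preprocessed input data (illustrative size)
std::vector<float> input_data(1 * 3 * 224 * 224);
// ... fill input_data, e.g. with normalized image pixels ...

// Wrap the existing memory in a tensor without copying;
// the buffer must outlive the tensor
ov::Tensor input_tensor(ov::element::f32, ov::Shape{1, 3, 224, 224}, input_data.data());
infer_request.set_input_tensor(input_tensor);
```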
Step 5. Start Inference
OpenVINO™ Runtime supports inference in asynchronous or synchronous mode. Using the asynchronous API can improve the overall frame rate of an application: rather than waiting for inference to complete, the app can keep working on the host while the accelerator is busy. You can use ov::InferRequest::start_async() to start model inference in asynchronous mode and call ov::InferRequest::wait() to wait for the inference results:
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/src/main.cpp
      :language: cpp
      :fragment: [part5]

.. tab:: Python

   .. doxygensnippet:: docs/snippets/src/main.py
      :language: python
      :fragment: [part5]
@endsphinxdirective
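A minimal C++ sketch of the asynchronous flow:

```cpp
// Start inference without blocking the host thread
infer_request.start_async();
// ... do other host-side work here ...
// Block until the results are ready
infer_request.wait();
```

For synchronous inference, a single blocking ov::InferRequest::infer() call can be used instead.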
The asynchronous mode supports two methods to get the inference results:

- ov::InferRequest::wait_for() - waits until the specified timeout (in milliseconds) has elapsed or the inference result becomes available, whichever comes first.
- ov::InferRequest::wait() - waits until the inference result becomes available.

Both methods are thread-safe, which means they can be called from different threads without causing erroneous behavior or producing unpredictable results.
While a request is ongoing, all its methods except ov::InferRequest::cancel, ov::InferRequest::wait, and ov::InferRequest::wait_for throw the ov::Busy exception, indicating that the request is busy with computations (see the sketch below).
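For example, a sketch of waiting with a timeout (the 1000 ms value is arbitrary; std::chrono::milliseconds requires the <chrono> header):

```cpp
infer_request.start_async();
// Wait up to 1000 ms; returns true once the results are ready
bool done = infer_request.wait_for(std::chrono::milliseconds(1000));
if (!done) {
    // Still running; calling most other methods now would throw ov::Busy
}
```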
Step 6. Process the Inference Results
Go over the output tensors and process the inference results.
@sphinxdirective
.. tab:: C++

   .. doxygensnippet:: docs/snippets/src/main.cpp
      :language: cpp
      :fragment: [part6]

.. tab:: Python

   .. doxygensnippet:: docs/snippets/src/main.py
      :language: python
      :fragment: [part6]
@endsphinxdirective
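A sketch, assuming a single f32 output; real post-processing depends on the model:

```cpp
// Get the output tensor and iterate over its elements
ov::Tensor output = infer_request.get_output_tensor();
const float* out_data = output.data<float>();
for (size_t i = 0; i < output.get_size(); ++i) {
    // process out_data[i], e.g. find the class with the highest score
}
```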
Link and Build Your C++ Application with OpenVINO™ Runtime
The example uses CMake for project configuration.
- Create a structure for the project:

  ```
  project/
  ├── CMakeLists.txt  - CMake file to build
  ├── ...             - Additional folders like includes/
  └── src/            - source folder
      └── main.cpp
  build/              - build directory
  ...
  ```

- Include OpenVINO™ Runtime libraries in `project/CMakeLists.txt` (an illustrative sketch of such a file is shown below):

  @snippet snippets/CMakeLists.txt cmake:integration_example
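A sketch of what such a CMakeLists.txt might contain (the project and target names here are placeholders; the snippet referenced above is the authoritative version):

```cmake
cmake_minimum_required(VERSION 3.10)
project(ov_integration_sample)  # placeholder project name

# find_package relies on OpenVINO_DIR, set e.g. by setupvars.sh / setupvars.bat
find_package(OpenVINO REQUIRED)

add_executable(ov_integration_sample src/main.cpp)
target_link_libraries(ov_integration_sample PRIVATE openvino::runtime)
```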
To build your project using CMake with the default build tools currently available on your machine, execute the following commands:
Note
: Make sure you set environment variables first by running
`<INSTALL_DIR>/setupvars.sh` (or `setupvars.bat` for Windows). Otherwise the `OpenVINO_DIR` variable won't be configured properly to pass `find_package` calls.
cd build/
cmake ../project
cmake --build .
You can specify additional build options (e.g., to build a CMake project on Windows with specific build tools). For details, refer to the CMake page.
Run Your Application
Congratulations, you have made your first application with the OpenVINO™ toolkit. Now you may run it.