Integrate OpenVINO™ with Your Application
@sphinxdirective
.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_OV_Runtime_UG_Model_Representation
   openvino_docs_OV_Runtime_UG_Infer_request
@endsphinxdirective
Note: Before you start using OpenVINO™ Runtime, make sure that you set all environment variables during the installation. If you did not, follow the instructions from the Set the Environment Variables section in the installation guides:
- For Windows* 10
- For Linux*
- For macOS*
- To build an open source version, use the OpenVINO™ Runtime Build Instructions.
Use OpenVINO™ Runtime API to Implement Inference Pipeline
This section provides step-by-step instructions to implement a typical inference pipeline with the OpenVINO™ Runtime C++ or Python API:
Step 1. Create OpenVINO™ Runtime Core
Include the following files to work with OpenVINO™ Runtime:
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [include]
.. tab:: Python
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [import]
@endsphinxdirective
Use the following code to create the OpenVINO™ Core object, which manages available devices and reads model objects:
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part1]
.. tab:: Python
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part1]
@endsphinxdirective
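For illustration, a minimal C++ sketch of this step might look as follows. The model path is a placeholder; substitute the path to your own IR, ONNX, or PaddlePaddle model.

```cpp
#include <openvino/openvino.hpp>

int main() {
    // Create the OpenVINO Runtime Core: the central object that manages
    // available devices and reads models.
    ov::Core core;

    // Read a model from disk; "model.xml" is a placeholder path.
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");
    return 0;
}
```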
Step 2. Compile the Model
The ov::CompiledModel class represents a model compiled for a specific device. ov::CompiledModel allows you to get information about input and output ports by a tensor name or index.
Compile the model for a specific device using ov::Core::compile_model():
@sphinxdirective
.. tab:: C++
.. tab:: IR
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part2_1]
.. tab:: ONNX
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part2_2]
.. tab:: PaddlePaddle
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part2_3]
.. tab:: ov::Model
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part2_4]
.. tab:: Python
.. tab:: IR
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part2_1]
.. tab:: ONNX
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part2_2]
.. tab:: PaddlePaddle
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part2_3]
.. tab:: ov::Model
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part2_4]
@endsphinxdirective
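As an illustrative sketch continuing the code from Step 1, compiling the model for the CPU device could look like this. The device name and file path are placeholders for your actual setup.

```cpp
// Compile the model read in Step 1 for a specific device ("CPU" here).
ov::CompiledModel compiled_model = core.compile_model(model, "CPU");

// Alternatively, compile directly from a model file without a separate read_model call.
ov::CompiledModel compiled_from_file = core.compile_model("model.onnx", "CPU");
```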
The ov::Model object represents any model inside OpenVINO™ Runtime.
For more details, read the OpenVINO™ Model Representation article.
The code above creates a compiled model associated with a single hardware device from the model object. You can create as many compiled models as needed and use them simultaneously (limited only by the available hardware resources). To learn how to change the device configuration, read the Query device properties article.
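For example, continuing the sketch above, the same ov::Model can be compiled for several devices and the resulting compiled models used side by side. The device names below are placeholders for the hardware actually available on your machine.

```cpp
// One compiled model per target device; both can be used simultaneously.
ov::CompiledModel cpu_model = core.compile_model(model, "CPU");
ov::CompiledModel gpu_model = core.compile_model(model, "GPU");

// Device configuration can also be passed as properties at compile time,
// for example a performance hint (see the Query device properties article):
ov::CompiledModel tput_model = core.compile_model(
    model, "CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));
```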
Step 3. Create an Inference Request
The ov::InferRequest class provides methods for model inference in OpenVINO™ Runtime. Create an infer request using the following code (see the InferRequest documentation for more details):
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part3]
.. tab:: Python
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part3]
@endsphinxdirective
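Continuing the sketch from the previous steps, creating an infer request is a single call on the compiled model:

```cpp
// Create an inference request bound to the compiled model.
ov::InferRequest infer_request = compiled_model.create_infer_request();
```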
Step 4. Set Inputs
You can use external memory to create an ov::Tensor and use the ov::InferRequest::set_input_tensor method to put this tensor on the device:
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part4]
.. tab:: Python
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part4]
@endsphinxdirective
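As a hedged sketch continuing the code above, and assuming a single f32 input of a known shape, wrapping an application-owned buffer in a tensor could look like this. The buffer size, shape, and element type are placeholders for your real input data (this fragment also requires the `<vector>` header).

```cpp
// External memory owned by the application (placeholder size and contents).
std::vector<float> input_data(1 * 3 * 224 * 224, 0.0f);

// Wrap the external buffer in an ov::Tensor without copying it.
ov::Tensor input_tensor(ov::element::f32, ov::Shape{1, 3, 224, 224}, input_data.data());

// Set the tensor as the model input (for a single-input model).
infer_request.set_input_tensor(input_tensor);
```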
Step 5. Start Inference
OpenVINO™ Runtime supports inference in either synchronous or asynchronous mode. Using the Async API can improve an application's overall frame rate: rather than waiting for inference to complete, the application can keep working on the host while the accelerator is busy. You can use ov::InferRequest::start_async to start model inference in asynchronous mode and call ov::InferRequest::wait to wait for the inference results:
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part5]
.. tab:: Python
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part5]
@endsphinxdirective
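Continuing the sketch, the asynchronous flow boils down to two calls on the infer request; the synchronous alternative is shown as a comment:

```cpp
// Asynchronous mode: start inference and keep working on the host while the device is busy.
infer_request.start_async();
// ... other application logic can run here ...
infer_request.wait();  // block until the results are ready

// Synchronous alternative: infer() blocks until inference completes.
// infer_request.infer();
```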
This section demonstrates a simple pipeline. To get more information about other ways to perform inference, read the dedicated "Run inference" section.
Step 6. Process the Inference Results
Go over the output tensors and process the inference results.
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/src/main.cpp
:language: cpp
:fragment: [part6]
.. tab:: Python
.. doxygensnippet:: docs/snippets/src/main.py
:language: python
:fragment: [part6]
@endsphinxdirective
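For a single-output model, reading the results could look like the following sketch, which continues the code above; how the values are interpreted depends entirely on your model.

```cpp
// Get the (single) output tensor and read the results.
ov::Tensor output = infer_request.get_output_tensor();
const float* results = output.data<float>();  // typed view into the tensor memory

for (size_t i = 0; i < output.get_size(); ++i) {
    // Process results[i] as required by the application, e.g. find the top class.
}
```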
Link and Build Your C++ Application with OpenVINO™ Runtime
The example uses CMake for project configuration.
- Create a structure for the project:

  project/
  ├── CMakeLists.txt  - CMake file to build
  ├── ...             - Additional folders like includes/
  └── src/            - source folder
      └── main.cpp
  build/              - build directory
  ...

- Include OpenVINO™ Runtime libraries in `project/CMakeLists.txt` (a minimal illustrative CMakeLists.txt is sketched after this list):

  @snippet snippets/CMakeLists.txt cmake:integration_example
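For reference, a minimal CMakeLists.txt along these lines could look as follows. The project and target names are placeholders; the snippet referenced above remains the authoritative example.

```cmake
cmake_minimum_required(VERSION 3.10)
project(openvino_integration_sample)

# Locate the installed OpenVINO Runtime package (requires setupvars to be sourced
# or OpenVINO_DIR to point to the package's CMake config).
find_package(OpenVINO REQUIRED)

add_executable(openvino_integration_sample src/main.cpp)

# Link against the OpenVINO Runtime target.
target_link_libraries(openvino_integration_sample PRIVATE openvino::runtime)
```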
To build your project using CMake with the default build tools currently available on your machine, execute the following commands:
Note: Make sure you set the environment variables first by running `<INSTALL_DIR>/setupvars.sh` (or `setupvars.bat` for Windows). Otherwise the `OpenVINO_DIR` variable won't be configured properly to pass `find_package` calls.
cd build/
cmake ../project
cmake --build .
You can specify additional build options (e.g., to build a CMake project on Windows with specific build tools). Refer to the CMake documentation for details.
Run Your Application
Congratulations, you have made your first application with the OpenVINO™ toolkit. Now you may run it.