diff --git a/docs/OV_Runtime_UG/integrate_with_your_application.md b/docs/OV_Runtime_UG/integrate_with_your_application.md
index 0cc9e630f3c..6bc3644da58 100644
--- a/docs/OV_Runtime_UG/integrate_with_your_application.md
+++ b/docs/OV_Runtime_UG/integrate_with_your_application.md
@@ -63,7 +63,7 @@ Use the following code to create OpenVINO™ Core to manage available devices an
 
 ### Step 2. Compile the Model
 
-`ov::CompiledModel` class represents a device specific compiled model. `ov::CompiledModel` allows you to get information inputs or output ports by a tensor name or index.
+The `ov::CompiledModel` class represents a device-specific compiled model. `ov::CompiledModel` allows you to get input and output ports by a tensor name or index; this approach is aligned with the majority of frameworks.
 
 Compile the model for a specific device using `ov::Core::compile_model()`:
 
diff --git a/docs/OV_Runtime_UG/migration_ov_2_0/common_inference_pipeline.md b/docs/OV_Runtime_UG/migration_ov_2_0/common_inference_pipeline.md
index f349c67d570..36780fa507d 100644
--- a/docs/OV_Runtime_UG/migration_ov_2_0/common_inference_pipeline.md
+++ b/docs/OV_Runtime_UG/migration_ov_2_0/common_inference_pipeline.md
@@ -2,6 +2,7 @@ Usually to inference model with the OpenVINO™ Runtime an user needs to do the following steps in the application pipeline:
 - 1. Create Core object
+  - 1.1. (Optional) Load extensions
 - 2. Read model from the disk
   - 2.1. (Optional) Model preprocessing
 - 3. Load the model to the device
@@ -22,6 +23,18 @@ OpenVINO™ Runtime API 2.0:
 
 @snippet docs/snippets/ov_common.cpp ov_api_2_0:create_core
 
+### 1.1 (Optional) Load extensions
+
+To load a model with custom operations, you need to add extensions for these operations.
+We highly recommend using the [OpenVINO Extensibility API](../../Extensibility_UG/Intro.md) to write extensions, but if you already have old extensions, you can load them into the new OpenVINO™ Runtime:
+
+Inference Engine API:
+
+@snippet docs/snippets/ie_common.cpp ie:load_old_extension
+
+OpenVINO™ Runtime API 2.0:
+
+@snippet docs/snippets/ov_common.cpp ov_api_2_0:load_old_extension
+
 ## 2. Read model from the disk
 
 Inference Engine API:
@@ -225,4 +238,4 @@ OpenVINO™ Runtime API 2.0 processes outputs:
 
    :language: cpp
    :fragment: [ov_api_2_0:get_output_tensor_aligned]
 
-@endsphinxdirective
\ No newline at end of file
+@endsphinxdirective
diff --git a/docs/OV_Runtime_UG/model_representation.md b/docs/OV_Runtime_UG/model_representation.md
index e7288f56b9e..4e49108f2fe 100644
--- a/docs/OV_Runtime_UG/model_representation.md
+++ b/docs/OV_Runtime_UG/model_representation.md
@@ -10,7 +10,11 @@ Each operation in `ov::Model` has the `std::shared_ptr<ov::Node>` type.
 
 For details on how to build a model in OpenVINO™ Runtime, see the [Build a Model in OpenVINO™ Runtime](@ref build_model) section.
 
-OpenVINO™ Runtime allows using tensor names or indexes to work wit model inputs/outputs. To get model input/output ports, use the `ov::Model::inputs()` or `ov::Model::outputs()` methods respectively.
+OpenVINO™ Runtime offers several ways to work with model inputs/outputs:
+ - The `ov::Model::inputs()`/`ov::Model::outputs()` methods return vectors of all input/output ports.
+ - For a model that has only one input or output, you can call `ov::Model::input()` or `ov::Model::output()` without arguments to get the input or output port, respectively.
+ - The `ov::Model::input()` and `ov::Model::output()` methods can take the index of an input or output from the framework model to get a specific port by index.
+ - You can pass the tensor name of an input or output from the original framework model to `ov::Model::input()` or `ov::Model::output()` to get a specific port by name.
+It means that you no longer need an additional mapping of names from the framework to OpenVINO, as was required before; OpenVINO™ Runtime allows using native framework tensor names.
 
 @sphinxdirective
diff --git a/docs/snippets/ie_common.cpp b/docs/snippets/ie_common.cpp
index a594e6e59de..decf326217f 100644
--- a/docs/snippets/ie_common.cpp
+++ b/docs/snippets/ie_common.cpp
@@ -96,5 +96,8 @@ int main() {
         // process output data
     }
     //! [ie:get_output_tensor]
+    //! [ie:load_old_extension]
+    core.AddExtension(std::make_shared<InferenceEngine::Extension>("path_to_extension_library.so"));
+    //! [ie:load_old_extension]
     return 0;
-}
\ No newline at end of file
+}
diff --git a/docs/snippets/ov_common.cpp b/docs/snippets/ov_common.cpp
index f42cb78e646..9e56a71f30a 100644
--- a/docs/snippets/ov_common.cpp
+++ b/docs/snippets/ov_common.cpp
@@ -1,6 +1,8 @@
 // Copyright (C) 2018-2021 Intel Corporation
 // SPDX-License-Identifier: Apache-2.0
 //
+#include <ie_extension.h>
+
 #include <openvino/core/core.hpp>
 #include <openvino/runtime/runtime.hpp>
@@ -84,7 +86,7 @@ int main() {
     // which restarts inference inside one more time, so two inferences happen here
     auto restart_once = true;
-    infer_request.set_callback([&, restart_once] (std::exception_ptr exception_ptr) mutable {
+    infer_request.set_callback([&, restart_once](std::exception_ptr exception_ptr) mutable {
         if (exception_ptr) {
             // procces exception or rethrow it.
             std::rethrow_exception(exception_ptr);
@@ -110,5 +112,11 @@ int main() {
 
     outputs_aligned(infer_request);
 
+    OPENVINO_SUPPRESS_DEPRECATED_START
+    //! [ov_api_2_0:load_old_extension]
+    core.add_extension(std::make_shared<InferenceEngine::Extension>("path_to_extension_library.so"));
+    //! [ov_api_2_0:load_old_extension]
+    OPENVINO_SUPPRESS_DEPRECATED_END
+
     return 0;
-}
\ No newline at end of file
+}