Added more information about tensor names (#11070)
* Added more information about tensor names
* Fixed comment and added documentation for extensions
* Fixed code style
* Fixed typo
@@ -63,7 +63,7 @@ Use the following code to create OpenVINO™ Core to manage available devices an

### Step 2. Compile the Model

`ov::CompiledModel` class represents a device-specific compiled model. `ov::CompiledModel` allows you to get information about input or output ports by a tensor name or index.

`ov::CompiledModel` class represents a device-specific compiled model. `ov::CompiledModel` allows you to get information about input or output ports by a tensor name or index; this approach is aligned with the majority of frameworks.

Compile the model for a specific device using `ov::Core::compile_model()`:
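The compile step can be sketched as follows. This is a minimal sketch, assuming a placeholder model path `model.xml` and the `CPU` device; adjust both for your setup:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Read the model from disk ("model.xml" is a placeholder path)
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");

    // Compile the model for a specific device ("CPU" here)
    ov::CompiledModel compiled_model = core.compile_model(model, "CPU");

    // Query ports of the compiled model by index or by tensor name
    ov::Output<const ov::Node> input_by_index = compiled_model.input(0);
    // ov::Output<const ov::Node> input_by_name = compiled_model.input("tensor_name");

    return 0;
}
```

The same `compile_model()` call accepts the device name as a string, so switching devices does not change the rest of the pipeline.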
@@ -2,6 +2,7 @@
Usually, to infer a model with OpenVINO™ Runtime, a user needs to do the following steps in the application pipeline:

- 1. Create Core object
- 1.1. (Optional) Load extensions
- 2. Read model from the disk
- 2.1. (Optional) Model preprocessing
- 3. Load the model to the device
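The steps above can be sketched with the OpenVINO™ Runtime API 2.0 as follows. This is a minimal sketch, assuming a placeholder `model.xml` file and the `CPU` device:

```cpp
#include <openvino/openvino.hpp>

int main() {
    // 1. Create Core object
    ov::Core core;

    // 2. Read model from the disk ("model.xml" is a placeholder)
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");

    // 3. Load (compile) the model to the device
    ov::CompiledModel compiled_model = core.compile_model(model, "CPU");

    // Create an inference request and run inference
    ov::InferRequest infer_request = compiled_model.create_infer_request();
    infer_request.infer();

    return 0;
}
```

The optional sub-steps (loading extensions, preprocessing) slot in before `read_model()` and `compile_model()` respectively.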
@@ -22,6 +23,18 @@ OpenVINO™ Runtime API 2.0:

@snippet docs/snippets/ov_common.cpp ov_api_2_0:create_core

### 1.1 (Optional) Load extensions

To load a model with custom operations, you need to add extensions for these operations. We highly recommend using the [OpenVINO Extensibility API](../../Extensibility_UG/Intro.md) to write extensions, but if you already have old extensions, you can load them into the new OpenVINO™ Runtime:

Inference Engine API:

@snippet docs/snippets/ie_common.cpp ie:load_old_extension

OpenVINO™ Runtime API 2.0:

@snippet docs/snippets/ov_common.cpp ov_api_2_0:load_old_extension

## 2. Read model from the disk

Inference Engine API:
@@ -225,4 +238,4 @@ OpenVINO™ Runtime API 2.0 processes outputs:
:language: cpp
:fragment: [ov_api_2_0:get_output_tensor_aligned]

@endsphinxdirective
@endsphinxdirective

@@ -10,7 +10,11 @@ Each operation in `ov::Model` has the `std::shared_ptr<ov::Node>` type.

For details on how to build a model in OpenVINO™ Runtime, see the [Build a Model in OpenVINO™ Runtime](@ref build_model) section.

OpenVINO™ Runtime allows using tensor names or indexes to work with model inputs/outputs. To get model input/output ports, use the `ov::Model::inputs()` or `ov::Model::outputs()` methods respectively.

OpenVINO™ Runtime allows you to use different approaches to work with model inputs/outputs:

- The `ov::Model::inputs()`/`ov::Model::outputs()` methods return a vector of all input/output ports.
- For a model that has only one input or output, you can use `ov::Model::input()` or `ov::Model::output()` without arguments to get the input or output port, respectively.
- `ov::Model::input()` and `ov::Model::output()` can be called with the index of an input or output from the framework model to get a specific port by index.
- You can use the tensor name of an input or output from the original framework model with `ov::Model::input()` or `ov::Model::output()` to get a specific port. This means you don't need any additional mapping of names from the framework to OpenVINO; OpenVINO™ Runtime allows using native framework tensor names.
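The approaches above can be sketched as follows. This is a minimal sketch, assuming a placeholder `model.xml` whose original framework graph contains a tensor named `input_tensor` (both names are assumptions for illustration):

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");  // placeholder path

    // All input/output ports as vectors
    std::vector<ov::Output<ov::Node>> inputs = model->inputs();
    std::vector<ov::Output<ov::Node>> outputs = model->outputs();

    // Single input/output (valid only when the model has exactly one of each)
    ov::Output<ov::Node> input = model->input();
    ov::Output<ov::Node> output = model->output();

    // Specific port by index of the framework model's input/output
    ov::Output<ov::Node> first_input = model->input(0);

    // Specific port by native framework tensor name ("input_tensor" is a placeholder)
    ov::Output<ov::Node> named_input = model->input("input_tensor");

    return 0;
}
```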
@sphinxdirective
@@ -96,5 +96,8 @@ int main() {
// process output data
}
//! [ie:get_output_tensor]
//! [ie:load_old_extension]
core.AddExtension(std::make_shared<InferenceEngine::Extension>("path_to_extension_library.so"));
//! [ie:load_old_extension]
return 0;
}
}

@@ -1,6 +1,8 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <ie_extension.h>

#include <openvino/core/core.hpp>
#include <openvino/runtime/runtime.hpp>

@@ -84,7 +86,7 @@ int main() {
// which restarts inference inside one more time, so two inferences happen here

auto restart_once = true;
infer_request.set_callback([&, restart_once] (std::exception_ptr exception_ptr) mutable {
infer_request.set_callback([&, restart_once](std::exception_ptr exception_ptr) mutable {
    if (exception_ptr) {
        // process exception or rethrow it.
        std::rethrow_exception(exception_ptr);
@@ -110,5 +112,11 @@ int main() {

outputs_aligned(infer_request);

OPENVINO_SUPPRESS_DEPRECATED_START
//! [ov_api_2_0:load_old_extension]
core.add_extension(std::make_shared<InferenceEngine::Extension>("path_to_extension_library.so"));
//! [ov_api_2_0:load_old_extension]
OPENVINO_SUPPRESS_DEPRECATED_END

return 0;
}
}