[DOCS] Tensorflow models support in 23.0 update (#15974)
* TensorFlow support update: adding TensorFlow to the main snippet.
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
This commit is contained in:
parent
b1d0e152e3
commit
21ac61fef5
@@ -12,7 +12,7 @@

@endsphinxdirective

Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).

[OpenVINO™ supports several model formats](../MO_DG/prepare_model/convert_model/supported_model_formats.md) and allows converting them to its own format, OpenVINO IR, providing a tool dedicated to this task.
@@ -20,7 +20,7 @@ Every deep learning workflow begins with obtaining a model. You can choose to pr

The approach to fully convert a model is considered the default choice, as it allows using the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.

-Conversion is not required for ONNX and PaddlePaddle models, as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
+Conversion is not required for ONNX, PaddlePaddle, and TensorFlow models (check [TensorFlow Frontend Capabilities and Limitations](../resources/tensorflow_frontend.md)), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
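The following is a minimal sketch of the direct import path described above, assuming a frozen TensorFlow graph saved as `model.pb` (the file name is only an example):

```python
from openvino.runtime import Core

core = Core()

# Import the TensorFlow model directly, without converting it to OpenVINO IR first.
model = core.read_model("model.pb")

# Compile the imported model for a device of choice and run inference as usual.
compiled_model = core.compile_model(model, "CPU")
```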
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:

* [See the supported formats and how to use them in your project](../MO_DG/prepare_model/convert_model/supported_model_formats.md)
@@ -53,13 +53,13 @@ You might prefer implementing a custom operation class if you already have a gen

Mapping of a custom operation is implemented differently, depending on the model format used for import. You may choose one of the following:

-1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX) or PaddlePaddle formats, then one of the classes from [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with the `--extensions` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the `read_model` method. Python API is also available for runtime model import.
+1. If a model is represented in the ONNX (including models exported from PyTorch in ONNX), PaddlePaddle or TensorFlow formats, then one of the classes from [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with the `--extensions` option in Model Optimizer or when a model is imported directly to OpenVINO Runtime using the `read_model` method. A Python API is also available for runtime model import.

-2. If a model is represented in the TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
+2. If a model is represented in the Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.

-Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and Apache MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with `read_model` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
+The existence of two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle and TensorFlow) and legacy frontends (Caffe, Kaldi and Apache MXNet). Model Optimizer can use both types of frontends, in contrast to the direct import of a model with the `read_model` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend.

-If you are implementing extensions for new ONNX or PaddlePaddle frontends and plan to use the `--extensions` option in Model Optimizer for model conversion, then the extensions should be:
+If you are implementing extensions for the new ONNX, PaddlePaddle or TensorFlow frontends and plan to use the `--extensions` option in Model Optimizer for model conversion, then the extensions should be:

1. Implemented in C++ only.
@@ -2,7 +2,7 @@

The goal of this chapter is to explain how to use Frontend extension classes to facilitate mapping of custom operations from framework model representation to OpenVINO representation. Refer to [Introduction to OpenVINO Extension](Intro.md) to understand the entire flow.

-This API is applicable for new frontends only, which exist for ONNX and PaddlePaddle. If a different model format is used, follow legacy [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) guide.
+This API is applicable to new frontends only, which exist for ONNX, PaddlePaddle and TensorFlow. If a different model format is used, follow the legacy [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) guide.

> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new), which demonstrates extension development details based on a minimalistic `Identity` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
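As a rough illustration of how such an extension is typically consumed at runtime, the sketch below loads a compiled extension library before importing a model that contains the custom operation; the library name and model path are hypothetical placeholders:

```python
from openvino.runtime import Core

core = Core()

# Register a compiled extension library (hypothetical name) that implements
# the mapping for the custom operation.
core.add_extension("libcustom_frontend_extension.so")

# With the extension registered, the frontend can resolve the custom operation
# while importing the model directly.
model = core.read_model("model_with_custom_op.onnx")
```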
@@ -20,7 +20,7 @@ This section covers the case when a single operation in framework representation

5. Each attribute in the OpenVINO operation can be initialized from one of the attributes of the original operation or by some predefined constant value. Values of copied attributes cannot contain expressions; a value is accepted as-is, so the type of the value should be compatible.

-> **NOTE**: `OpExtension` class is currently available for ONNX frontend only. PaddlePaddle frontend has named inputs and outputs for operation (not indexed) therefore OpExtension mapping is not applicable for this case.
+> **NOTE**: The `OpExtension` class is currently available for the ONNX and TensorFlow frontends. The PaddlePaddle frontend has named inputs and outputs for operations (not indexed); therefore, `OpExtension` mapping is not applicable in this case.

The next example maps the ONNX operation with type [“Identity”](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity) to the OpenVINO template extension `Identity` class.
@@ -20,16 +20,15 @@

**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.

-**ONNX, PaddlePaddle** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX and PaddlePaddle, see how to [Integrate OpenVINO™ with Your Application](../../../OV_Runtime_UG/integrate_with_your_application.md).
+**ONNX, PaddlePaddle, TensorFlow** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX, PaddlePaddle, or TensorFlow, see how to [Integrate OpenVINO™ with Your Application](../../../OV_Runtime_UG/integrate_with_your_application.md).

-**TensorFlow, PyTorch, MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to OpenVINO IR before running inference. The conversion is done with Model Optimizer and in some cases may involve intermediate steps.
+**MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to OpenVINO IR before running inference. The conversion is done with Model Optimizer and in some cases may involve intermediate steps.

Refer to the following articles for details on conversion for different formats and models:

* [How to convert ONNX](./Convert_Model_From_ONNX.md)
* [How to convert PaddlePaddle](./Convert_Model_From_Paddle.md)
* [How to convert TensorFlow](./Convert_Model_From_TensorFlow.md)
* [How to convert PyTorch](./Convert_Model_From_PyTorch.md)
* [How to convert MXNet](./Convert_Model_From_MxNet.md)
* [How to convert Caffe](./Convert_Model_From_Caffe.md)
* [How to convert Kaldi](./Convert_Model_From_Kaldi.md)
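Conversion to OpenVINO IR can also be scripted with the Model Optimizer Python API. The following is a minimal sketch, assuming a TensorFlow model saved as `model.pb` (the file name is only an example); conversion details for other formats are covered in the guides listed above:

```python
from openvino.tools.mo import convert_model
from openvino.runtime import serialize

# Convert the framework model to an in-memory OpenVINO model.
ov_model = convert_model("model.pb")

# Save the converted model as OpenVINO IR (.xml and .bin).
serialize(ov_model, "model.xml", "model.bin")
```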
@@ -48,8 +48,7 @@ The granularity of OpenVINO packages may vary for different distribution types.

- The main library `openvino` is used by users' C++ applications to link against. The library provides all OpenVINO Runtime public APIs, including both API 2.0 and the previous Inference Engine and nGraph APIs. For C language applications, `openvino_c` is additionally required for distribution.
- The "optional" plugin libraries like `openvino_intel_cpu_plugin` (matching the `openvino_.+_plugin` pattern) are used to provide inference capabilities on specific devices or additional capabilities like [Hetero Execution](../hetero_execution.md) and [Multi-Device Execution](../multi_device.md).
-- The "optional" plugin libraries like `openvino_ir_frontend` (matching `openvino_.+_frontend`) are used to provide capabilities to read models of different file formats such as OpenVINO IR,
-TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../../resources/tensorflow_frontend.md)), ONNX, and PaddlePaddle.
+- The "optional" plugin libraries like `openvino_ir_frontend` (matching `openvino_.+_frontend`) are used to provide capabilities to read models of different file formats such as OpenVINO IR, TensorFlow, ONNX, and PaddlePaddle.

Here the term "optional" means that if the application does not use the capability enabled by the plugin, the plugin library or a package with the plugin is not needed in the final distribution.
@@ -113,7 +113,7 @@ The `HETERO`, `MULTI`, `BATCH` and `AUTO` execution modes can also be used expli

OpenVINO Runtime uses frontend libraries dynamically to read models in different formats:

- `openvino_ir_frontend` is used to read OpenVINO IR.
-- `openvino_tensorflow_frontend` is used to read TensorFlow file format. Check [TensorFlow Frontend Capabilities and Limitations](../../resources/tensorflow_frontend.md).
+- `openvino_tensorflow_frontend` is used to read TensorFlow file format.
- `openvino_onnx_frontend` is used to read ONNX file format.
- `openvino_paddle_frontend` is used to read Paddle file format.
@@ -1,3 +1,3 @@

version https://git-lfs.github.com/spec/v1
-oid sha256:b08a4dce7e3e9e38107540e78ba0b584b27c65ed5dba89d37ee365210222d710
-size 54897
+oid sha256:ccc7704d2a27f7491729767443f3d2bdd0ccc930f16fde631a7f9c67d158297a
+size 71369
@@ -75,105 +75,106 @@ Use the following code to create OpenVINO™ Core to manage available devices an

Compile the model for a specific device using `ov::Core::compile_model()`:

-@sphinxtabset
-@sphinxtab{C++}
-@sphinxtabset
-@sphinxtab{IR}
-@snippet docs/snippets/src/main.cpp part2_1
-@endsphinxtab
-@sphinxtab{ONNX}
-@snippet docs/snippets/src/main.cpp part2_2
-@endsphinxtab
-@sphinxtab{PaddlePaddle}
-@snippet docs/snippets/src/main.cpp part2_3
-@endsphinxtab
-@sphinxtab{ov::Model}
-@snippet docs/snippets/src/main.cpp part2_4
-@endsphinxtab
-@endsphinxtabset
-@endsphinxtab
-@sphinxtab{Python}
-@sphinxtabset
-@sphinxtab{IR}
-@snippet docs/snippets/src/main.py part2_1
-@endsphinxtab
-@sphinxtab{ONNX}
-@snippet docs/snippets/src/main.py part2_2
-@endsphinxtab
-@sphinxtab{PaddlePaddle}
-@snippet docs/snippets/src/main.py part2_3
-@endsphinxtab
-@sphinxtab{ov::Model}
-@snippet docs/snippets/src/main.py part2_4
-@endsphinxtab
-@endsphinxtabset
-@endsphinxtab
-@sphinxtab{C}
-@sphinxtabset
-@sphinxtab{IR}
-@snippet docs/snippets/src/main.c part2_1
-@endsphinxtab
-@sphinxtab{ONNX}
-@snippet docs/snippets/src/main.c part2_2
-@endsphinxtab
-@sphinxtab{PaddlePaddle}
-@snippet docs/snippets/src/main.c part2_3
-@endsphinxtab
-@sphinxtab{ov::Model}
-@snippet docs/snippets/src/main.c part2_4
-@endsphinxtab
-@endsphinxtabset
-@endsphinxtab
-@endsphinxtabset

+@sphinxdirective
+.. tab:: C++
+    .. tab:: IR
+        .. doxygensnippet:: docs/snippets/src/main.cpp
+           :language: cpp
+           :fragment: [part2_1]
+    .. tab:: ONNX
+        .. doxygensnippet:: docs/snippets/src/main.cpp
+           :language: cpp
+           :fragment: [part2_2]
+    .. tab:: PaddlePaddle
+        .. doxygensnippet:: docs/snippets/src/main.cpp
+           :language: cpp
+           :fragment: [part2_3]
+    .. tab:: TensorFlow
+        .. doxygensnippet:: docs/snippets/src/main.cpp
+           :language: cpp
+           :fragment: [part2_4]
+    .. tab:: ov::Model
+        .. doxygensnippet:: docs/snippets/src/main.cpp
+           :language: cpp
+           :fragment: [part2_5]
+.. tab:: Python
+    .. tab:: IR
+        .. doxygensnippet:: docs/snippets/src/main.py
+           :language: python
+           :fragment: [part2_1]
+    .. tab:: ONNX
+        .. doxygensnippet:: docs/snippets/src/main.py
+           :language: python
+           :fragment: [part2_2]
+    .. tab:: PaddlePaddle
+        .. doxygensnippet:: docs/snippets/src/main.py
+           :language: python
+           :fragment: [part2_3]
+    .. tab:: TensorFlow
+        .. doxygensnippet:: docs/snippets/src/main.py
+           :language: python
+           :fragment: [part2_4]
+    .. tab:: ov::Model
+        .. doxygensnippet:: docs/snippets/src/main.py
+           :language: python
+           :fragment: [part2_5]
+.. tab:: C
+    .. tab:: IR
+        .. doxygensnippet:: docs/snippets/src/main.c
+           :language: cpp
+           :fragment: [part2_1]
+    .. tab:: ONNX
+        .. doxygensnippet:: docs/snippets/src/main.c
+           :language: cpp
+           :fragment: [part2_2]
+    .. tab:: PaddlePaddle
+        .. doxygensnippet:: docs/snippets/src/main.c
+           :language: cpp
+           :fragment: [part2_3]
+    .. tab:: TensorFlow
+        .. doxygensnippet:: docs/snippets/src/main.c
+           :language: cpp
+           :fragment: [part2_4]
+    .. tab:: ov::Model
+        .. doxygensnippet:: docs/snippets/src/main.c
+           :language: cpp
+           :fragment: [part2_5]
+@endsphinxdirective
The `ov::Model` object represents any model inside the OpenVINO™ Runtime.
For more details, please read the article about [OpenVINO™ Model representation](model_representation.md).
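For orientation, the following is a minimal sketch of building a trivial `ov::Model` programmatically through the Python API; the single ReLU layer and the tensor shape are purely illustrative:

```python
import numpy as np
from openvino.runtime import Model, opset8 as ops

# Build a trivial model: one parameter followed by a ReLU activation.
parameter = ops.parameter([1, 3, 224, 224], np.float32, name="input")
relu = ops.relu(parameter)
model = Model([relu], [parameter], "trivial_relu_model")
```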
@@ -592,7 +592,7 @@ The Inference Engine API processes outputs as they are of the `I32` precision (*

API 2.0 processes outputs as they are of:

- the `I32` precision (**not** aligned with the original model) for OpenVINO IR v10 models, to match the [old behavior](@ref differences_api20_ie).
-- the `I64` precision (aligned with the original model) for OpenVINO IR v11, ONNX, ov::Model and PaddlePaddle models, to match the [new behavior](@ref differences_api20_ie).
+- the `I64` precision (aligned with the original model) for OpenVINO IR v11, ONNX, ov::Model, PaddlePaddle and TensorFlow models, to match the [new behavior](@ref differences_api20_ie).
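As a quick way to observe this behavior, the sketch below inspects the element type reported for a compiled model's output; the model path is a hypothetical example:

```python
from openvino.runtime import Core

core = Core()
compiled_model = core.compile_model("model.pb", "AUTO")

# With API 2.0, the reported element type follows the original model
# (for example, i64 for an integer output), rather than being forced to i32.
print(compiled_model.output(0).get_element_type())
```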
@sphinxtabset
@@ -17,8 +17,7 @@

@endsphinxdirective

-OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR),
-TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../resources/tensorflow_frontend.md)), ONNX, or PaddlePaddle model and execute it on preferred devices.
+OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, ONNX, or PaddlePaddle model and execute it on preferred devices.

OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain a complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and the underlying plugin backend.
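The device plugins described above are selected simply by the device name passed at compile time. A minimal sketch, assuming an IR file named `model.xml`:

```python
from openvino.runtime import Core

core = Core()

# List the devices whose plugins are available in this installation, e.g. ['CPU', 'GPU'].
print(core.available_devices)

# Compile the model for a specific device; the corresponding plugin is loaded on demand.
model = core.read_model("model.xml")
compiled_on_cpu = core.compile_model(model, "CPU")
```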
@@ -10,7 +10,7 @@ Most available preprocessing steps can also be performed via command-line option

## Code example - Saving Model with Preprocessing to OpenVINO IR

When some preprocessing steps cannot be integrated into the execution graph using Model Optimizer command-line options (for example, `YUV`->`RGB` color space conversion, `Resize`, etc.), it is possible to write simple code that:
-- Reads the original model (OpenVINO IR, TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../resources/tensorflow_frontend.md)), ONNX, PaddlePaddle).
+- Reads the original model (OpenVINO IR, TensorFlow, ONNX, PaddlePaddle).
- Adds the preprocessing/postprocessing steps.
- Saves the resulting model as IR (`.xml` and `.bin`), as outlined in the sketch after this list.
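A minimal sketch of such code using the Python `PrePostProcessor`; the model path, layout, and the BGR-to-RGB conversion are illustrative assumptions rather than values from the original article:

```python
from openvino.preprocess import ColorFormat, PrePostProcessor
from openvino.runtime import Core, Layout, serialize

core = Core()
model = core.read_model("model.pb")  # hypothetical original model

# Describe the input tensor and the preprocessing steps to embed into the graph.
ppp = PrePostProcessor(model)
ppp.input().tensor().set_layout(Layout("NHWC")).set_color_format(ColorFormat.BGR)
ppp.input().preprocess().convert_color(ColorFormat.RGB)
model = ppp.build()

# Save the model, with preprocessing embedded, as OpenVINO IR.
serialize(model, "model_with_preprocessing.xml", "model_with_preprocessing.bin")
```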
@@ -16,9 +16,8 @@ This guide presents how to use OpenVINO securely with protected models.

After a model is optimized by the OpenVINO Model Optimizer, it's deployed
to target devices in the OpenVINO Intermediate Representation (OpenVINO IR) format. An optimized
-model is stored on edge device and is executed by the OpenVINO Runtime.
-TensorFlow (check [TensorFlow Frontend Capabilities and Limitations](../resources/tensorflow_frontend.md)), ONNX
-and PaddlePaddle models can be read natively by OpenVINO Runtime as well.
+model is stored on an edge device and is executed by the OpenVINO Runtime.
+TensorFlow, ONNX and PaddlePaddle models can be read natively by OpenVINO Runtime as well.

Encrypting and optimizing the model before deploying it to the edge device can be
used to protect deep-learning models. The edge device should keep the stored model
@@ -73,13 +73,13 @@ Glossary of terms used in OpenVINO™

| OpenVINO™ Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, GNA, etc.

| OpenVINO™ API
-| The basic default API for all supported devices, which allows you to load a model from Intermediate Representation or convert from ONNX, PaddlePaddle file formars, set input and output formats and execute the model on various devices.
+| The basic default API for all supported devices, which allows you to load a model from Intermediate Representation or convert from ONNX, PaddlePaddle, TensorFlow file formats, set input and output formats and execute the model on various devices.

| OpenVINO™ Runtime
| A C++ library with a set of classes that you can use in your application to infer input tensors and get the results.

| <code>ov::Model</code>
-| A class of the Model that OpenVINO™ Runtime reads from IR or converts from ONNX, PaddlePaddle formats. Consists of model structure, weights and biases.
+| A class of the Model that OpenVINO™ Runtime reads from IR or converts from ONNX, PaddlePaddle, TensorFlow formats. Consists of model structure, weights and biases.

| <code>ov::CompiledModel</code>
| An instance of the compiled model which allows the OpenVINO™ Runtime to request (several) infer requests and perform inference synchronously or asynchronously.
@@ -30,14 +30,19 @@ ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "model.pdmodel", "AUTO", 0, &compiled_model);
//! [part2_3]
}

//! [part2_4]
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "model.pb", "AUTO", 0, &compiled_model);
//! [part2_4]
}

//! [part2_5]
// Construct a model
ov_model_t* model = NULL;
ov_core_read_model(core, "model.xml", NULL, &model);
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model(core, model, "AUTO", 0, &compiled_model);
-//! [part2_4]
+//! [part2_5]

//! [part3]
@@ -29,6 +29,11 @@ ov::CompiledModel compiled_model = core.compile_model("model.pdmodel", "AUTO");
}
{
//! [part2_4]
ov::CompiledModel compiled_model = core.compile_model("model.pb", "AUTO");
//! [part2_4]
}
{
//! [part2_5]
auto create_model = []() {
    std::shared_ptr<ov::Model> model;
    // To construct a model, please follow
@@ -37,7 +42,7 @@ auto create_model = []() {
};
std::shared_ptr<ov::Model> model = create_model();
compiled_model = core.compile_model(model, "AUTO");
-//! [part2_4]
+//! [part2_5]
}

//! [part3]
@@ -20,6 +20,9 @@ compiled_model = core.compile_model("model.onnx", "AUTO")
compiled_model = core.compile_model("model.pdmodel", "AUTO")
#! [part2_3]

#! [part2_4]
compiled_model = core.compile_model("model.pb", "AUTO")
#! [part2_4]

#! [part2_5]
def create_model():
    # This example shows how to create ov::Function
    #
@@ -31,7 +34,7 @@ def create_model():

model = create_model()
compiled_model = core.compile_model(model, "AUTO")
-#! [part2_4]
+#! [part2_5]

#! [part3]
infer_request = compiled_model.create_infer_request()