Merge remote-tracking branch 'upstream/master' into itikhono/ts/refactoring
commit 169a722212
@@ -447,16 +447,10 @@ jobs:

- script: |
    $(RUN_PREFIX) $(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineCAPITests.xml
  env:
    DATA_PATH: $(MODELS_PATH)
    MODELS_PATH: $(MODELS_PATH)
  displayName: 'IE CAPITests'

- script: |
    $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
  env:
    DATA_PATH: $(MODELS_PATH)
    MODELS_PATH: $(MODELS_PATH)
  displayName: 'OV CAPITests'

- task: CMake@1
@@ -315,16 +315,10 @@ jobs:

- script: |
    call $(SETUPVARS) && $(INSTALL_TEST_DIR)\InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-InferenceEngineCAPITests.xml
  env:
    DATA_PATH: $(MODELS_PATH)
    MODELS_PATH: $(MODELS_PATH)
  displayName: 'IE CAPITests'

- script: |
    call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_capi_test.xml
  env:
    DATA_PATH: $(MODELS_PATH)
    MODELS_PATH: $(MODELS_PATH)
  displayName: 'OV CAPITests'

- task: PublishTestResults@2
@@ -2,7 +2,13 @@

With Model Optimizer you can increase your model's efficiency by providing an additional shape definition, with these two parameters: `--input_shape` and `--static_shape`.

@anchor when_to_specify_input_shapes
@sphinxdirective

.. _when_to_specify_input_shapes:

@endsphinxdirective


## Specifying --input_shape Command-line Parameter
Model Optimizer supports conversion of models with dynamic input shapes that contain undefined dimensions.
However, if the shape of data is not going to change from one inference request to another,
@@ -8,169 +8,212 @@

troubleshooting_reshape_errors

@endsphinxdirective

OpenVINO™ enables you to change model input shape during the application runtime.
It may be useful when you want to feed the model an input that has a different size than the model input shape.
The following instructions are for cases where you need to change the model input shape repeatedly.

.. note::

   If you need to do this only once, prepare a model with updated shapes via
   :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
   For more information, refer to the :ref:`Specifying --input_shape Command-line Parameter <when_to_specify_input_shapes>` article.


OpenVINO™ enables you to change model input shape during the application runtime. It may be useful when you want to feed the model an input that has a different size than the model input shape. The following instructions are for cases where you need to change the model input shape repeatedly.
The reshape method
++++++++++++++++++++

> **NOTE**: If you need to do this only once, prepare a model with updated shapes via [Model Optimizer](@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide). For more information, refer to the [Specifying --input_shape Command-line Parameter](@ref when_to_specify_input_shapes) article.
The reshape method is used as ``ov::Model::reshape`` in C++ and
`Model.reshape <api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape>`__
in Python. The method updates input shapes and propagates them down to the outputs
of the model through all intermediate layers. The code below is an example of how
to set a new batch size with the ``reshape`` method:

### The reshape method
.. tab-set::

The reshape method is used as `ov::Model::reshape` in C++ and [Model.reshape](api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape) in Python. The method updates input shapes and propagates them down to the outputs of the model through all intermediate layers.
The code below is an example of how to set a new batch size with the `reshape` method:
   .. tab-item:: C++
      :sync: cpp

@sphinxtabset
      .. doxygensnippet:: docs/snippets/ShapeInference.cpp
         :language: cpp
         :fragment: picture_snippet

@sphinxtab{C++}
   .. tab-item:: Python
      :sync: py

@snippet snippets/ShapeInference.cpp picture_snippet
      .. doxygensnippet:: docs/snippets/ShapeInference.py
         :language: python
         :fragment: picture_snippet

@endsphinxtab
The diagram below presents the results of using the method, where the size of
model input is changed with an image input:

@sphinxtab{Python}
.. image:: _static/images/original_vs_reshaped_model.svg

@snippet docs/snippets/ShapeInference.py picture_snippet
When using the ``reshape`` method, you may take one of the approaches:

@endsphinxtab
.. _usage_of_reshape_method:

@endsphinxtabset

The diagram below presents the results of using the method, where the size of model input is changed with an image input:
1. You can pass a new shape to the method in order to change the input shape of
   the model with a single input. See the example of adjusting spatial dimensions to the input image:


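The effect of the batch reshape described above can be pictured with a toy helper in plain Python (a hypothetical sketch, not the OpenVINO API — it only mimics how a new batch value replaces the leading dimension of an input shape):

```python
def reshape_batch(shape, new_batch):
    """Return a copy of `shape` with the leading (batch) dimension replaced."""
    return [new_batch] + list(shape[1:])

# e.g. an NCHW image input reshaped from batch 1 to batch 8
print(reshape_batch([1, 3, 224, 224], 8))  # -> [8, 3, 224, 224]
```

In the real API, ``reshape`` additionally propagates the new shape through all intermediate layers down to the model outputs, which this toy helper does not attempt.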
.. tab-set::

When using the `reshape` method, you may take one of the approaches:
   .. tab-item:: C++
      :sync: cpp

@anchor usage_of_reshape_method

@sphinxdirective

#. You can pass a new shape to the method in order to change the input shape of the model with a single input. See the example of adjusting spatial dimensions to the input image:

.. tab:: C++

   .. doxygensnippet:: docs/snippets/ShapeInference.cpp
      :language: cpp
      :fragment: spatial_reshape

.. tab:: Python


   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ShapeInference.py
         :language: python
         :fragment: simple_spatials_change


To do the opposite - to resize the input image to match the input shapes of the model, use the :doc:`pre-processing API <openvino_docs_OV_UG_Preprocessing_Overview>`.
To do the opposite - to resize the input image to match the input shapes of the model,
use the :doc:`pre-processing API <openvino_docs_OV_UG_Preprocessing_Overview>`.


#. You can express a reshape plan, specifying the input by the port, the index, and the tensor name:
2. You can express a reshape plan, specifying the input by the port, the index, and the tensor name:

.. tab:: Port

   .. tab:: C++

      `map<ov::Output<ov::Node>, ov::PartialShape>` specifies input by passing actual input port:

      .. doxygensnippet:: docs/snippets/ShapeInference.cpp
         :language: cpp
         :fragment: [obj_to_shape]

   .. tab:: Python

      `openvino.runtime.Output` dictionary key specifies input by passing actual input object.
      Dictionary values representing new shapes could be `PartialShape`:

      .. doxygensnippet:: docs/snippets/ShapeInference.py
         :language: python
         :fragment: [obj_to_shape]

.. tab:: Index

   .. tab:: C++

      `map<size_t, ov::PartialShape>` specifies input by its index:

      .. doxygensnippet:: docs/snippets/ShapeInference.cpp
         :language: cpp
         :fragment: [idx_to_shape]

   .. tab:: Python

      `int` dictionary key specifies input by its index.
      Dictionary values representing new shapes could be `tuple`:

      .. doxygensnippet:: docs/snippets/ShapeInference.py
         :language: python
         :fragment: [idx_to_shape]

.. tab:: Tensor Name

   .. tab:: C++

      `map<string, ov::PartialShape>` specifies input by its name:

      .. doxygensnippet:: docs/snippets/ShapeInference.cpp
         :language: cpp
         :fragment: [name_to_shape]

   .. tab:: Python

      `str` dictionary key specifies input by its name.
      Dictionary values representing new shapes could be `str`:

      .. doxygensnippet:: docs/snippets/ShapeInference.py
         :language: python
         :fragment: [name_to_shape]
.. tab-set::

@endsphinxdirective
   .. tab-item:: Port

      .. tab-set::

         .. tab-item:: C++
            :sync: cpp

            ``map<ov::Output<ov::Node>, ov::PartialShape>`` specifies input by passing actual input port:

            .. doxygensnippet:: docs/snippets/ShapeInference.cpp
               :language: cpp
               :fragment: [obj_to_shape]

         .. tab-item:: Python
            :sync: py

            ``openvino.runtime.Output`` dictionary key specifies input by passing actual input object.
            Dictionary values representing new shapes could be ``PartialShape``:

            .. doxygensnippet:: docs/snippets/ShapeInference.py
               :language: python
               :fragment: [obj_to_shape]

   .. tab-item:: Index

      .. tab-set::

         .. tab-item:: C++
            :sync: cpp

            ``map<size_t, ov::PartialShape>`` specifies input by its index:

            .. doxygensnippet:: docs/snippets/ShapeInference.cpp
               :language: cpp
               :fragment: [idx_to_shape]

         .. tab-item:: Python
            :sync: py

            ``int`` dictionary key specifies input by its index.
            Dictionary values representing new shapes could be ``tuple``:

            .. doxygensnippet:: docs/snippets/ShapeInference.py
               :language: python
               :fragment: [idx_to_shape]

   .. tab-item:: Tensor Name

      .. tab-set::

         .. tab-item:: C++
            :sync: cpp

            ``map<string, ov::PartialShape>`` specifies input by its name:

            .. doxygensnippet:: docs/snippets/ShapeInference.cpp
               :language: cpp
               :fragment: [name_to_shape]

         .. tab-item:: Python
            :sync: py

            ``str`` dictionary key specifies input by its name.
            Dictionary values representing new shapes could be ``str``:

            .. doxygensnippet:: docs/snippets/ShapeInference.py
               :language: python
               :fragment: [name_to_shape]
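The three dictionary-key forms above (port object, index, tensor name) can be mimicked in plain Python. The sketch below is hypothetical and not part of the OpenVINO API; the tensor name "data" is made up for illustration:

```python
def normalize_plan(plan, input_names):
    """Map a reshape plan keyed by input index (int) or tensor name (str)
    to a plan keyed uniformly by tensor name."""
    out = {}
    for key, shape in plan.items():
        # an int key selects the input by position; a str key is already a name
        name = input_names[key] if isinstance(key, int) else key
        out[name] = list(shape)
    return out

print(normalize_plan({0: (1, 3, 448, 448)}, ["data"]))  # -> {'data': [1, 3, 448, 448]}
```

The real ``Model.reshape`` accepts such dictionaries directly; this helper only illustrates how the different key types identify the same input.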

You can find the usage scenarios of the `reshape` method in [Hello Reshape SSD Samples](@ref openvino_inference_engine_samples_hello_reshape_ssd_README).
You can find the usage scenarios of the ``reshape`` method in
:doc:`Hello Reshape SSD Samples <openvino_inference_engine_samples_hello_reshape_ssd_README>`.

> **NOTE**: In some cases, models may not be ready to be reshaped. Therefore, a new input shape cannot be set with either [Model Optimizer](@ref openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide) or the `reshape` method.
.. note::

### The set_batch method
   In some cases, models may not be ready to be reshaped. Therefore, a new input
   shape cannot be set with either :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
   or the ``reshape`` method.

The set_batch method
++++++++++++++++++++

The meaning of the model batch may vary depending on the model design.
To change the batch dimension of the model, [set the layout](@ref declare_model_s_layout) and call the `set_batch` method.
To change the batch dimension of the model, :ref:`set the layout <declare_model_s_layout>` and call the ``set_batch`` method.

@sphinxtabset
.. tab-set::

@sphinxtab{C++}
   .. tab-item:: C++
      :sync: cpp

@snippet snippets/ShapeInference.cpp set_batch
      .. doxygensnippet:: docs/snippets/ShapeInference.cpp
         :language: cpp
         :fragment: set_batch

@endsphinxtab
   .. tab-item:: Python
      :sync: py

@sphinxtab{Python}
      .. doxygensnippet:: docs/snippets/ShapeInference.py
         :language: python
         :fragment: set_batch

@snippet docs/snippets/ShapeInference.py set_batch

@endsphinxtab
The ``set_batch`` method is a high-level API of the reshape functionality, so all
information about the ``reshape`` method implications is applicable for ``set_batch``
too, including the troubleshooting section.

@endsphinxtabset
Once you set the input shape of the model, call the ``compile_model`` method to
get a ``CompiledModel`` object for inference with updated shapes.

The `set_batch` method is a high-level API of the reshape functionality, so all information about the `reshape` method implications is applicable for `set_batch` too, including the troubleshooting section.
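Conceptually, once the layout is known, ``set_batch`` only has to locate the batch ('N') dimension and replace it. A toy sketch in plain Python (hypothetical helper, not the OpenVINO API):

```python
def set_batch(shape, layout, batch):
    """Replace the batch ('N') dimension of `shape`, located via `layout`."""
    idx = layout.index("N")  # position of the batch dimension in the layout string
    new_shape = list(shape)
    new_shape[idx] = batch
    return new_shape

print(set_batch([1, 3, 224, 224], "NCHW", 4))  # -> [4, 3, 224, 224]
```

This is why setting the layout first matters: without it, the batch dimension cannot be identified reliably.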
There are other approaches to change model input shapes during the stage of
:ref:`IR generation <when_to_specify_input_shapes>` or :ref:`model representation <openvino_docs_OV_UG_Model_Representation>` in OpenVINO Runtime.

Once you set the input shape of the model, call the `compile_model` method to get a `CompiledModel` object for inference with updated shapes.

There are other approaches to change model input shapes during the stage of [IR generation](@ref when_to_specify_input_shapes) or [model representation](@ref openvino_docs_OV_UG_Model_Representation) in OpenVINO Runtime.

@sphinxdirective

.. important::

   Shape-changing functionality could be used to turn a dynamic model input into a static one and vice versa. Always set static shapes when the shape of data is NOT going to change from one inference to another. Setting static shapes can avoid memory and runtime overheads for dynamic shapes, which may vary depending on the hardware plugin and model used. For more information, refer to :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>`.

   Shape-changing functionality could be used to turn a dynamic model input into a
   static one and vice versa. Always set static shapes when the shape of data is
   NOT going to change from one inference to another. Setting static shapes can
   avoid memory and runtime overheads for dynamic shapes, which may vary depending
   on the hardware plugin and model used. For more information, refer to
   :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>`.


Additional Resources
####################

* :doc:`Extensibility documentation <openvino_docs_Extensibility_UG_Intro>` - describes a special mechanism in OpenVINO that allows adding support of shape inference for custom operations.
* `ov::Model::reshape <classov_1_1Model.html#doxid-classov-1-1-model-1aa21aff80598d5089d591888a4c7f33ae>`__ - in OpenVINO Runtime C++ API
* `Model.reshape <api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape>`__ - in OpenVINO Runtime Python API.
* :doc:`Dynamic Shapes <openvino_docs_OV_UG_DynamicShapes>`
* :doc:`OpenVINO samples <openvino_docs_OV_UG_Samples_Overview>`
* :doc:`Preprocessing API <openvino_docs_OV_UG_Preprocessing_Overview>`

@endsphinxdirective


## Additional Resources

* [Extensibility documentation](@ref openvino_docs_Extensibility_UG_Intro) - describes a special mechanism in OpenVINO that allows adding support of shape inference for custom operations.
* `ov::Model::reshape` - in OpenVINO Runtime C++ API
* [Model.reshape](api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape) - in OpenVINO Runtime Python API.
* [Dynamic Shapes](@ref openvino_docs_OV_UG_DynamicShapes)
* [OpenVINO samples](@ref openvino_docs_OV_UG_Samples_Overview)
* [Preprocessing API](@ref openvino_docs_OV_UG_Preprocessing_Overview)
@@ -1,391 +1,409 @@

# Configuring Devices {#openvino_2_0_configure_devices}

Inference Engine API provides the [ability to configure devices](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html) via configuration keys and [get device specific metrics](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html#getmetric). The values taken from `InferenceEngine::Core::GetConfig` are requested by the string name, while the return type is `InferenceEngine::Parameter`, leaving users unsure what type is actually stored in this parameter.
@sphinxdirective

API 2.0 solves these issues by introducing [properties](../supported_plugins/config_properties.md), which unify metrics and configuration key concepts. The main advantage is that they have the C++ type:

```
static constexpr Property<std::string> full_name{"FULL_DEVICE_NAME"};
```
Inference Engine API provides the `ability to configure devices <https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html>`__ via configuration keys and `get device specific metrics <https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_InferenceEngine_QueryAPI.html#getmetric>`__. The values taken from `InferenceEngine::Core::GetConfig <namespaceInferenceEngine.html#doxid-namespace-inference-engine-1aff2231f886c9f8fc9c226fd343026789>`__ are requested by the string name, while the return type is `InferenceEngine::Parameter <namespaceInferenceEngine.html#doxid-namespace-inference-engine-1aff2231f886c9f8fc9c226fd343026789>`__, leaving users unsure what type is actually stored in this parameter.

API 2.0 solves these issues by introducing :doc:`properties <openvino_docs_OV_UG_query_api>`, which unify metrics and configuration key concepts. The main advantage is that they have the C++ type:

.. code-block::

   static constexpr Property<std::string> full_name{"FULL_DEVICE_NAME"};


where the property can be requested from an inference device as:

@snippet ov_properties_migration.cpp core_get_ro_property

The snippets in the following sections demonstrate the device configurations for migrating from Inference Engine to API 2.0.
.. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
   :language: cpp
   :fragment: core_get_ro_property

## Setting Configuration Values

The snippets in the following sections demonstrate the device configurations for migrating from Inference Engine to API 2.0.

Setting Configuration Values
############################

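The advantage of typed property keys can be sketched in plain Python. This is a toy stand-in for the C++ `Property<T>` template shown above, not the actual OpenVINO class:

```python
class Property:
    """Toy typed property key: pairs a string name with an expected value type."""
    def __init__(self, name, type_):
        self.name = name
        self.type = type_

    def accepts(self, value):
        # a typed key can validate values up front, unlike a bare string key
        return isinstance(value, self.type)

full_name = Property("FULL_DEVICE_NAME", str)
print(full_name.accepts("Intel CPU"), full_name.accepts(42))  # -> True False
```

With a plain string key (as in `GetConfig`), this type information is lost and the caller must guess what the returned `Parameter` holds.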
**Inference Engine API**

@sphinxtabset

@sphinxtab{C++}
.. tab-set::

@sphinxtabset
   .. tab-item:: C++
      :sync: cpp

@sphinxtab{Devices}
      .. tab-set::

@snippet docs/snippets/ov_properties_migration.cpp core_set_config
         .. tab-item:: Devices
            :sync: devices

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: core_set_config

@sphinxtab{Model Loading}
         .. tab-item:: Model Loading
            :sync: model-loading

@snippet docs/snippets/ov_properties_migration.cpp core_load_network
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: core_load_network

@endsphinxtab
         .. tab-item:: Execution
            :sync: execution

@sphinxtab{Execution}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: executable_network_set_config

@snippet docs/snippets/ov_properties_migration.cpp executable_network_set_config
   .. tab-item:: Python
      :sync: py

@endsphinxtab
      .. tab-set::

@endsphinxtabset
         .. tab-item:: Devices
            :sync: devices

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: core_set_config

@sphinxtab{Python}
         .. tab-item:: Model Loading
            :sync: model-loading

@sphinxtabset
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: core_load_network

@sphinxtab{Devices}
         .. tab-item:: Execution
            :sync: execution

@snippet docs/snippets/ov_properties_migration.py core_set_config
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: executable_network_set_config

@endsphinxtab
   .. tab-item:: C
      :sync: c

@sphinxtab{Model Loading}
      .. tab-set::

@snippet docs/snippets/ov_properties_migration.py core_load_network
         .. tab-item:: Devices
            :sync: devices

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: core_set_config

@sphinxtab{Execution}
         .. tab-item:: Model Loading
            :sync: model-loading

@snippet docs/snippets/ov_properties_migration.py executable_network_set_config
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: core_load_network

@endsphinxtab
         .. tab-item:: Execution
            :sync: execution

@endsphinxtabset
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: executable_network_set_config

@endsphinxtab

@sphinxtab{C}

@sphinxtabset

@sphinxtab{Devices}

@snippet docs/snippets/ov_properties_migration.c core_set_config

@endsphinxtab

@sphinxtab{Model Loading}

@snippet docs/snippets/ov_properties_migration.c core_load_network

@endsphinxtab

@sphinxtab{Execution}

@snippet docs/snippets/ov_properties_migration.c executable_network_set_config

@endsphinxtab

@endsphinxtabset

@endsphinxtab

@endsphinxtabset

**API 2.0**

@sphinxtabset

@sphinxtab{C++}
.. tab-set::

@sphinxtabset
   .. tab-item:: C++
      :sync: cpp

@sphinxtab{Devices}
      .. tab-set::

@snippet docs/snippets/ov_properties_migration.cpp core_set_property
         .. tab-item:: Devices
            :sync: devices

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: core_set_property

@sphinxtab{Model Loading}
         .. tab-item:: Model Loading
            :sync: model-loading

@snippet docs/snippets/ov_properties_migration.cpp core_compile_model
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: core_compile_model

@endsphinxtab
         .. tab-item:: Execution
            :sync: execution

@sphinxtab{Execution}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: compiled_model_set_property

@snippet docs/snippets/ov_properties_migration.cpp compiled_model_set_property
   .. tab-item:: Python
      :sync: py

@endsphinxtab
      .. tab-set::

@endsphinxtabset
         .. tab-item:: Devices
            :sync: devices

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: core_set_property

@sphinxtab{Python}
         .. tab-item:: Model Loading
            :sync: model-loading

@sphinxtabset
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: core_compile_model

@sphinxtab{Devices}
         .. tab-item:: Execution
            :sync: execution

@snippet docs/snippets/ov_properties_migration.py core_set_property
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: compiled_model_set_property

@endsphinxtab
   .. tab-item:: C
      :sync: c

@sphinxtab{Model Loading}
      .. tab-set::

@snippet docs/snippets/ov_properties_migration.py core_compile_model
         .. tab-item:: Devices
            :sync: devices

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: core_set_property

@sphinxtab{Execution}
         .. tab-item:: Model Loading
            :sync: model-loading

@snippet docs/snippets/ov_properties_migration.py compiled_model_set_property
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: core_compile_model

@endsphinxtab
         .. tab-item:: Execution
            :sync: execution

@endsphinxtabset
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: compiled_model_set_property

@endsphinxtab

@sphinxtab{C}

@sphinxtabset

@sphinxtab{Devices}

@snippet docs/snippets/ov_properties_migration.c core_set_property

@endsphinxtab

@sphinxtab{Model Loading}

@snippet docs/snippets/ov_properties_migration.c core_compile_model

@endsphinxtab

@sphinxtab{Execution}

@snippet docs/snippets/ov_properties_migration.c compiled_model_set_property

@endsphinxtab

@endsphinxtabset

@endsphinxtab

@endsphinxtabset

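The unification of configuration keys and metrics into a single property lookup can be pictured with a toy dictionary in plain Python (purely illustrative; the key names mirror the document, the values are made up):

```python
# one lookup table stands in for the former GetConfig/GetMetric split;
# a single get_property call retrieves either kind of value by its string key
device_properties = {
    "FULL_DEVICE_NAME": "Intel CPU (toy value)",  # formerly a "metric"
    "PERF_COUNT": False,                          # formerly a "config key"
}

def get_property(name):
    return device_properties[name]

print(get_property("PERF_COUNT"))  # -> False
```

In the real API 2.0, `Core::get_property` plays this role and returns a value of the property's declared C++ type instead of an untyped `Parameter`.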
## Getting Information

**Inference Engine API**

@sphinxtabset

@sphinxtab{C++}
.. tab-set::

@sphinxtabset
   .. tab-item:: C++
      :sync: cpp

@sphinxtab{Device Configuration}
      .. tab-set::

@snippet docs/snippets/ov_properties_migration.cpp core_get_config
         .. tab-item:: Device Configuration
            :sync: device-config

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: core_get_config

@sphinxtab{Device metrics}
         .. tab-item:: Device metrics
            :sync: device-metrics

@snippet docs/snippets/ov_properties_migration.cpp core_get_metric
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: core_get_metric

@endsphinxtab
         .. tab-item:: Execution config
            :sync: execution-config

@sphinxtab{Execution config}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: executable_network_get_config

@snippet docs/snippets/ov_properties_migration.cpp executable_network_get_config
         .. tab-item:: Execution metrics
            :sync: execution-metrics

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: executable_network_get_metric

@sphinxtab{Execution metrics}
   .. tab-item:: Python
      :sync: py

@snippet docs/snippets/ov_properties_migration.cpp executable_network_get_metric
      .. tab-set::

@endsphinxtab
         .. tab-item:: Device Configuration
            :sync: device-config

@endsphinxtabset
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: core_get_config

@endsphinxtab
         .. tab-item:: Device metrics
            :sync: device-metrics

@sphinxtab{Python}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: core_get_metric

@sphinxtabset
         .. tab-item:: Execution config
            :sync: execution-config

@sphinxtab{Device Configuration}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: executable_network_get_config

@snippet docs/snippets/ov_properties_migration.py core_get_config
         .. tab-item:: Execution metrics
            :sync: execution-metrics

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: executable_network_get_metric

@sphinxtab{Device metrics}
   .. tab-item:: C
      :sync: c

@snippet docs/snippets/ov_properties_migration.py core_get_metric
      .. tab-set::

@endsphinxtab
         .. tab-item:: Device Configuration
            :sync: device-config

@sphinxtab{Execution config}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: core_get_config

@snippet docs/snippets/ov_properties_migration.py executable_network_get_config
         .. tab-item:: Device metrics
            :sync: device-metrics

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: core_get_metric

@sphinxtab{Execution metrics}
         .. tab-item:: Execution config
            :sync: execution-config

@snippet docs/snippets/ov_properties_migration.py executable_network_get_metric
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: executable_network_get_config

@endsphinxtab
         .. tab-item:: Execution metrics
            :sync: execution-metrics

@endsphinxtabset
            .. doxygensnippet:: docs/snippets/ov_properties_migration.c
               :language: c
               :fragment: executable_network_get_metric

@endsphinxtab

@sphinxtab{C}

@sphinxtabset

@sphinxtab{Device Configuration}

@snippet docs/snippets/ov_properties_migration.c core_get_config

@endsphinxtab

@sphinxtab{Device metrics}

@snippet docs/snippets/ov_properties_migration.c core_get_metric

@endsphinxtab

@sphinxtab{Execution config}

@snippet docs/snippets/ov_properties_migration.c executable_network_get_config

@endsphinxtab

@sphinxtab{Execution metrics}

@snippet docs/snippets/ov_properties_migration.c executable_network_get_metric

@endsphinxtab

@endsphinxtabset

@endsphinxtab

@endsphinxtabset

**API 2.0**

@sphinxtabset

@sphinxtab{C++}
.. tab-set::

@sphinxtabset
   .. tab-item:: C++
      :sync: cpp

@sphinxtab{Device Configuration}
      .. tab-set::

@snippet docs/snippets/ov_properties_migration.cpp core_get_rw_property
         .. tab-item:: Device Configuration
            :sync: device-config

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: core_get_rw_property

@sphinxtab{Device metrics}
         .. tab-item:: Device metrics
            :sync: device-metrics

@snippet docs/snippets/ov_properties_migration.cpp core_get_ro_property
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: core_get_ro_property

@endsphinxtab
         .. tab-item:: Execution config
            :sync: execution-config

@sphinxtab{Execution config}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: compiled_model_get_rw_property

@snippet docs/snippets/ov_properties_migration.cpp compiled_model_get_rw_property
         .. tab-item:: Execution metrics
            :sync: execution-metrics

@endsphinxtab
            .. doxygensnippet:: docs/snippets/ov_properties_migration.cpp
               :language: cpp
               :fragment: compiled_model_get_ro_property

@sphinxtab{Execution metrics}
   .. tab-item:: Python
      :sync: py

@snippet docs/snippets/ov_properties_migration.cpp compiled_model_get_ro_property
      .. tab-set::

@endsphinxtab
         .. tab-item:: Device Configuration
            :sync: device-config

@endsphinxtabset
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: core_get_rw_property

@endsphinxtab
         .. tab-item:: Device metrics
            :sync: device-metrics

@sphinxtab{Python}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: core_get_ro_property

@sphinxtabset
         .. tab-item:: Execution config
            :sync: execution-config

@sphinxtab{Device Configuration}
            .. doxygensnippet:: docs/snippets/ov_properties_migration.py
               :language: python
               :fragment: compiled_model_get_rw_property

@snippet docs/snippets/ov_properties_migration.py core_get_rw_property
|
||||
.. tab-item:: Execution metrics
|
||||
:sync: execution-metrics
|
||||
|
||||
@endsphinxtab
|
||||
.. doxygensnippet:: docs/snippets/ov_properties_migration.py
|
||||
:language: python
|
||||
:fragment: compiled_model_get_ro_property
|
||||
|
||||
@sphinxtab{Device metrics}
|
||||
.. tab-item:: C
|
||||
:sync: c
|
||||
|
||||
@snippet docs/snippets/ov_properties_migration.py core_get_ro_property
|
||||
.. tab-set::
|
||||
|
||||
@endsphinxtab
|
||||
.. tab-item:: Device Configuration
|
||||
:sync: device-config
|
||||
|
||||
@sphinxtab{Execution config}
|
||||
.. doxygensnippet:: docs/snippets/ov_properties_migration.c
|
||||
:language: c
|
||||
:fragment: core_get_rw_property
|
||||
|
||||
@snippet docs/snippets/ov_properties_migration.py compiled_model_get_rw_property
|
||||
.. tab-item:: Device metrics
|
||||
:sync: device-metrics
|
||||
|
||||
@endsphinxtab
|
||||
.. doxygensnippet:: docs/snippets/ov_properties_migration.c
|
||||
:language: c
|
||||
:fragment: core_get_ro_property
|
||||
|
||||
@sphinxtab{Execution metrics}
|
||||
.. tab-item:: Execution config
|
||||
:sync: execution-config
|
||||
|
||||
@snippet docs/snippets/ov_properties_migration.py compiled_model_get_ro_property
|
||||
.. doxygensnippet:: docs/snippets/ov_properties_migration.c
|
||||
:language: c
|
||||
:fragment: compiled_model_get_rw_property
|
||||
|
||||
@endsphinxtab
|
||||
.. tab-item:: Execution metrics
|
||||
:sync: execution-metrics
|
||||
|
||||
@endsphinxtabset
|
||||
.. doxygensnippet:: docs/snippets/ov_properties_migration.c
|
||||
:language: c
|
||||
:fragment: compiled_model_get_ro_property
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{C}
|
||||
|
||||
@sphinxtabset
|
||||
|
||||
@sphinxtab{Device Configuration}
|
||||
|
||||
@snippet docs/snippets/ov_properties_migration.c core_get_rw_property
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Device metrics}
|
||||
|
||||
@snippet docs/snippets/ov_properties_migration.c core_get_ro_property
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Execution config}
|
||||
|
||||
@snippet docs/snippets/ov_properties_migration.c compiled_model_get_rw_property
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Execution metrics}
|
||||
|
||||
@snippet docs/snippets/ov_properties_migration.c compiled_model_get_ro_property
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
@endsphinxdirective
|
||||
|
@@ -1,38 +1,56 @@
|
||||
# Model Creation in OpenVINO™ Runtime {#openvino_2_0_model_creation}
|
||||
|
||||
OpenVINO™ Runtime with API 2.0 includes the nGraph engine as a common part. The `ngraph` namespace has been changed to `ov`, but all other parts of the ngraph API have been preserved.
|
||||
@sphinxdirective
|
||||
|
||||
OpenVINO™ Runtime with API 2.0 includes the nGraph engine as a common part. The ``ngraph`` namespace has been changed to ``ov``, but all other parts of the ngraph API have been preserved.
|
||||
|
||||
The code snippets below show how to change the application code for migration to API 2.0.
|
||||
|
||||
## nGraph API
|
||||
nGraph API
|
||||
####################
|
||||
|
||||
@sphinxtabset
|
||||
.. tab-set::
|
||||
|
||||
@sphinxtab{C++}
|
||||
@snippet docs/snippets/ngraph.cpp ngraph:graph
|
||||
@endsphinxtab
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
@sphinxtab{Python}
|
||||
@snippet docs/snippets/ngraph.py ngraph:graph
|
||||
@endsphinxtab
|
||||
.. doxygensnippet:: docs/snippets/ngraph.cpp
|
||||
:language: cpp
|
||||
:fragment: ngraph:graph
|
||||
|
||||
@endsphinxtabset
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
## API 2.0
|
||||
.. doxygensnippet:: docs/snippets/ngraph.py
|
||||
:language: python
|
||||
:fragment: ngraph:graph
|
||||
|
||||
@sphinxtabset
|
||||
|
||||
@sphinxtab{C++}
|
||||
@snippet docs/snippets/ov_graph.cpp ov:graph
|
||||
@endsphinxtab
|
||||
API 2.0
|
||||
####################
|
||||
|
||||
@sphinxtab{Python}
|
||||
@snippet docs/snippets/ov_graph.py ov:graph
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
.. tab-set::
|
||||
|
||||
## Additional Resources
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
- [Hello Model Creation C++ Sample](../../../samples/cpp/model_creation_sample/README.md)
|
||||
- [Hello Model Creation Python Sample](../../../samples/python/model_creation_sample/README.md)
|
||||
.. doxygensnippet:: docs/snippets/ov_graph.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:graph
|
||||
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_graph.py
|
||||
:language: python
|
||||
:fragment: ov:graph
|
||||
|
||||
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
* :doc:`Hello Model Creation C++ Sample <openvino_inference_engine_samples_model_creation_sample_README>`
|
||||
* :doc:`Hello Model Creation Python Sample <openvino_inference_engine_ie_bridges_python_sample_model_creation_sample_README>`
|
||||
|
||||
@endsphinxdirective
|
||||
|
@@ -10,162 +10,188 @@
|
||||
openvino_docs_OV_UG_Layout_Overview
|
||||
openvino_docs_OV_UG_Preprocess_Usecase_save
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
## Introduction
|
||||
Introduction
|
||||
####################
|
||||
|
||||
When input data does not fit the model input tensor perfectly, additional operations/steps are needed to transform the data to the format expected by the model. These operations are known as "preprocessing".
|
||||
|
||||
### Example
|
||||
Consider the following standard example: deep learning model expects input with the `{1, 3, 224, 224}` shape, `FP32` precision, `RGB` color channels order, and it requires data normalization (subtract mean and divide by scale factor). However, there is just a `640x480` `BGR` image (data is `{480, 640, 3}`). This means that the following operations must be performed:
|
||||
- Convert `U8` buffer to `FP32`.
|
||||
- Transform to `planar` format: from `{1, 480, 640, 3}` to `{1, 3, 480, 640}`.
|
||||
- Resize image from 640x480 to 224x224.
|
||||
- Make `BGR->RGB` conversion as model expects `RGB`.
|
||||
- For each pixel, subtract mean values and divide by scale factor.
|
||||
Example
|
||||
++++++++++++++++++++
|
||||
|
||||
Consider the following standard example: a deep learning model expects input with the ``{1, 3, 224, 224}`` shape, ``FP32`` precision, ``RGB`` color channel order, and it requires data normalization (subtract mean and divide by scale factor). However, there is just a ``640x480`` ``BGR`` image (data is ``{480, 640, 3}``). This means that the following operations must be performed:
|
||||
|
||||
* Convert ``U8`` buffer to ``FP32``.
|
||||
* Transform to ``planar`` format: from ``{1, 480, 640, 3}`` to ``{1, 3, 480, 640}``.
|
||||
* Resize image from 640x480 to 224x224.
|
||||
* Make ``BGR->RGB`` conversion as model expects ``RGB``.
|
||||
* For each pixel, subtract mean values and divide by scale factor.
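The listed transformations (except resize, omitted for brevity) can be sketched in plain Python on a tiny 2x2 image. This is an illustration of the arithmetic only, not the OpenVINO API; the pixel, mean, and scale values are hypothetical.

```python
# A tiny 2x2 BGR U8 "image" in HWC layout (hypothetical values, for illustration).
img_hwc_bgr = [
    [[10, 20, 30], [40, 50, 60]],
    [[70, 80, 90], [100, 110, 120]],
]

# 1. Convert U8 -> FP32.
img = [[[float(c) for c in px] for px in row] for row in img_hwc_bgr]

# 2. BGR -> RGB: reverse the channel order of every pixel.
img = [[px[::-1] for px in row] for row in img]

# 3. Per-channel mean/scale normalization (illustrative values).
mean = [100.5, 101.0, 101.5]
scale = [50.0, 51.0, 52.0]
img = [[[(px[c] - mean[c]) / scale[c] for c in range(3)] for px in row]
       for row in img]

# 4. HWC -> CHW ("planar") transpose, i.e. {H, W, C} -> {C, H, W}.
chw = [[[img[h][w][c] for w in range(2)] for h in range(2)] for c in range(3)]
```

The Preprocessing API performs the same arithmetic, but as graph operations on the selected device instead of application code.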
|
||||
|
||||
|
||||

|
||||
.. image:: _static/images/preprocess_not_fit.png
|
||||
|
||||
|
||||
Even though it is relatively easy to implement all these steps in the application code manually, before actual inference, it is also possible with the use of Preprocessing API. Advantages of using the API are:
|
||||
- Preprocessing API is easy to use.
|
||||
- Preprocessing steps will be integrated into execution graph and will be performed on selected device (CPU/GPU/etc.) rather than always being executed on CPU. This will improve selected device utilization which is always good.
|
||||
|
||||
## Preprocessing API
|
||||
* Preprocessing API is easy to use.
|
||||
* Preprocessing steps will be integrated into the execution graph and performed on the selected device (CPU/GPU/etc.) rather than always being executed on the CPU. This improves utilization of the selected device.
|
||||
|
||||
Preprocessing API
|
||||
####################
|
||||
|
||||
Intuitively, the preprocessing API consists of the following parts:
|
||||
1. **Tensor** - declares user data format, like shape, [layout](./layout_overview.md), precision, color format from actual user's data.
|
||||
2. **Steps** - describes sequence of preprocessing steps which need to be applied to user data.
|
||||
3. **Model** - specifies model data format. Usually, precision and shape are already known for model, only additional information, like [layout](./layout_overview.md) can be specified.
|
||||
|
||||
> **NOTE**: Graph modifications of a model shall be performed after the model is read from a drive and **before** it is loaded on the actual device.
|
||||
1. **Tensor** - declares user data format, like shape, :doc:`layout <openvino_docs_OV_UG_Layout_Overview>`, precision, color format from actual user's data.
|
||||
2. **Steps** - describes sequence of preprocessing steps which need to be applied to user data.
|
||||
3. **Model** - specifies model data format. Usually, precision and shape are already known for model, only additional information, like :doc:`layout <openvino_docs_OV_UG_Layout_Overview>` can be specified.
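The three parts above can be pictured as a simple data structure. The class and field names below are illustrative only and do not correspond to the real OpenVINO classes:

```python
from dataclasses import dataclass, field

@dataclass
class TensorInfo:      # part 1: the user's data format
    shape: tuple
    layout: str
    precision: str
    color_format: str

@dataclass
class ModelInfo:       # part 3: the model-side data format
    layout: str

@dataclass
class PreprocessSpec:
    tensor: TensorInfo                         # what the user provides
    steps: list = field(default_factory=list)  # part 2: ordered step names
    model: ModelInfo = None                    # what the model expects

# Hypothetical configuration matching the running example in this guide.
spec = PreprocessSpec(
    tensor=TensorInfo((1, 480, 640, 3), "NHWC", "U8", "BGR"),
    steps=["convert_element_type", "convert_color", "resize", "mean", "scale"],
    model=ModelInfo("NCHW"),
)
```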
|
||||
|
||||
### PrePostProcessor Object
|
||||
.. note::
|
||||
|
||||
The `ov::preprocess::PrePostProcessor` class allows specifying preprocessing and postprocessing steps for a model read from disk.
|
||||
Graph modifications of a model shall be performed after the model is read from a drive and **before** it is loaded on the actual device.
|
||||
|
||||
@sphinxtabset
|
||||
PrePostProcessor Object
|
||||
+++++++++++++++++++++++
|
||||
|
||||
@sphinxtab{C++}
|
||||
The `ov::preprocess::PrePostProcessor <classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ class allows specifying preprocessing and postprocessing steps for a model read from disk.
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.cpp ov:preprocess:create
|
||||
.. tab-set::
|
||||
|
||||
@endsphinxtab
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
@sphinxtab{Python}
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:preprocess:create
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.py ov:preprocess:create
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
@endsphinxtab
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
|
||||
:language: python
|
||||
:fragment: ov:preprocess:create
|
||||
|
||||
@endsphinxtabset
|
||||
|
||||
### Declare User's Data Format
|
||||
Declare User's Data Format
|
||||
++++++++++++++++++++++++++
|
||||
|
||||
To address particular input of a model/preprocessor, use the `ov::preprocess::PrePostProcessor::input(input_name)` method.
|
||||
To address particular input of a model/preprocessor, use the ``ov::preprocess::PrePostProcessor::input(input_name)`` method.
|
||||
|
||||
@sphinxtabset
|
||||
|
||||
@sphinxtab{C++}
|
||||
.. tab-set::
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.cpp ov:preprocess:tensor
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
@endsphinxtab
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:preprocess:tensor
|
||||
|
||||
@sphinxtab{Python}
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.py ov:preprocess:tensor
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
|
||||
:language: python
|
||||
:fragment: ov:preprocess:tensor
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
|
||||
Below is all the specified input information:
|
||||
- Precision is `U8` (unsigned 8-bit integer).
|
||||
- Data represents tensor with the `{1,480,640,3}` shape.
|
||||
- [Layout](./layout_overview.md) is "NHWC". It means: `height=480`, `width=640`, `channels=3`'.
|
||||
- Color format is `BGR`.
|
||||
|
||||
@anchor declare_model_s_layout
|
||||
### Declaring Model Layout
|
||||
|
||||
Model input already has information about precision and shape. Preprocessing API is not intended to modify this. The only thing that may be specified is input data [layout](./layout_overview.md)
|
||||
|
||||
@sphinxtabset
|
||||
|
||||
@sphinxtab{C++}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.cpp ov:preprocess:model
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Python}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.py ov:preprocess:model
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
* Precision is ``U8`` (unsigned 8-bit integer).
|
||||
* Data represents tensor with the ``{1,480,640,3}`` shape.
|
||||
* :doc:`Layout <openvino_docs_OV_UG_Layout_Overview>` is "NHWC". It means: ``height=480``, ``width=640``, ``channels=3``.
|
||||
* Color format is ``BGR``.
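As a stdlib-only sketch of what the layout string contributes, a small helper (hypothetical, not part of OpenVINO) can pair each layout letter with the dimension at the same position in the shape:

```python
def dims_from_layout(layout: str, shape: tuple) -> dict:
    # Pair each layout letter ('N', 'H', 'W', 'C') with the dimension
    # at the same position in the shape.
    if len(layout) != len(shape):
        raise ValueError("layout and shape rank mismatch")
    return {axis: dim for axis, dim in zip(layout, shape)}

d = dims_from_layout("NHWC", (1, 480, 640, 3))
# height=480, width=640, channels=3, as stated above
```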
|
||||
|
||||
|
||||
Now, if the model input has `{1,3,224,224}` shape, preprocessing will be able to identify the `height=224`, `width=224`, and `channels=3` of that model. The `height`/`width` information is necessary for `resize`, and `channels` is needed for mean/scale normalization.
|
||||
.. _declare_model_s_layout:
|
||||
|
||||
### Preprocessing Steps
|
||||
Declaring Model Layout
|
||||
++++++++++++++++++++++
|
||||
|
||||
Model input already has information about precision and shape. Preprocessing API is not intended to modify this. The only thing that may be specified is the input data :doc:`layout <openvino_docs_OV_UG_Layout_Overview>`.
|
||||
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:preprocess:model
|
||||
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
|
||||
:language: python
|
||||
:fragment: ov:preprocess:model
|
||||
|
||||
|
||||
Now, if the model input has ``{1,3,224,224}`` shape, preprocessing will be able to identify the ``height=224``, ``width=224``, and ``channels=3`` of that model. The ``height``/ ``width`` information is necessary for ``resize``, and ``channels`` is needed for mean/scale normalization.
|
||||
|
||||
Preprocessing Steps
|
||||
++++++++++++++++++++
|
||||
|
||||
Now, the sequence of preprocessing steps can be defined:
|
||||
|
||||
@sphinxtabset
|
||||
|
||||
@sphinxtab{C++}
|
||||
.. tab-set::
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.cpp ov:preprocess:steps
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
@endsphinxtab
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:preprocess:steps
|
||||
|
||||
@sphinxtab{Python}
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.py ov:preprocess:steps
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
|
||||
:language: python
|
||||
:fragment: ov:preprocess:steps
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
|
||||
Perform the following:
|
||||
|
||||
1. Convert `U8` to `FP32` precision.
|
||||
2. Convert current color format from `BGR` to `RGB`.
|
||||
3. Resize to `height`/`width` of a model. Be aware that if a model accepts dynamic size e.g., `{?, 3, ?, ?}`, `resize` will not know how to resize the picture. Therefore, in this case, target `height`/`width` should be specified. For more details, see also the `ov::preprocess::PreProcessSteps::resize()`.
|
||||
4. Subtract mean from each channel. In this step, color format is already `RGB`, so `100.5` will be subtracted from each `Red` component, and `101.5` will be subtracted from each `Blue` one.
|
||||
5. Divide each pixel data to appropriate scale value. In this example, each `Red` component will be divided by 50, `Green` by 51, and `Blue` by 52 respectively.
|
||||
6. Keep in mind that the last `convert_layout` step is commented out as it is not necessary to specify the last layout conversion. The `PrePostProcessor` will do such conversion automatically.
|
||||
1. Convert ``U8`` to ``FP32`` precision.
|
||||
2. Convert current color format from ``BGR`` to ``RGB``.
|
||||
3. Resize to ``height``/ ``width`` of a model. Be aware that if a model accepts dynamic size e.g., ``{?, 3, ?, ?}``, ``resize`` will not know how to resize the picture. Therefore, in this case, target ``height``/ ``width`` should be specified. For more details, see also the `ov::preprocess::PreProcessSteps::resize() <classov_1_1preprocess_1_1PreProcessSteps.html#doxid-classov-1-1preprocess-1-1-pre-process-steps-1a40dab78be1222fee505ed6a13400efe6>`__.
|
||||
4. Subtract mean from each channel. In this step, color format is already ``RGB``, so ``100.5`` will be subtracted from each ``Red`` component, and ``101.5`` will be subtracted from each ``Blue`` one.
|
||||
5. Divide each pixel value by the appropriate scale value. In this example, each ``Red`` component will be divided by 50, ``Green`` by 51, and ``Blue`` by 52, respectively.
|
||||
6. Keep in mind that the last ``convert_layout`` step is commented out as it is not necessary to specify the last layout conversion. The ``PrePostProcessor`` will do such conversion automatically.
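The caveat in step 3 about dynamic shapes can be illustrated with a stdlib-only nearest-neighbor resize sketch (not the OpenVINO implementation): the target sizes must come from somewhere, and a dynamic model shape provides none.

```python
def resize_nearest(img, out_h, out_w):
    # img is an H x W x C nested list. With a dynamic model shape such as
    # {?, 3, ?, ?} the target out_h/out_w cannot be inferred from the model
    # and must be given explicitly.
    in_h, in_w = len(img), len(img[0])
    return [[img[h * in_h // out_h][w * in_w // out_w] for w in range(out_w)]
            for h in range(out_h)]

# Upscale a 1x2 image to 2x2 (illustrative values).
small = resize_nearest([[[0, 0, 0], [255, 255, 255]]], 2, 2)
```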
|
||||
|
||||
### Integrating Steps into a Model
|
||||
Integrating Steps into a Model
|
||||
++++++++++++++++++++++++++++++
|
||||
|
||||
Once the preprocessing steps have been finished the model can be finally built. It is possible to display `PrePostProcessor` configuration for debugging purposes:
|
||||
|
||||
@sphinxtabset
|
||||
|
||||
@sphinxtab{C++}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.cpp ov:preprocess:build
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Python}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.py ov:preprocess:build
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
Once the preprocessing steps have been finished, the model can finally be built. It is possible to display the ``PrePostProcessor`` configuration for debugging purposes:
|
||||
|
||||
|
||||
The `model` will accept `U8` input with the shape of `{1, 480, 640, 3}` and the `BGR` channel order. All conversion steps will be integrated into the execution graph. Now, model can be loaded on the device and the image can be passed to the model without any data manipulation in the application.
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:preprocess:build
|
||||
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
|
||||
:language: python
|
||||
:fragment: ov:preprocess:build
|
||||
|
||||
|
||||
## Additional Resources
|
||||
The ``model`` will accept ``U8`` input with the shape of ``{1, 480, 640, 3}`` and the ``BGR`` channel order. All conversion steps will be integrated into the execution graph. Now, the model can be loaded on the device and the image can be passed to it without any data manipulation in the application.
|
||||
|
||||
* [Preprocessing Details](@ref openvino_docs_OV_UG_Preprocessing_Details)
|
||||
* [Layout API overview](@ref openvino_docs_OV_UG_Layout_Overview)
|
||||
* <code>ov::preprocess::PrePostProcessor</code> C++ class documentation
|
||||
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
* :doc:`Preprocessing Details <openvino_docs_OV_UG_Preprocessing_Details>`
|
||||
* :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`
|
||||
* `ov::preprocess::PrePostProcessor <classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ C++ class documentation
|
||||
|
||||
@endsphinxdirective
|
||||
|
@@ -1,84 +1,111 @@
|
||||
# Use Case - Integrate and Save Preprocessing Steps Into IR {#openvino_docs_OV_UG_Preprocess_Usecase_save}
|
||||
|
||||
@sphinxdirective
|
||||
|
||||
Previous sections covered the topic of the [preprocessing steps](@ref openvino_docs_OV_UG_Preprocessing_Details) and the overview of [Layout](@ref openvino_docs_OV_UG_Layout_Overview) API.
|
||||
Previous sections covered the topic of the :doc:`preprocessing steps <openvino_docs_OV_UG_Preprocessing_Details>`
|
||||
and the overview of :doc:`Layout <openvino_docs_OV_UG_Layout_Overview>` API.
|
||||
|
||||
For many applications, it is also important to minimize read/load time of a model. Therefore, performing integration of preprocessing steps every time on application startup, after `ov::runtime::Core::read_model`, may seem inconvenient. In such cases, once pre and postprocessing steps have been added, it can be useful to store new execution model to OpenVINO Intermediate Representation (OpenVINO IR, `.xml` format).
|
||||
For many applications, it is also important to minimize read/load time of a model.
|
||||
Therefore, performing integration of preprocessing steps every time on application
|
||||
startup, after ``ov::runtime::Core::read_model``, may seem inconvenient. In such cases,
|
||||
once pre and postprocessing steps have been added, it can be useful to store new execution
|
||||
model to OpenVINO Intermediate Representation (OpenVINO IR, ``.xml`` format).
|
||||
|
||||
Most available preprocessing steps can also be performed via command-line options, using Model Optimizer. For details on such command-line options, refer to the [Optimizing Preprocessing Computation](../MO_DG/prepare_model/Additional_Optimizations.md).
|
||||
Most available preprocessing steps can also be performed via command-line options,
|
||||
using Model Optimizer. For details on such command-line options, refer to the
|
||||
:doc:`Optimizing Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`.
|
||||
|
||||
## Code example - Saving Model with Preprocessing to OpenVINO IR
|
||||
Code example - Saving Model with Preprocessing to OpenVINO IR
|
||||
#############################################################
|
||||
|
||||
When some preprocessing steps cannot be integrated into the execution graph using Model Optimizer command-line options (for example, `YUV`->`RGB` color space conversion, `Resize`, etc.), it is possible to write a simple code which:
|
||||
- Reads the original model (OpenVINO IR, TensorFlow, ONNX, PaddlePaddle).
|
||||
- Adds the preprocessing/postprocessing steps.
|
||||
- Saves resulting model as IR (`.xml` and `.bin`).
|
||||
When some preprocessing steps cannot be integrated into the execution graph using
|
||||
Model Optimizer command-line options (for example, ``YUV``->``RGB`` color space conversion,
|
||||
``Resize``, etc.), it is possible to write simple code which:
|
||||
|
||||
Consider the example, where an original ONNX model takes one `float32` input with the `{1, 3, 224, 224}` shape, the `RGB` channel order, and mean/scale values applied. In contrast, the application provides `BGR` image buffer with a non-fixed size and input images as batches of two. Below is the model conversion code that can be applied in the model preparation script for such a case.
|
||||
* Reads the original model (OpenVINO IR, TensorFlow, ONNX, PaddlePaddle).
|
||||
* Adds the preprocessing/postprocessing steps.
|
||||
* Saves resulting model as IR (``.xml`` and ``.bin``).
|
||||
|
||||
- Includes / Imports
|
||||
Consider the example, where an original ONNX model takes one ``float32`` input with the
|
||||
``{1, 3, 224, 224}`` shape, the ``RGB`` channel order, and mean/scale values applied.
|
||||
In contrast, the application provides ``BGR`` image buffer with a non-fixed size and
|
||||
input images as batches of two. Below is the model conversion code that can be applied
|
||||
in the model preparation script for such a case.
|
||||
|
||||
@sphinxtabset
|
||||
|
||||
@sphinxtab{C++}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.cpp ov:preprocess:save_headers
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Python}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.py ov:preprocess:save_headers
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
|
||||
- Preprocessing & Saving to the OpenVINO IR code.
|
||||
|
||||
@sphinxtabset
|
||||
|
||||
@sphinxtab{C++}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.cpp ov:preprocess:save
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Python}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.py ov:preprocess:save
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
* Includes / Imports
|
||||
|
||||
|
||||
## Application Code - Load Model to Target Device
|
||||
.. tab-set::
|
||||
|
||||
After this, the application code can load a saved file and stop preprocessing. In this case, enable [model caching](./Model_caching_overview.md) to minimize load time when the cached model is available.
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
@sphinxtabset
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:preprocess:save_headers
|
||||
|
||||
@sphinxtab{C++}
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.cpp ov:preprocess:save_load
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Python}
|
||||
|
||||
@snippet docs/snippets/ov_preprocessing.py ov:preprocess:save_load
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
|
||||
:language: python
|
||||
:fragment: ov:preprocess:save_headers
|
||||
|
||||
|
||||
## Additional Resources
|
||||
* [Preprocessing Details](@ref openvino_docs_OV_UG_Preprocessing_Details)
|
||||
* [Layout API overview](@ref openvino_docs_OV_UG_Layout_Overview)
|
||||
* [Model Optimizer - Optimize Preprocessing Computation](../MO_DG/prepare_model/Additional_Optimizations.md)
|
||||
* [Model Caching Overview](./Model_caching_overview.md)
|
||||
* The `ov::preprocess::PrePostProcessor` C++ class documentation
|
||||
* The `ov::pass::Serialize` - pass to serialize model to XML/BIN
|
||||
* The `ov::set_batch` - update batch dimension for a given model
|
||||
* Preprocessing & Saving to the OpenVINO IR code.
|
||||
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:preprocess:save
|
||||
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
|
||||
:language: python
|
||||
:fragment: ov:preprocess:save
|
||||
|
||||
|
||||
Application Code - Load Model to Target Device
|
||||
##############################################
|
||||
|
||||
After this, the application code can load the saved file and skip the preprocessing steps. In this case, enable
|
||||
:doc:`model caching <openvino_docs_OV_UG_Model_caching_overview>` to minimize load
|
||||
time when the cached model is available.
|
||||
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.cpp
|
||||
:language: cpp
|
||||
:fragment: ov:preprocess:save_load
|
||||
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_preprocessing.py
|
||||
:language: python
|
||||
:fragment: ov:preprocess:save_load
|
||||
|
||||
|
||||
Additional Resources
|
||||
####################
|
||||
|
||||
* :doc:`Preprocessing Details <openvino_docs_OV_UG_Preprocessing_Details>`
|
||||
* :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`
|
||||
* :doc:`Model Optimizer - Optimize Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`
|
||||
* :doc:`Model Caching Overview <openvino_docs_OV_UG_Model_caching_overview>`
|
||||
* The `ov::preprocess::PrePostProcessor <classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ C++ class documentation
|
||||
* The `ov::pass::Serialize <classov_1_1pass_1_1Serialize.html#doxid-classov-1-1pass-1-1-serialize>`__ - pass to serialize model to XML/BIN
|
||||
* The `ov::set_batch <namespaceov.html#doxid-namespaceov-1a3314e2ff91fcc9ffec05b1a77c37862b>`__ - update batch dimension for a given model
|
||||
|
||||
@endsphinxdirective
|
||||
|
@@ -11,9 +11,7 @@ target_link_libraries(${TARGET_NAME} PRIVATE openvino_c commonTestUtils gtest_ma
|
||||
|
||||
target_compile_definitions(${TARGET_NAME}
|
||||
PRIVATE
|
||||
$<$<BOOL:${ENABLE_GAPI_PREPROCESSING}>:ENABLE_GAPI_PREPROCESSING>
|
||||
DATA_PATH=\"${DATA_PATH}\"
|
||||
MODELS_PATH=\"${MODELS_PATH}\")
|
||||
$<$<BOOL:${ENABLE_GAPI_PREPROCESSING}>:ENABLE_GAPI_PREPROCESSING>)
|
||||
|
||||
if(ENABLE_AUTO OR ENABLE_MULTI)
|
||||
add_dependencies(${TARGET_NAME} openvino_auto_plugin)
|
||||
@@ -55,11 +53,6 @@ target_link_libraries(${TARGET_NAME} PRIVATE openvino_c openvino::util
|
||||
target_include_directories(${TARGET_NAME} PUBLIC
|
||||
$<BUILD_INTERFACE:${OPENVINO_API_SOURCE_DIR}/include>)
|
||||
|
||||
target_compile_definitions(${TARGET_NAME}
|
||||
PRIVATE
|
||||
DATA_PATH=\"${DATA_PATH}\"
|
||||
MODELS_PATH=\"${MODELS_PATH}\")
|
||||
|
||||
if(TARGET OpenCL::OpenCL)
|
||||
target_link_libraries(${TARGET_NAME} PRIVATE OpenCL::OpenCL)
|
||||
endif()
|
||||
|
File diff suppressed because it is too large
@@ -6,18 +6,26 @@
|
||||
|
||||
namespace {
|
||||
|
||||
class ov_compiled_model : public ::testing::TestWithParam<std::string> {};
|
||||
class ov_compiled_model_test : public ov_capi_test_base {
|
||||
void SetUp() override {
|
||||
ov_capi_test_base::SetUp();
|
||||
}
|
||||
|
||||
INSTANTIATE_TEST_SUITE_P(device_name, ov_compiled_model, ::testing::Values("CPU"));
|
||||
void TearDown() override {
|
||||
ov_capi_test_base::TearDown();
|
||||
}
|
||||
};
|
||||
|
||||
TEST_P(ov_compiled_model, ov_compiled_model_inputs_size) {
|
||||
INSTANTIATE_TEST_SUITE_P(device_name, ov_compiled_model_test, ::testing::Values("CPU"));
|
||||
|
||||
TEST_P(ov_compiled_model_test, ov_compiled_model_inputs_size) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -33,14 +41,14 @@ TEST_P(ov_compiled_model, ov_compiled_model_inputs_size) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, ov_compiled_model_input) {
|
||||
TEST_P(ov_compiled_model_test, ov_compiled_model_input) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -57,14 +65,14 @@ TEST_P(ov_compiled_model, ov_compiled_model_input) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, ov_compiled_model_input_by_index) {
|
||||
TEST_P(ov_compiled_model_test, ov_compiled_model_input_by_index) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -85,14 +93,14 @@ TEST_P(ov_compiled_model, ov_compiled_model_input_by_index) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, ov_compiled_model_input_by_name) {
|
||||
TEST_P(ov_compiled_model_test, ov_compiled_model_input_by_name) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -113,7 +121,7 @@ TEST_P(ov_compiled_model, ov_compiled_model_input_by_name) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, set_and_get_property) {
|
||||
TEST_P(ov_compiled_model_test, set_and_get_property) {
|
||||
// It seems that all set_property() for CPU plugin are not implement in compiled_model.
|
||||
auto device_name = "MULTI:GPU,CPU";
|
||||
ov_core_t* core = nullptr;
|
||||
@ -128,7 +136,7 @@ TEST_P(ov_compiled_model, set_and_get_property) {
|
||||
}
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -152,14 +160,14 @@ TEST_P(ov_compiled_model, set_and_get_property) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, get_property) {
|
||||
TEST_P(ov_compiled_model_test, get_property) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -176,14 +184,14 @@ TEST_P(ov_compiled_model, get_property) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, create_compiled_model_with_property) {
|
||||
TEST_P(ov_compiled_model_test, create_compiled_model_with_property) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
const char* key = ov_property_key_hint_performance_mode;
|
||||
@ -201,14 +209,14 @@ TEST_P(ov_compiled_model, create_compiled_model_with_property) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, ov_compiled_model_outputs_size) {
|
||||
TEST_P(ov_compiled_model_test, ov_compiled_model_outputs_size) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -224,14 +232,14 @@ TEST_P(ov_compiled_model, ov_compiled_model_outputs_size) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, ov_compiled_model_output) {
|
||||
TEST_P(ov_compiled_model_test, ov_compiled_model_output) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -248,14 +256,14 @@ TEST_P(ov_compiled_model, ov_compiled_model_output) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, ov_compiled_model_output_by_index) {
|
||||
TEST_P(ov_compiled_model_test, ov_compiled_model_output_by_index) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -276,14 +284,14 @@ TEST_P(ov_compiled_model, ov_compiled_model_output_by_index) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, ov_compiled_model_output_by_name) {
|
||||
TEST_P(ov_compiled_model_test, ov_compiled_model_output_by_name) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -291,7 +299,7 @@ TEST_P(ov_compiled_model, ov_compiled_model_output_by_name) {
|
||||
EXPECT_NE(nullptr, compiled_model);
|
||||
|
||||
ov_output_const_port_t* output_port = nullptr;
|
||||
OV_EXPECT_OK(ov_compiled_model_output_by_name(compiled_model, "fc_out", &output_port));
|
||||
OV_EXPECT_OK(ov_compiled_model_output_by_name(compiled_model, "relu", &output_port));
|
||||
EXPECT_NE(nullptr, output_port);
|
||||
|
||||
ov_shape_t shape;
|
||||
@ -304,14 +312,14 @@ TEST_P(ov_compiled_model, ov_compiled_model_output_by_name) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, get_runtime_model) {
|
||||
TEST_P(ov_compiled_model_test, get_runtime_model) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -328,14 +336,14 @@ TEST_P(ov_compiled_model, get_runtime_model) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, get_runtime_model_error_handling) {
|
||||
TEST_P(ov_compiled_model_test, get_runtime_model_error_handling) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -352,14 +360,14 @@ TEST_P(ov_compiled_model, get_runtime_model_error_handling) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, create_infer_request) {
|
||||
TEST_P(ov_compiled_model_test, create_infer_request) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
@ -376,14 +384,14 @@ TEST_P(ov_compiled_model, create_infer_request) {
|
||||
ov_core_free(core);
|
||||
}
|
||||
|
||||
TEST_P(ov_compiled_model, create_infer_request_error_handling) {
|
||||
TEST_P(ov_compiled_model_test, create_infer_request_error_handling) {
|
||||
auto device_name = GetParam();
|
||||
ov_core_t* core = nullptr;
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
ov_model_t* model = nullptr;
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
ov_compiled_model_t* compiled_model = nullptr;
|
||||
|
@@ -22,10 +22,19 @@ TEST(ov_util, ov_get_error_info_check) {
     EXPECT_STREQ(res, str);
 }
 
-class ov_core : public ::testing::TestWithParam<std::string> {};
-INSTANTIATE_TEST_SUITE_P(device_name, ov_core, ::testing::Values("CPU"));
+class ov_core_test : public ov_capi_test_base {
+public:
+    void SetUp() override {
+        ov_capi_test_base::SetUp();
+    }
 
-TEST(ov_core, ov_core_create_with_config) {
+    void TearDown() override {
+        ov_capi_test_base::TearDown();
+    }
+};
+INSTANTIATE_TEST_SUITE_P(device_name, ov_core_test, ::testing::Values("CPU"));
+
+TEST_P(ov_core_test, ov_core_create_with_config) {
     std::string plugins_xml = TestDataHelpers::generate_test_xml_file();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create_with_config(plugins_xml.c_str(), &core));
@@ -34,45 +43,45 @@ TEST(ov_core, ov_core_create_with_config) {
     TestDataHelpers::delete_test_xml_file();
 }
 
-TEST(ov_core, ov_core_create_with_no_config) {
+TEST_P(ov_core_test, ov_core_create_with_no_config) {
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
     ov_core_free(core);
 }
 
-TEST(ov_core, ov_core_read_model) {
+TEST_P(ov_core_test, ov_core_read_model) {
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
 
     ov_model_t* model = nullptr;
-    OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
+    OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
     EXPECT_NE(nullptr, model);
 
     ov_model_free(model);
     ov_core_free(core);
 }
 
-TEST(ov_core, ov_core_read_model_no_bin) {
+TEST_P(ov_core_test, ov_core_read_model_no_bin) {
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
 
     ov_model_t* model = nullptr;
-    OV_EXPECT_OK(ov_core_read_model(core, xml, nullptr, &model));
+    OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), nullptr, &model));
     EXPECT_NE(nullptr, model);
 
     ov_model_free(model);
     ov_core_free(core);
 }
 
-TEST(ov_core, ov_core_read_model_from_memory) {
+TEST_P(ov_core_test, ov_core_read_model_from_memory) {
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
 
-    std::vector<uint8_t> weights_content(content_from_file(bin, true));
+    std::vector<uint8_t> weights_content(content_from_file(bin_file_name.c_str(), true));
 
     ov_tensor_t* tensor = nullptr;
     ov_shape_t shape;
@@ -81,7 +90,7 @@ TEST(ov_core, ov_core_read_model_from_memory) {
     OV_EXPECT_OK(ov_tensor_create_from_host_ptr(ov_element_type_e::U8, shape, weights_content.data(), &tensor));
     EXPECT_NE(nullptr, tensor);
 
-    std::vector<uint8_t> xml_content(content_from_file(xml, false));
+    std::vector<uint8_t> xml_content(content_from_file(xml_file_name.c_str(), false));
     ov_model_t* model = nullptr;
     OV_EXPECT_OK(
         ov_core_read_model_from_memory(core, reinterpret_cast<const char*>(xml_content.data()), tensor, &model));
@@ -93,14 +102,14 @@ TEST(ov_core, ov_core_read_model_from_memory) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_compile_model) {
+TEST_P(ov_core_test, ov_core_compile_model) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
 
     ov_model_t* model = nullptr;
-    OV_EXPECT_OK(ov_core_read_model(core, xml, nullptr, &model));
+    OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), nullptr, &model));
     EXPECT_NE(nullptr, model);
 
     ov_compiled_model_t* compiled_model = nullptr;
@@ -112,14 +121,14 @@ TEST_P(ov_core, ov_core_compile_model) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_compile_model_with_property) {
+TEST_P(ov_core_test, ov_core_compile_model_with_property) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
 
     ov_model_t* model = nullptr;
-    OV_EXPECT_OK(ov_core_read_model(core, xml, nullptr, &model));
+    OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), nullptr, &model));
     EXPECT_NE(nullptr, model);
 
     ov_compiled_model_t* compiled_model = nullptr;
@@ -138,14 +147,14 @@ TEST_P(ov_core, ov_core_compile_model_with_property) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_compile_model_with_property_invalid) {
+TEST_P(ov_core_test, ov_core_compile_model_with_property_invalid) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
 
     ov_model_t* model = nullptr;
-    OV_EXPECT_OK(ov_core_read_model(core, xml, nullptr, &model));
+    OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), nullptr, &model));
     EXPECT_NE(nullptr, model);
 
     ov_compiled_model_t* compiled_model = nullptr;
@@ -159,21 +168,21 @@ TEST_P(ov_core, ov_core_compile_model_with_property_invalid) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_compile_model_from_file) {
+TEST_P(ov_core_test, ov_core_compile_model_from_file) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
 
     ov_compiled_model_t* compiled_model = nullptr;
-    OV_EXPECT_OK(ov_core_compile_model_from_file(core, xml, device_name.c_str(), 0, &compiled_model));
+    OV_EXPECT_OK(ov_core_compile_model_from_file(core, xml_file_name.c_str(), device_name.c_str(), 0, &compiled_model));
     EXPECT_NE(nullptr, compiled_model);
 
     ov_compiled_model_free(compiled_model);
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_property_enum) {
+TEST_P(ov_core_test, ov_core_set_property_enum) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -186,7 +195,7 @@ TEST_P(ov_core, ov_core_set_property_enum) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_property_invalid_number_property_arguments) {
+TEST_P(ov_core_test, ov_core_set_property_invalid_number_property_arguments) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -209,7 +218,7 @@ TEST_P(ov_core, ov_core_set_property_invalid_number_property_arguments) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_property_enum_invalid) {
+TEST_P(ov_core_test, ov_core_set_property_enum_invalid) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -232,7 +241,7 @@ TEST_P(ov_core, ov_core_set_property_enum_invalid) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_and_get_property_enum) {
+TEST_P(ov_core_test, ov_core_set_and_get_property_enum) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -249,7 +258,7 @@ TEST_P(ov_core, ov_core_set_and_get_property_enum) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_and_get_property_bool) {
+TEST_P(ov_core_test, ov_core_set_and_get_property_bool) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -266,7 +275,7 @@ TEST_P(ov_core, ov_core_set_and_get_property_bool) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_and_get_property_bool_invalid) {
+TEST_P(ov_core_test, ov_core_set_and_get_property_bool_invalid) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -284,7 +293,7 @@ TEST_P(ov_core, ov_core_set_and_get_property_bool_invalid) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_get_property) {
+TEST_P(ov_core_test, ov_core_get_property) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -297,7 +306,7 @@ TEST_P(ov_core, ov_core_get_property) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_get_property_str) {
+TEST_P(ov_core_test, ov_core_set_get_property_str) {
 #ifdef __aarch64__
     GTEST_SKIP() << "Skip this test for ARM CPU for now, cause no string property supported";
 #endif
@@ -319,7 +328,7 @@ TEST_P(ov_core, ov_core_set_get_property_str) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_get_property_int) {
+TEST_P(ov_core_test, ov_core_set_get_property_int) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -337,7 +346,7 @@ TEST_P(ov_core, ov_core_set_get_property_int) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_property_int_invalid) {
+TEST_P(ov_core_test, ov_core_set_property_int_invalid) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -353,7 +362,7 @@ TEST_P(ov_core, ov_core_set_property_int_invalid) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_set_multiple_common_properties) {
+TEST_P(ov_core_test, ov_core_set_multiple_common_properties) {
 #ifdef __aarch64__
     GTEST_SKIP() << "Skip this test for ARM CPU for now, cause no string property supported";
 #endif
@@ -404,7 +413,7 @@ TEST_P(ov_core, ov_core_set_multiple_common_properties) {
     ov_core_free(core);
 }
 
-TEST(ov_core, ov_core_get_available_devices) {
+TEST_P(ov_core_test, ov_core_get_available_devices) {
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
@@ -416,7 +425,7 @@ TEST(ov_core, ov_core_get_available_devices) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_compiled_model_export_model) {
+TEST_P(ov_core_test, ov_compiled_model_export_model) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -429,17 +438,17 @@ TEST_P(ov_core, ov_compiled_model_export_model) {
     }
 
     ov_compiled_model_t* compiled_model = nullptr;
-    OV_EXPECT_OK(ov_core_compile_model_from_file(core, xml, device_name.c_str(), 0, &compiled_model));
+    OV_EXPECT_OK(ov_core_compile_model_from_file(core, xml_file_name.c_str(), device_name.c_str(), 0, &compiled_model));
     EXPECT_NE(nullptr, compiled_model);
 
-    std::string export_path = TestDataHelpers::generate_model_path("test_model", "exported_model.blob");
+    std::string export_path = TestDataHelpers::get_exported_blob_file_name();
     OV_EXPECT_OK(ov_compiled_model_export_model(compiled_model, export_path.c_str()));
 
     ov_compiled_model_free(compiled_model);
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_import_model) {
+TEST_P(ov_core_test, ov_core_import_model) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
 
@@ -453,10 +462,10 @@ TEST_P(ov_core, ov_core_import_model) {
     }
 
     ov_compiled_model_t* compiled_model = nullptr;
-    OV_EXPECT_OK(ov_core_compile_model_from_file(core, xml, device_name.c_str(), 0, &compiled_model));
+    OV_EXPECT_OK(ov_core_compile_model_from_file(core, xml_file_name.c_str(), device_name.c_str(), 0, &compiled_model));
    EXPECT_NE(nullptr, compiled_model);
 
-    std::string export_path = TestDataHelpers::generate_model_path("test_model", "exported_model.blob");
+    std::string export_path = TestDataHelpers::get_exported_blob_file_name();
     OV_EXPECT_OK(ov_compiled_model_export_model(compiled_model, export_path.c_str()));
     ov_compiled_model_free(compiled_model);
 
@@ -472,7 +481,7 @@ TEST_P(ov_core, ov_core_import_model) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_get_versions_by_device_name) {
+TEST_P(ov_core_test, ov_core_get_versions_by_device_name) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -495,14 +504,14 @@ const std::vector<std::wstring> test_unicode_postfix_vector = {L"unicode_Яㅎ
                                                                L"그것이정당하다",
                                                                L"АБВГДЕЁЖЗИЙ",
                                                                L"СТУФХЦЧШЩЬЮЯ"};
-TEST(ov_core, ov_core_create_with_config_unicode) {
+TEST_P(ov_core_test, ov_core_create_with_config_unicode) {
     std::string plugins_xml = TestDataHelpers::generate_test_xml_file();
     ov_core_t* core = nullptr;
 
     for (std::size_t index = 0; index < test_unicode_postfix_vector.size(); index++) {
         std::wstring postfix = L"_" + test_unicode_postfix_vector[index];
-        std::wstring plugins_xml_ws = add_unicode_postfix_to_path(plugins_xml, postfix);
-        ASSERT_EQ(true, copy_file(plugins_xml, plugins_xml_ws));
+        std::wstring plugins_xml_ws = add_unicode_postfix_to_path(plugins_xml.c_str(), postfix);
+        ASSERT_EQ(true, copy_file(plugins_xml.c_str(), plugins_xml_ws));
 
         OV_EXPECT_OK(ov_core_create_with_config_unicode(plugins_xml_ws.c_str(), &core));
         EXPECT_NE(nullptr, core);
@@ -512,7 +521,7 @@ TEST(ov_core, ov_core_create_with_config_unicode) {
     TestDataHelpers::delete_test_xml_file();
 }
 
-TEST(ov_core, ov_core_read_model_unicode) {
+TEST_P(ov_core_test, ov_core_read_model_unicode) {
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
     EXPECT_NE(nullptr, core);
@@ -520,11 +529,11 @@ TEST(ov_core, ov_core_read_model_unicode) {
     ov_model_t* model = nullptr;
     for (std::size_t index = 0; index < test_unicode_postfix_vector.size(); index++) {
         std::wstring postfix = L"_" + test_unicode_postfix_vector[index];
-        std::wstring xml_ws = add_unicode_postfix_to_path(xml, postfix);
-        std::wstring bin_ws = add_unicode_postfix_to_path(bin, postfix);
+        std::wstring xml_ws = add_unicode_postfix_to_path(xml_file_name.c_str(), postfix);
+        std::wstring bin_ws = add_unicode_postfix_to_path(bin_file_name.c_str(), postfix);
 
-        ASSERT_EQ(true, copy_file(xml, xml_ws));
-        ASSERT_EQ(true, copy_file(bin, bin_ws));
+        ASSERT_EQ(true, copy_file(xml_file_name.c_str(), xml_ws));
+        ASSERT_EQ(true, copy_file(bin_file_name.c_str(), bin_ws));
 
         OV_EXPECT_OK(ov_core_read_model_unicode(core, xml_ws.c_str(), bin_ws.c_str(), &model));
         EXPECT_NE(nullptr, model);
@@ -537,7 +546,7 @@ TEST(ov_core, ov_core_read_model_unicode) {
     ov_core_free(core);
 }
 
-TEST_P(ov_core, ov_core_compile_model_from_file_unicode) {
+TEST_P(ov_core_test, ov_core_compile_model_from_file_unicode) {
     auto device_name = GetParam();
     ov_core_t* core = nullptr;
     OV_EXPECT_OK(ov_core_create(&core));
@@ -546,10 +555,10 @@ TEST_P(ov_core, ov_core_compile_model_from_file_unicode) {
     ov_compiled_model_t* compiled_model = nullptr;
     for (std::size_t index = 0; index < test_unicode_postfix_vector.size(); index++) {
         std::wstring postfix = L"_" + test_unicode_postfix_vector[index];
-        std::wstring xml_ws = add_unicode_postfix_to_path(xml, postfix);
-        std::wstring bin_ws = add_unicode_postfix_to_path(bin, postfix);
-        ASSERT_EQ(true, copy_file(xml, xml_ws));
-        ASSERT_EQ(true, copy_file(bin, bin_ws));
+        std::wstring xml_ws = add_unicode_postfix_to_path(xml_file_name.c_str(), postfix);
+        std::wstring bin_ws = add_unicode_postfix_to_path(bin_file_name.c_str(), postfix);
+        ASSERT_EQ(true, copy_file(xml_file_name.c_str(), xml_ws));
+        ASSERT_EQ(true, copy_file(bin_file_name.c_str(), bin_ws));
 
         OV_EXPECT_OK(
             ov_core_compile_model_from_file_unicode(core, xml_ws.c_str(), device_name.c_str(), 0, &compiled_model));
@ -30,7 +30,7 @@ inline void get_tensor_info(ov_model_t* model, bool input, char** name, ov_shape
|
||||
ov_output_const_port_free(port);
|
||||
}
|
||||
|
||||
class ov_infer_request : public ::testing::TestWithParam<std::string> {
|
||||
class ov_infer_request_test : public ov_capi_test_base {
|
||||
protected:
|
||||
void SetUp() override {
|
||||
auto device_name = GetParam();
|
||||
@ -43,11 +43,12 @@ protected:
|
||||
infer_request = nullptr;
|
||||
input_const_port = nullptr;
|
||||
input_port = nullptr;
|
||||
ov_capi_test_base::SetUp();
|
||||
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
OV_EXPECT_OK(ov_model_const_input(model, &input_const_port));
|
||||
@ -81,6 +82,7 @@ protected:
|
||||
ov_compiled_model_free(compiled_model);
|
||||
ov_model_free(model);
|
||||
ov_core_free(core);
|
||||
ov_capi_test_base::TearDown();
|
||||
}
|
||||
|
||||
public:
|
||||
@ -97,11 +99,11 @@ public:
|
||||
static bool ready;
|
||||
static std::condition_variable condVar;
|
||||
};
|
||||
bool ov_infer_request::ready = false;
|
||||
std::mutex ov_infer_request::m;
|
||||
std::condition_variable ov_infer_request::condVar;
|
||||
bool ov_infer_request_test::ready = false;
|
||||
std::mutex ov_infer_request_test::m;
|
||||
std::condition_variable ov_infer_request_test::condVar;
|
||||
|
||||
class ov_infer_request_ppp : public ::testing::TestWithParam<std::string> {
|
||||
class ov_infer_request_ppp : public ov_capi_test_base {
|
||||
protected:
|
||||
void SetUp() override {
|
||||
auto device_name = GetParam();
|
||||
@ -118,11 +120,12 @@ protected:
|
||||
ov_layout_t* model_layout = nullptr;
|
||||
compiled_model = nullptr;
|
||||
infer_request = nullptr;
|
||||
ov_capi_test_base::SetUp();
|
||||
|
||||
OV_EXPECT_OK(ov_core_create(&core));
|
||||
EXPECT_NE(nullptr, core);
|
||||
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
|
||||
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
|
||||
EXPECT_NE(nullptr, model);
|
||||
|
||||
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
|
||||
@ -182,6 +185,7 @@ protected:
|
||||
ov_preprocess_prepostprocessor_free(preprocess);
|
||||
ov_model_free(model);
|
||||
ov_core_free(core);
|
||||
ov_capi_test_base::TearDown();
|
||||
}
|
||||
|
||||
public:
|
||||
@ -198,83 +202,83 @@ public:
|
||||
ov_preprocess_input_model_info_t* input_model;
|
||||
};
|
||||
|
||||
INSTANTIATE_TEST_SUITE_P(device_name, ov_infer_request, ::testing::Values("CPU"));
|
||||
INSTANTIATE_TEST_SUITE_P(device_name, ov_infer_request_test, ::testing::Values("CPU"));
|
||||
INSTANTIATE_TEST_SUITE_P(device_name, ov_infer_request_ppp, ::testing::Values("CPU"));
|
||||
|
||||
TEST_P(ov_infer_request, set_tensor) {
TEST_P(ov_infer_request_test, set_tensor) {
OV_EXPECT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, input_tensor));
}

TEST_P(ov_infer_request, set_input_tensor_by_index) {
TEST_P(ov_infer_request_test, set_input_tensor_by_index) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor_by_index(infer_request, 0, input_tensor));
}

TEST_P(ov_infer_request, set_tensor_by_port) {
TEST_P(ov_infer_request_test, set_tensor_by_port) {
OV_EXPECT_OK(ov_infer_request_set_tensor_by_port(infer_request, input_port, input_tensor));
}

TEST_P(ov_infer_request, set_tensor_by_const_port) {
TEST_P(ov_infer_request_test, set_tensor_by_const_port) {
OV_EXPECT_OK(ov_infer_request_set_tensor_by_const_port(infer_request, input_const_port, input_tensor));
}

TEST_P(ov_infer_request, set_input_tensor) {
TEST_P(ov_infer_request_test, set_input_tensor) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor(infer_request, input_tensor));
}

TEST_P(ov_infer_request, set_output_tensor_by_index) {
TEST_P(ov_infer_request_test, set_output_tensor_by_index) {
OV_EXPECT_OK(ov_infer_request_get_output_tensor_by_index(infer_request, 0, &output_tensor));
EXPECT_NE(nullptr, output_tensor);
OV_EXPECT_OK(ov_infer_request_set_output_tensor_by_index(infer_request, 0, output_tensor));
}

TEST_P(ov_infer_request, set_output_tensor) {
TEST_P(ov_infer_request_test, set_output_tensor) {
OV_EXPECT_OK(ov_infer_request_get_output_tensor_by_index(infer_request, 0, &output_tensor));
EXPECT_NE(nullptr, output_tensor);
OV_EXPECT_OK(ov_infer_request_set_output_tensor(infer_request, output_tensor));
}

TEST_P(ov_infer_request, set_tensor_error_handling) {
TEST_P(ov_infer_request_test, set_tensor_error_handling) {
OV_EXPECT_NOT_OK(ov_infer_request_set_tensor(nullptr, in_tensor_name, input_tensor));
OV_EXPECT_NOT_OK(ov_infer_request_set_tensor(infer_request, nullptr, input_tensor));
OV_EXPECT_NOT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, nullptr));
}

TEST_P(ov_infer_request, get_tensor) {
TEST_P(ov_infer_request_test, get_tensor) {
OV_EXPECT_OK(ov_infer_request_get_tensor(infer_request, in_tensor_name, &input_tensor));
EXPECT_NE(nullptr, input_tensor);
}

TEST_P(ov_infer_request, get_input_tensor_by_index) {
TEST_P(ov_infer_request_test, get_input_tensor_by_index) {
OV_EXPECT_OK(ov_infer_request_get_input_tensor_by_index(infer_request, 0, &output_tensor));
}

TEST_P(ov_infer_request, get_tensor_by_const_port) {
TEST_P(ov_infer_request_test, get_tensor_by_const_port) {
OV_EXPECT_OK(ov_infer_request_get_tensor_by_const_port(infer_request, input_const_port, &output_tensor));
}

TEST_P(ov_infer_request, get_tensor_by_port) {
TEST_P(ov_infer_request_test, get_tensor_by_port) {
OV_EXPECT_OK(ov_infer_request_get_tensor_by_port(infer_request, input_port, &output_tensor));
}

TEST_P(ov_infer_request, get_input_tensor) {
TEST_P(ov_infer_request_test, get_input_tensor) {
OV_EXPECT_OK(ov_infer_request_get_input_tensor(infer_request, &output_tensor));
}

TEST_P(ov_infer_request, get_output_tensor_by_index) {
TEST_P(ov_infer_request_test, get_output_tensor_by_index) {
OV_EXPECT_OK(ov_infer_request_get_output_tensor_by_index(infer_request, 0, &output_tensor));
}

TEST_P(ov_infer_request, get_output_tensor) {
TEST_P(ov_infer_request_test, get_output_tensor) {
OV_EXPECT_OK(ov_infer_request_get_output_tensor(infer_request, &output_tensor));
}

TEST_P(ov_infer_request, get_tensor_error_handling) {
TEST_P(ov_infer_request_test, get_tensor_error_handling) {
OV_EXPECT_NOT_OK(ov_infer_request_get_tensor(nullptr, in_tensor_name, &input_tensor));
OV_EXPECT_NOT_OK(ov_infer_request_get_tensor(infer_request, nullptr, &input_tensor));
OV_EXPECT_NOT_OK(ov_infer_request_get_tensor(infer_request, in_tensor_name, nullptr));
}

TEST_P(ov_infer_request, infer) {
TEST_P(ov_infer_request_test, infer) {
OV_EXPECT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, input_tensor));

OV_ASSERT_OK(ov_infer_request_infer(infer_request));
@ -291,7 +295,7 @@ TEST_P(ov_infer_request, infer) {
ov_free(out_tensor_name);
}

TEST_P(ov_infer_request, cancel) {
TEST_P(ov_infer_request_test, cancel) {
OV_EXPECT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, input_tensor));

OV_EXPECT_OK(ov_infer_request_cancel(infer_request));
@ -306,11 +310,11 @@ TEST_P(ov_infer_request_ppp, infer_ppp) {
EXPECT_NE(nullptr, output_tensor);
}

TEST(ov_infer_request, infer_error_handling) {
TEST_P(ov_infer_request_test, infer_error_handling) {
OV_EXPECT_NOT_OK(ov_infer_request_infer(nullptr));
}

TEST_P(ov_infer_request, infer_async) {
TEST_P(ov_infer_request_test, infer_async) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor_by_index(infer_request, 0, input_tensor));

OV_ASSERT_OK(ov_infer_request_start_async(infer_request));
@ -323,7 +327,7 @@ TEST_P(ov_infer_request, infer_async) {
}
}

TEST_P(ov_infer_request, infer_async_wait_for) {
TEST_P(ov_infer_request_test, infer_async_wait_for) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor_by_index(infer_request, 0, input_tensor));

OV_ASSERT_OK(ov_infer_request_start_async(infer_request));
@ -358,12 +362,12 @@ inline void infer_request_callback(void* args) {

ov_tensor_free(out_tensor);

std::lock_guard<std::mutex> lock(ov_infer_request::m);
ov_infer_request::ready = true;
ov_infer_request::condVar.notify_one();
std::lock_guard<std::mutex> lock(ov_infer_request_test::m);
ov_infer_request_test::ready = true;
ov_infer_request_test::condVar.notify_one();
}

TEST_P(ov_infer_request, infer_request_set_callback) {
TEST_P(ov_infer_request_test, infer_request_set_callback) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor_by_index(infer_request, 0, input_tensor));

ov_callback_t callback;
@ -375,14 +379,14 @@ TEST_P(ov_infer_request, infer_request_set_callback) {
OV_ASSERT_OK(ov_infer_request_start_async(infer_request));

if (!HasFatalFailure()) {
std::unique_lock<std::mutex> lock(ov_infer_request::m);
ov_infer_request::condVar.wait(lock, [] {
return ov_infer_request::ready;
std::unique_lock<std::mutex> lock(ov_infer_request_test::m);
ov_infer_request_test::condVar.wait(lock, [] {
return ov_infer_request_test::ready;
});
}
}

TEST_P(ov_infer_request, get_profiling_info) {
TEST_P(ov_infer_request_test, get_profiling_info) {
auto device_name = GetParam();
OV_EXPECT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, input_tensor));

@ -3,13 +3,25 @@
//
#include "ov_test.hpp"

TEST(ov_model, ov_model_const_input) {
class ov_model_test : public ov_capi_test_base {
void SetUp() override {
ov_capi_test_base::SetUp();
}

void TearDown() override {
ov_capi_test_base::TearDown();
}
};

INSTANTIATE_TEST_SUITE_P(device_name, ov_model_test, ::testing::Values(""));

TEST_P(ov_model_test, ov_model_const_input) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_const_port_t* input_port = nullptr;
@ -21,13 +33,13 @@ TEST(ov_model, ov_model_const_input) {
ov_core_free(core);
}

TEST(ov_model, ov_model_const_input_by_name) {
TEST_P(ov_model_test, ov_model_const_input_by_name) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_const_port_t* input_port = nullptr;
@ -43,13 +55,13 @@ TEST(ov_model, ov_model_const_input_by_name) {
ov_core_free(core);
}

TEST(ov_model, ov_model_const_input_by_index) {
TEST_P(ov_model_test, ov_model_const_input_by_index) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_const_port_t* input_port = nullptr;
@ -65,13 +77,13 @@ TEST(ov_model, ov_model_const_input_by_index) {
ov_core_free(core);
}

TEST(ov_model, ov_model_input) {
TEST_P(ov_model_test, ov_model_input) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_port_t* input_port = nullptr;
@ -83,13 +95,13 @@ TEST(ov_model, ov_model_input) {
ov_core_free(core);
}

TEST(ov_model, ov_model_input_by_name) {
TEST_P(ov_model_test, ov_model_input_by_name) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_port_t* input_port = nullptr;
@ -105,13 +117,13 @@ TEST(ov_model, ov_model_input_by_name) {
ov_core_free(core);
}

TEST(ov_model, ov_model_input_by_index) {
TEST_P(ov_model_test, ov_model_input_by_index) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_port_t* input_port = nullptr;
@ -127,13 +139,13 @@ TEST(ov_model, ov_model_input_by_index) {
ov_core_free(core);
}

TEST(ov_model, ov_model_const_output) {
TEST_P(ov_model_test, ov_model_const_output) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_const_port_t* output_port = nullptr;
@ -145,13 +157,13 @@ TEST(ov_model, ov_model_const_output) {
ov_core_free(core);
}

TEST(ov_model, ov_model_const_output_by_index) {
TEST_P(ov_model_test, ov_model_const_output_by_index) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_const_port_t* output_port = nullptr;
@ -167,17 +179,17 @@ TEST(ov_model, ov_model_const_output_by_index) {
ov_core_free(core);
}

TEST(ov_model, ov_model_const_output_by_name) {
TEST_P(ov_model_test, ov_model_const_output_by_name) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_const_port_t* output_port = nullptr;
OV_EXPECT_OK(ov_model_const_output_by_name(model, "fc_out", &output_port));
OV_EXPECT_OK(ov_model_const_output_by_name(model, "relu", &output_port));
EXPECT_NE(nullptr, output_port);

ov_shape_t shape;
@ -189,13 +201,13 @@ TEST(ov_model, ov_model_const_output_by_name) {
ov_core_free(core);
}

TEST(ov_model, ov_model_output) {
TEST_P(ov_model_test, ov_model_output) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_port_t* output_port = nullptr;
@ -207,13 +219,13 @@ TEST(ov_model, ov_model_output) {
ov_core_free(core);
}

TEST(ov_model, ov_model_output_by_index) {
TEST_P(ov_model_test, ov_model_output_by_index) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_port_t* output_port = nullptr;
@ -229,17 +241,17 @@ TEST(ov_model, ov_model_output_by_index) {
ov_core_free(core);
}

TEST(ov_model, ov_model_output_by_name) {
TEST_P(ov_model_test, ov_model_output_by_name) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_port_t* output_port = nullptr;
OV_EXPECT_OK(ov_model_output_by_name(model, "fc_out", &output_port));
OV_EXPECT_OK(ov_model_output_by_name(model, "relu", &output_port));
EXPECT_NE(nullptr, output_port);

ov_shape_t shape;
@ -251,13 +263,13 @@ TEST(ov_model, ov_model_output_by_name) {
ov_core_free(core);
}

TEST(ov_model, ov_model_inputs_size) {
TEST_P(ov_model_test, ov_model_inputs_size) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

size_t input_size;
@ -268,13 +280,13 @@ TEST(ov_model, ov_model_inputs_size) {
ov_core_free(core);
}

TEST(ov_model, ov_model_outputs_size) {
TEST_P(ov_model_test, ov_model_outputs_size) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

size_t output_size;
@ -285,13 +297,13 @@ TEST(ov_model, ov_model_outputs_size) {
ov_core_free(core);
}

TEST(ov_model, ov_model_is_dynamic) {
TEST_P(ov_model_test, ov_model_is_dynamic) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

EXPECT_NO_THROW(ov_model_is_dynamic(model));
@ -300,13 +312,13 @@ TEST(ov_model, ov_model_is_dynamic) {
ov_core_free(core);
}

TEST(ov_model, ov_model_reshape_input_by_name) {
TEST_P(ov_model_test, ov_model_reshape_input_by_name) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_const_port_t* input_port_1 = nullptr;
@ -339,13 +351,13 @@ TEST(ov_model, ov_model_reshape_input_by_name) {
ov_core_free(core);
}

TEST(ov_model, ov_model_reshape) {
TEST_P(ov_model_test, ov_model_reshape) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_const_port_t* input_port_1 = nullptr;
@ -379,13 +391,13 @@ TEST(ov_model, ov_model_reshape) {
ov_core_free(core);
}

TEST(ov_model, ov_model_reshape_by_port_indexes) {
TEST_P(ov_model_test, ov_model_reshape_by_port_indexes) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

size_t port_indexs[] = {0};
@ -409,13 +421,13 @@ TEST(ov_model, ov_model_reshape_by_port_indexes) {
ov_core_free(core);
}

TEST(ov_model, ov_model_reshape_single_input) {
TEST_P(ov_model_test, ov_model_reshape_single_input) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_shape_t shape = {0, nullptr};
@ -437,13 +449,13 @@ TEST(ov_model, ov_model_reshape_single_input) {
ov_core_free(core);
}

TEST(ov_model, ov_model_reshape_by_ports) {
TEST_P(ov_model_test, ov_model_reshape_by_ports) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

ov_output_port_t* input_port_1 = nullptr;
@ -471,13 +483,13 @@ TEST(ov_model, ov_model_reshape_by_ports) {
ov_core_free(core);
}

TEST(ov_model, ov_model_get_friendly_name) {
TEST_P(ov_model_test, ov_model_get_friendly_name) {
ov_core_t* core = nullptr;
OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);

char* friendly_name = nullptr;

@ -3,7 +3,7 @@
//
#include "ov_test.hpp"

class ov_preprocess : public ::testing::Test {
class ov_preprocess_test : public ::testing::Test {
protected:
void SetUp() override {
core = nullptr;
@ -17,10 +17,14 @@ protected:
output_tensor_info = nullptr;
input_model = nullptr;

TestDataHelpers::generate_test_model();
xml_file_name = TestDataHelpers::get_model_xml_file_name();
bin_file_name = TestDataHelpers::get_model_bin_file_name();

OV_EXPECT_OK(ov_core_create(&core));
EXPECT_NE(nullptr, core);

OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
EXPECT_NE(nullptr, model);
}
void TearDown() override {
@ -34,6 +38,7 @@ protected:
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
TestDataHelpers::release_test_model();
}

public:
@ -47,14 +52,15 @@ public:
ov_preprocess_output_info_t* output_info;
ov_preprocess_output_tensor_info_t* output_tensor_info;
ov_preprocess_input_model_info_t* input_model;
std::string xml_file_name, bin_file_name;
};

TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_create) {
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_create) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);
}

TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_input_info) {
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_get_input_info) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -62,7 +68,7 @@ TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_input_info) {
EXPECT_NE(nullptr, input_info);
}

TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_input_info_by_name) {
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_get_input_info_by_name) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -70,7 +76,7 @@ TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_input_info_by_name) {
EXPECT_NE(nullptr, input_info);
}

TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_input_info_by_index) {
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_get_input_info_by_index) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -78,7 +84,7 @@ TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_input_info_by_index) {
EXPECT_NE(nullptr, input_info);
}

TEST_F(ov_preprocess, ov_preprocess_input_info_get_tensor_info) {
TEST_F(ov_preprocess_test, ov_preprocess_input_info_get_tensor_info) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -89,7 +95,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_info_get_tensor_info) {
EXPECT_NE(nullptr, input_tensor_info);
}

TEST_F(ov_preprocess, ov_preprocess_input_info_get_preprocess_steps) {
TEST_F(ov_preprocess_test, ov_preprocess_input_info_get_preprocess_steps) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -100,7 +106,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_info_get_preprocess_steps) {
EXPECT_NE(nullptr, input_process);
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_resize) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_resize) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -113,7 +119,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_resize) {
OV_EXPECT_OK(ov_preprocess_preprocess_steps_resize(input_process, ov_preprocess_resize_algorithm_e::RESIZE_LINEAR));
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_scale) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_scale) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -126,7 +132,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_scale) {
OV_EXPECT_OK(ov_preprocess_preprocess_steps_scale(input_process, 2.0f));
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_mean) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_mean) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -139,7 +145,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_mean) {
OV_EXPECT_OK(ov_preprocess_preprocess_steps_mean(input_process, 2.0f));
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_crop) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_crop) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -154,7 +160,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_crop) {
OV_EXPECT_OK(ov_preprocess_preprocess_steps_crop(input_process, begin, 4, end, 4));
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_convert_layout) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_convert_layout) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -172,7 +178,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_convert_layout) {
ov_layout_free(layout);
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_reverse_channels) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_reverse_channels) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -185,7 +191,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_reverse_channels) {
OV_EXPECT_OK(ov_preprocess_preprocess_steps_reverse_channels(input_process));
}

TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_element_type) {
TEST_F(ov_preprocess_test, ov_preprocess_input_tensor_info_set_element_type) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -198,7 +204,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_element_type) {
OV_EXPECT_OK(ov_preprocess_input_tensor_info_set_element_type(input_tensor_info, ov_element_type_e::F32));
}

TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_from) {
TEST_F(ov_preprocess_test, ov_preprocess_input_tensor_info_set_from) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -218,7 +224,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_from) {
ov_shape_free(&shape);
}

TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_layout) {
TEST_F(ov_preprocess_test, ov_preprocess_input_tensor_info_set_layout) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -236,7 +242,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_layout) {
ov_layout_free(layout);
}

TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_color_format) {
TEST_F(ov_preprocess_test, ov_preprocess_input_tensor_info_set_color_format) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -250,7 +256,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_color_format) {
ov_preprocess_input_tensor_info_set_color_format(input_tensor_info, ov_color_format_e::NV12_SINGLE_PLANE));
}

TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_spatial_static_shape) {
TEST_F(ov_preprocess_test, ov_preprocess_input_tensor_info_set_spatial_static_shape) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -266,7 +272,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_tensor_info_set_spatial_static_shape)
ov_preprocess_input_tensor_info_set_spatial_static_shape(input_tensor_info, input_height, input_width));
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_convert_element_type) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_convert_element_type) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -283,7 +289,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_convert_element_type) {
OV_EXPECT_OK(ov_preprocess_preprocess_steps_convert_element_type(input_process, ov_element_type_e::F32));
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_convert_color) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_convert_color) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -304,7 +310,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_convert_color) {
OV_EXPECT_OK(ov_preprocess_preprocess_steps_convert_color(input_process, ov_color_format_e::BGR));
}

TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_convert_color_rgb_to_gray) {
TEST_F(ov_preprocess_test, ov_preprocess_preprocess_steps_convert_color_rgb_to_gray) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -321,7 +327,7 @@ TEST_F(ov_preprocess, ov_preprocess_preprocess_steps_convert_color_rgb_to_gray)
OV_EXPECT_OK(ov_preprocess_preprocess_steps_convert_color(input_process, ov_color_format_e::GRAY));
}

TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_output_info) {
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_get_output_info) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -329,7 +335,7 @@ TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_output_info) {
EXPECT_NE(nullptr, output_info);
}

TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_output_info_by_index) {
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_get_output_info_by_index) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -337,15 +343,15 @@ TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_output_info_by_index) {
EXPECT_NE(nullptr, output_info);
}

TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_get_output_info_by_name) {
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_get_output_info_by_name) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

OV_EXPECT_OK(ov_preprocess_prepostprocessor_get_output_info_by_name(preprocess, "fc_out", &output_info));
OV_EXPECT_OK(ov_preprocess_prepostprocessor_get_output_info_by_name(preprocess, "relu", &output_info));
EXPECT_NE(nullptr, output_info);
}

TEST_F(ov_preprocess, ov_preprocess_output_info_get_tensor_info) {
TEST_F(ov_preprocess_test, ov_preprocess_output_info_get_tensor_info) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -356,7 +362,7 @@ TEST_F(ov_preprocess, ov_preprocess_output_info_get_tensor_info) {
EXPECT_NE(nullptr, output_tensor_info);
}

TEST_F(ov_preprocess, ov_preprocess_output_set_element_type) {
TEST_F(ov_preprocess_test, ov_preprocess_output_set_element_type) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -369,7 +375,7 @@ TEST_F(ov_preprocess, ov_preprocess_output_set_element_type) {
OV_EXPECT_OK(ov_preprocess_output_set_element_type(output_tensor_info, ov_element_type_e::F32));
}

TEST_F(ov_preprocess, ov_preprocess_input_info_get_model_info) {
TEST_F(ov_preprocess_test, ov_preprocess_input_info_get_model_info) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -380,7 +386,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_info_get_model_info) {
EXPECT_NE(nullptr, input_model);
}

TEST_F(ov_preprocess, ov_preprocess_input_model_info_set_layout) {
TEST_F(ov_preprocess_test, ov_preprocess_input_model_info_set_layout) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);

@ -398,7 +404,7 @@ TEST_F(ov_preprocess, ov_preprocess_input_model_info_set_layout) {
ov_layout_free(layout);
}

TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_build) {
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_build) {
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);
|
||||
|
||||
@ -409,7 +415,7 @@ TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_build) {
|
||||
ov_model_free(new_model);
|
||||
}
|
||||
|
||||
TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_build_apply) {
|
||||
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_build_apply) {
|
||||
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
|
||||
EXPECT_NE(nullptr, preprocess);
|
||||
|
||||
@ -459,7 +465,7 @@ TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_build_apply) {
|
||||
ov_model_free(new_model);
|
||||
}
|
||||
|
||||
TEST_F(ov_preprocess, ov_preprocess_prepostprocessor_for_nv12_input) {
|
||||
TEST_F(ov_preprocess_test, ov_preprocess_prepostprocessor_for_nv12_input) {
|
||||
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
|
||||
EXPECT_NE(nullptr, preprocess);
|
||||
|
||||
|
@ -5,7 +5,7 @@
#include "openvino/runtime/intel_gpu/ocl/ocl_wrapper.hpp"
#include "ov_test.hpp"

class ov_remote_context_ocl : public ::testing::TestWithParam<std::string> {
class ov_remote_context_ocl : public ov_capi_test_base {
protected:
    void SetUp() override {
        core = nullptr;
@ -16,11 +16,12 @@ protected:
        remote_tensor = nullptr;
        out_tensor_name = nullptr;
        in_tensor_name = nullptr;
        ov_capi_test_base::SetUp();

        OV_EXPECT_OK(ov_core_create(&core));
        EXPECT_NE(nullptr, core);

        OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
        OV_EXPECT_OK(ov_core_read_model(core, xml_file_name.c_str(), bin_file_name.c_str(), &model));
        EXPECT_NE(nullptr, model);

        char* info = nullptr;
@ -68,6 +69,7 @@ protected:
        ov_free(in_tensor_name);
        ov_remote_context_free(context);
        ov_core_free(core);
        ov_capi_test_base::TearDown();
    }

public:
@ -3,14 +3,6 @@
//
#include "ov_test.hpp"

#include "test_model_repo.hpp"

std::string xml_std = TestDataHelpers::generate_model_path("test_model", "test_model_fp32.xml");
std::string bin_std = TestDataHelpers::generate_model_path("test_model", "test_model_fp32.bin");

const char* xml = xml_std.c_str();
const char* bin = bin_std.c_str();

std::map<ov_element_type_e, size_t> element_type_size_map = {{ov_element_type_e::BOOLEAN, 8},
                                                             {ov_element_type_e::BF16, 16},
                                                             {ov_element_type_e::F16, 16},
@ -11,11 +11,7 @@

#include "openvino/c/openvino.h"
#include "openvino/openvino.hpp"

extern const char* xml;
extern const char* bin;
extern const char* input_image;
extern const char* input_image_nv12;
#include "test_model_repo.hpp"

#define OV_EXPECT_OK(...) EXPECT_EQ(ov_status_e::OK, __VA_ARGS__)
#define OV_ASSERT_OK(...) ASSERT_EQ(ov_status_e::OK, __VA_ARGS__)
@ -40,6 +36,22 @@ extern const char* input_image_nv12;
extern std::map<ov_element_type_e, size_t> element_type_size_map;
#define GET_ELEMENT_TYPE_SIZE(a) element_type_size_map[a]

class ov_capi_test_base : public ::testing::TestWithParam<std::string> {
public:
    void SetUp() override {
        TestDataHelpers::generate_test_model();
        xml_file_name = TestDataHelpers::get_model_xml_file_name();
        bin_file_name = TestDataHelpers::get_model_bin_file_name();
    }

    void TearDown() override {
        TestDataHelpers::release_test_model();
    }

public:
    std::string xml_file_name, bin_file_name;
};

inline size_t find_device(ov_available_devices_t avai_devices, const char* device_name) {
    for (size_t i = 0; i < avai_devices.size; ++i) {
        if (strstr(avai_devices.devices[i], device_name))
@ -1,51 +1,54 @@
// Copyright (C) 2018-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <fstream>
#include <random>

#include "ngraph_functions/builders.hpp"
#include "ngraph_functions/subgraph_builders.hpp"
#include "openvino/pass/manager.hpp"

namespace TestDataHelpers {

static const char kPathSeparator =
#if defined _WIN32 || defined __CYGWIN__
    '\\';
#else
    '/';
#endif
static const std::string model_bin_name = "test_model.bin";
static const std::string model_xml_name = "test_model.xml";
static const std::string model_exported_name = "test_exported_model.blob";

inline std::string getModelPathNonFatal() noexcept {
    if (const auto envVar = std::getenv("MODELS_PATH")) {
        return envVar;
inline void generate_test_model() {
    ov::pass::Manager manager;
    manager.register_pass<ov::pass::Serialize>(model_xml_name, model_bin_name);
    auto function = ngraph::builder::subgraph::makeConvPoolReluNoReshapes({1, 3, 227, 227});
    manager.run_passes(function);
}

inline std::string get_model_xml_file_name() {
    return model_xml_name;
}

inline std::string get_model_bin_file_name() {
    return model_bin_name;
}

inline std::string get_exported_blob_file_name() {
    return model_exported_name;
}

inline void release_test_model() {
    std::remove(model_xml_name.c_str());
    std::remove(model_bin_name.c_str());
}

inline void fill_random_input_nv12_data(uint8_t* data, const size_t w, const size_t h) {
    size_t size = w * h * 3 / 2;
    std::mt19937 gen(0);
    std::uniform_int_distribution<> distribution(0, 255);
    for (size_t i = 0; i < size; i++) {
        data[i] = static_cast<uint8_t>(distribution(gen));
    }

#ifdef MODELS_PATH
    return MODELS_PATH;
#else
    return "";
#endif
}

inline std::string get_models_path() {
    return getModelPathNonFatal() + kPathSeparator + std::string("models");
};

inline std::string get_data_path() {
    if (const auto envVar = std::getenv("DATA_PATH")) {
        return envVar;
    }

#ifdef DATA_PATH
    return DATA_PATH;
#else
    return "";
#endif
}

inline std::string generate_model_path(std::string dir, std::string filename) {
    return get_models_path() + kPathSeparator + dir + kPathSeparator + filename;
}

inline std::string generate_image_path(std::string dir, std::string filename) {
    return get_data_path() + kPathSeparator + "validation_set" + kPathSeparator + dir + kPathSeparator + filename;
    return;
}

inline std::string generate_test_xml_file() {
@ -0,0 +1,40 @@
// Copyright (C) 2022-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include "openvino/pass/graph_rewrite.hpp"
#include "openvino/pass/pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API TransposeSinkingBinaryForward;
class TRANSFORMATIONS_API TransposeSinkingBinaryBackward;

} // namespace pass
} // namespace ov

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingBinaryForward transformation sinks Transpose through BinaryElementwiseArithmetic,
 * BinaryElementwiseComparison, BinaryElementwiseLogical and PRelu operations in the forward direction.
 */
class ov::pass::TransposeSinkingBinaryForward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingBinaryForward", "0");
    TransposeSinkingBinaryForward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingBinaryBackward transformation sinks Transpose through BinaryElementwiseArithmetic,
 * BinaryElementwiseComparison, BinaryElementwiseLogical and PRelu operations in the backward direction.
 */
class ov::pass::TransposeSinkingBinaryBackward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingBinaryBackward", "0");
    TransposeSinkingBinaryBackward();
};
@ -0,0 +1,40 @@
// Copyright (C) 2022-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include "openvino/pass/graph_rewrite.hpp"
#include "openvino/pass/pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API TransposeSinkingConcatForward;
class TRANSFORMATIONS_API TransposeSinkingConcatBackward;

} // namespace pass
} // namespace ov

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingConcatForward transformation sinks Transpose through Concat operation
 * in the forward direction.
 */
class ov::pass::TransposeSinkingConcatForward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingConcatForward", "0");
    TransposeSinkingConcatForward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingConcatBackward transformation sinks Transpose through Concat operation
 * in the backward direction.
 */
class ov::pass::TransposeSinkingConcatBackward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingConcatBackward", "0");
    TransposeSinkingConcatBackward();
};
@ -0,0 +1,42 @@
// Copyright (C) 2022-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include "openvino/pass/graph_rewrite.hpp"
#include "openvino/pass/pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API TransposeSinkingDataMovementForward;
class TRANSFORMATIONS_API TransposeSinkingDataMovementBackward;

} // namespace pass
} // namespace ov

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingDataMovementForward transformation sinks Transpose through BatchToSpace, SpaceToBatch
 * and Pad operations in the forward direction.
 * These operations are categorized as "DataMovement" and are handled in a similar way in this transformation.
 */
class ov::pass::TransposeSinkingDataMovementForward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingDataMovementForward", "0");
    TransposeSinkingDataMovementForward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingDataMovementBackward transformation sinks Transpose through BatchToSpace, SpaceToBatch
 * and Pad operations in the backward direction.
 * These operations are categorized as "DataMovement" and are handled in a similar way in this transformation.
 */
class ov::pass::TransposeSinkingDataMovementBackward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingDataMovementBackward", "0");
    TransposeSinkingDataMovementBackward();
};
@ -0,0 +1,52 @@
// Copyright (C) 2022-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include "openvino/pass/graph_rewrite.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API TransposeSinkingGeneralForward;
class TRANSFORMATIONS_API TransposeSinkingGeneralBackward;
class TRANSFORMATIONS_API TransposeSinkingGeneral;

} // namespace pass
} // namespace ov

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingGeneralForward transformation combines all TransposeSinkingForward* transformations into
 * single GraphRewrite pass.
 */
class ov::pass::TransposeSinkingGeneralForward : public ov::pass::GraphRewrite {
public:
    OPENVINO_RTTI("TransposeSinkingGeneralForward", "0");
    TransposeSinkingGeneralForward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingGeneralBackward transformation combines all TransposeSinkingBackward* transformations into
 * single GraphRewrite pass.
 */
class ov::pass::TransposeSinkingGeneralBackward : public ov::pass::GraphRewrite {
public:
    OPENVINO_RTTI("TransposeSinkingGeneralBackward", "0");
    TransposeSinkingGeneralBackward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingGeneral transformation combines TransposeSinkingGeneralForward and
 * TransposeSinkingGeneralBackward transformations into single ModelPass pass and inserts
 * ConstantFolding pass after them.
 */
class ov::pass::TransposeSinkingGeneral : public ov::pass::ModelPass {
public:
    OPENVINO_RTTI("TransposeSinkingGeneral", "0");
    bool run_on_model(const std::shared_ptr<ov::Model>& m) override;
};
@ -0,0 +1,40 @@
// Copyright (C) 2022-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include "openvino/pass/graph_rewrite.hpp"
#include "openvino/pass/pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API TransposeSinkingInterpolateForward;
class TRANSFORMATIONS_API TransposeSinkingInterpolateBackward;

} // namespace pass
} // namespace ov

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingInterpolateForward transformation sinks Transpose through Interpolate operation
 * in the forward direction.
 */
class ov::pass::TransposeSinkingInterpolateForward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingInterpolateForward", "0");
    TransposeSinkingInterpolateForward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingInterpolateBackward transformation sinks Transpose through Interpolate operation
 * in the backward direction.
 */
class ov::pass::TransposeSinkingInterpolateBackward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingInterpolateBackward", "0");
    TransposeSinkingInterpolateBackward();
};
@ -0,0 +1,40 @@
// Copyright (C) 2022-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include "openvino/pass/graph_rewrite.hpp"
#include "openvino/pass/pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API TransposeSinkingReductionForward;
class TRANSFORMATIONS_API TransposeSinkingReductionBackward;

} // namespace pass
} // namespace ov

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeReductionForward transformation sinks Transpose through Reduce, Squeeze, Unsqueeze operations
 * in the forward direction.
 */
class ov::pass::TransposeSinkingReductionForward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingReductionForward", "0");
    TransposeSinkingReductionForward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeReductionBackward transformation sinks Transpose through Reduce, Squeeze, Unsqueeze operations
 * in the backward direction.
 */
class ov::pass::TransposeSinkingReductionBackward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingReductionBackward", "0");
    TransposeSinkingReductionBackward();
};
@ -0,0 +1,40 @@
// Copyright (C) 2022-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include "openvino/pass/graph_rewrite.hpp"
#include "openvino/pass/pass.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API TransposeSinkingSplitBackward;
class TRANSFORMATIONS_API TransposeSinkingSplitForward;

} // namespace pass
} // namespace ov

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingSplitForward transformation sinks Transpose through Split, VariadicSplit operations
 * in the forward direction.
 */
class ov::pass::TransposeSinkingSplitForward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingSplitForward", "0");
    TransposeSinkingSplitForward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingSplitBackward transformation sinks Transpose through Split, VariadicSplit operations
 * in the backward direction.
 */
class ov::pass::TransposeSinkingSplitBackward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("ov::pass::TransposeSinkingSplitBackward", "0");
    TransposeSinkingSplitBackward();
};
@ -0,0 +1,39 @@
// Copyright (C) 2022-2023 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include "openvino/pass/graph_rewrite.hpp"
#include "transformations_visibility.hpp"

namespace ov {
namespace pass {

class TRANSFORMATIONS_API TransposeSinkingUnaryForward;
class TRANSFORMATIONS_API TransposeSinkingUnaryBackward;

} // namespace pass
} // namespace ov

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingUnaryForward transformation sinks Transpose through UnaryElementwiseArithmetic, Clamp, Elu,
 * SoftPlus, LogicalNot, Convert, IsInf, IsNaN, IsFinite operations in the forward direction.
 */
class ov::pass::TransposeSinkingUnaryForward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("TransposeSinkingUnaryForward", "0");
    TransposeSinkingUnaryForward();
};

/**
 * @ingroup ie_transformation_common_api
 * @brief TransposeSinkingUnaryBackward transformation sinks Transpose through UnaryElementwiseArithmetic, Clamp, Elu,
 * SoftPlus, LogicalNot, Convert, IsInf, IsNaN, IsFinite in the backward direction.
 */
class ov::pass::TransposeSinkingUnaryBackward : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("TransposeSinkingUnaryBackwardMultiConsumers", "0");
    TransposeSinkingUnaryBackward();
};
@ -121,7 +121,7 @@ public:
    /// \param axis The axis along which the TopK operation should be executed
    /// \param mode Specifies whether TopK selects the largest or the smallest elements from each slice
    /// \param sort Specifies the order of corresponding elements of the output tensor
    /// \param index_element_type Specifies the data type type of of the elements in the 'indices' output tensor.
    /// \param index_element_type Specifies the data type of the elements in the 'indices' output tensor.
    /// \param stable Specifies whether the equivalent elements should maintain their relative order
    /// from the input tensor during sorting.
    TopK(const Output<Node>& data,
@ -139,7 +139,7 @@ public:
    /// \param axis The axis along which the TopK operation should be executed
    /// \param mode Specifies whether TopK selects the largest or the smallest elements from each slice
    /// \param sort Specifies the order of corresponding elements of the output tensor
    /// \param index_element_type Specifies the data type type of of the elements in the 'indices' output tensor.
    /// \param index_element_type Specifies the data type of the elements in the 'indices' output tensor.
    /// \param stable Specifies whether the equivalent elements should maintain their relative order
    /// from the input tensor during sorting.
    TopK(const Output<Node>& data,
@ -153,6 +153,11 @@ public:
    bool visit_attributes(AttributeVisitor& visitor) override;
    std::shared_ptr<Node> clone_with_new_inputs(const OutputVector& new_args) const override;

    OPENVINO_SUPPRESS_DEPRECATED_START
    bool evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const override;
    OPENVINO_SUPPRESS_DEPRECATED_END
    bool has_evaluate() const override;

    bool get_stable() const {
        return m_stable;
    }
@ -14,9 +14,8 @@
namespace ngraph {
namespace runtime {
namespace reference {
// Had to split out these two functions. They used to be lambda expressions but
// MSVC had difficulty compiling. This way is more explicit.
template <typename T, typename U>
// This used to be lambda expressions but MSVC had difficulty compiling it. This way is more explicit.
template <bool D, typename T, typename U>
inline bool compare_max(const std::tuple<T, U>& a, const std::tuple<T, U>& b) {
    // this is intentional to be able to compare floats directly
    // without using relative or absolute tolerance
@ -30,19 +29,19 @@ inline bool compare_max(const std::tuple<T, U>& a, const std::tuple<T, U>& b) {
#if defined(__GNUC__)
# pragma GCC diagnostic pop
#endif
    return a > b;

    if (D)
        return std::get<0>(a) > std::get<0>(b);
    else
        return std::get<0>(a) < std::get<0>(b);
}

template <typename T, typename U>
inline bool compare_min(const std::tuple<T, U>& a, const std::tuple<T, U>& b) {
    return a < b;
}

template <typename T, typename U>
inline bool sort_indices_ascending(const std::tuple<T, U>& a, const std::tuple<T, U>& b) {
inline bool compare_indices_ascending(const std::tuple<T, U>& a, const std::tuple<T, U>& b) {
    return std::get<1>(a) < std::get<1>(b);
}

// TopK reference implementation provides stable indices output
template <typename T, typename U>
void topk(const T* arg,
          U* out_indices,
@ -52,7 +51,7 @@ void topk(const T* arg,
          size_t axis,
          size_t k,
          bool compute_max,
          op::v1::TopK::SortType sort = op::v1::TopK::SortType::NONE) {
          op::TopKSortType sort = op::TopKSortType::NONE) {
    NGRAPH_SUPPRESS_DEPRECATED_START
    using namespace std;
    // reorder source axis visit order and make "axis" inner most
@ -87,25 +86,25 @@ void topk(const T* arg,
        }
        // Sort the temp vector
        if (compute_max) {
            nth_element(workspace.begin(), workspace.begin() + k, workspace.end(), compare_max<T, U>);
            nth_element(workspace.begin(), workspace.begin() + k, workspace.end(), compare_max<true, T, U>);
        } else {
            nth_element(workspace.begin(), workspace.begin() + k, workspace.end(), compare_min<T, U>);
            nth_element(workspace.begin(), workspace.begin() + k, workspace.end(), compare_max<false, T, U>);
        }
        // Write temp vector to output
        switch (sort) {
        case op::v1::TopK::SortType::NONE:
        case op::TopKSortType::NONE:
            break;
        case op::v1::TopK::SortType::SORT_INDICES:
            std::sort(workspace.begin(), workspace.begin() + k, sort_indices_ascending<T, U>);
        case op::TopKSortType::SORT_INDICES:
            std::sort(workspace.begin(), workspace.begin() + k, compare_indices_ascending<T, U>);
            break;
        case op::v1::TopK::SortType::SORT_VALUES:
        case op::TopKSortType::SORT_VALUES:
            if (compute_max)
                std::sort(workspace.begin(), workspace.begin() + k, compare_max<T, U>);
                std::sort(workspace.begin(), workspace.begin() + k, compare_max<true, T, U>);
            else
                std::sort(workspace.begin(), workspace.begin() + k, compare_min<T, U>);
                std::sort(workspace.begin(), workspace.begin() + k, compare_max<false, T, U>);
        }
        for (size_t j = 0; j < k; j++) {
            tuple<T, U> entry = workspace[j];
            const auto& entry = workspace[j];
            out_values[out_index] = get<0>(entry);
            out_indices[out_index] = get<1>(entry);
            out_index += out_axis_stride;
@ -103,6 +103,37 @@ bool evaluate_topk(const HostTensorPtr& arg,
    }
    return rc;
}
bool TopK_evaluate(const ov::op::util::TopKBase* const node,
                   const HostTensorVector& outputs,
                   const HostTensorVector& inputs) {
    const auto& arg_shape = inputs[0]->get_shape();
    const auto axis = normalize_axis(node, node->get_provided_axis(), arg_shape.size());
    const auto compute_max = node->get_mode() == ov::op::TopKMode::MAX;
    const auto sort_type = node->get_sort_type();

    const auto input_shapes = vector<PartialShape>{inputs[0]->get_partial_shape(), inputs[1]->get_partial_shape()};
    const auto constant_data = map<size_t, HostTensorPtr>{{1, inputs[1]}};
    auto output_shape = shape_infer(node, input_shapes, constant_data).front().to_shape();

    if (output_shape[axis] == 0) {
        // the kernel can't handle K (output_shape[axis]) equal 0, use arg_shape[axis] instead.
        output_shape[axis] = arg_shape[axis];
    }

    const size_t k = output_shape[axis];
    OPENVINO_ASSERT(k <= arg_shape[axis], "'K' exceeds the dimension of top_k_axis");

    // TopK reference implementation provides stable indices output so this parameter is not passed on
    return evaluate_topk(inputs[0],
                         outputs[1],
                         outputs[0],
                         output_shape,
                         axis,
                         k,
                         compute_max,
                         sort_type,
                         node->get_index_element_type());
}
} // namespace
} // namespace topk

@ -145,34 +176,7 @@ shared_ptr<Node> op::v1::TopK::clone_with_new_inputs(const OutputVector& new_arg

bool op::v1::TopK::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const {
    OV_OP_SCOPE(v1_TopK_evaluate);
    const auto& arg_shape = inputs[0]->get_shape();
    // 1. get axis, mode (max/min), sort_type
    auto axis = ngraph::normalize_axis(this, m_axis, arg_shape.size());
    auto compute_max = get_mode() == TopKMode::MAX;
    auto sort_type = get_sort_type();

    const auto input_shapes = std::vector<PartialShape>{inputs[0]->get_partial_shape(), inputs[1]->get_partial_shape()};
    const auto constant_data = std::map<size_t, HostTensorPtr>{{1, inputs[1]}};
    auto output_shape = shape_infer(this, input_shapes, constant_data).front().to_shape();

    if (output_shape[axis] == 0) {
        // the kernel can't handle K (output_shape[axis]) equal 0, use arg_shape[axis] instead.
        output_shape[axis] = arg_shape[axis];
    }

    // 2. get value of k
    size_t k = output_shape[axis];
    OPENVINO_ASSERT(k <= arg_shape[axis], "'K' exceeds the dimension of top_k_axis");

    return topk::evaluate_topk(inputs[0],
                               outputs[1],
                               outputs[0],
                               output_shape,
                               axis,
                               k,
                               compute_max,
                               sort_type,
                               get_index_element_type());
    return topk::TopK_evaluate(this, outputs, inputs);
}

bool op::v1::TopK::has_evaluate() const {
@ -245,34 +249,7 @@ shared_ptr<Node> op::v3::TopK::clone_with_new_inputs(const OutputVector& new_arg

bool op::v3::TopK::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const {
    OV_OP_SCOPE(v3_TopK_evaluate);
    const auto& arg_shape = inputs[0]->get_shape();
    // 1. get axis, mode (max/min), sort_type
    auto axis = ngraph::normalize_axis(this, m_axis, arg_shape.size());
    auto compute_max = get_mode() == TopKMode::MAX;
    auto sort_type = get_sort_type();

    const auto input_shapes = std::vector<PartialShape>{inputs[0]->get_partial_shape(), inputs[1]->get_partial_shape()};
    const auto constant_data = std::map<size_t, HostTensorPtr>{{1, inputs[1]}};
    auto output_shape = shape_infer(this, input_shapes, constant_data).front().to_shape();

    if (output_shape[axis] == 0) {
        // the kernel can't handle K (output_shape[axis]) equal 0, use arg_shape[axis] instead.
        output_shape[axis] = arg_shape[axis];
    }

    // 2. get value of k
    size_t k = output_shape[axis];
    OPENVINO_ASSERT(k <= arg_shape[axis], "'K' exceeds the dimension of top_k_axis");

    return topk::evaluate_topk(inputs[0],
                               outputs[1],
                               outputs[0],
                               output_shape,
                               axis,
                               k,
                               compute_max,
                               sort_type,
                               get_index_element_type());
    return topk::TopK_evaluate(this, outputs, inputs);
}

bool op::v3::TopK::has_evaluate() const {
@ -372,3 +349,25 @@ std::shared_ptr<Node> ov::op::v11::TopK::clone_with_new_inputs(const OutputVecto
                                         m_index_element_type,
                                         m_stable);
}

bool ov::op::v11::TopK::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const {
    OV_OP_SCOPE(v11_TopK_evaluate);
    return topk::TopK_evaluate(this, outputs, inputs);
}

bool ov::op::v11::TopK::has_evaluate() const {
    OV_OP_SCOPE(v11_TopK_has_evaluate);

    switch (get_input_element_type(0)) {
    case ngraph::element::i32:
    case ngraph::element::i64:
    case ngraph::element::u32:
    case ngraph::element::u64:
    case ngraph::element::f16:
    case ngraph::element::f32:
        break;
    default:
        return false;
    }
    return true;
}
@@ -1824,13 +1824,18 @@ void layout_optimizer::select_preferred_formats_for_onednn(program_node& node, d
     for (size_t idx = 0 ; idx < node.get_dependencies().size() ; idx++) {
         if (node.get_dependency(idx).is_constant())
             continue;
-        node.set_preferred_input_fmt(idx, cldnn::format::bfyx);
+
+        size_t out_rank = node.get_output_layout().get_rank();
+        auto target_format = format::get_default_format(out_rank);
+
+        node.set_preferred_input_fmt(idx, target_format);

         if (node.get_preferred_output_fmt() == format::any) {
-            for (size_t usr = 0; usr < std::max<size_t>(1, node.get_users().size()); usr++)
-                node.set_preferred_output_fmt(usr, cldnn::format::bfyx);
+            for (size_t usr = 0; usr < std::max<size_t>(1, node.get_users().size()); usr++) {
+                node.set_preferred_output_fmt(usr, target_format);
+            }
         }
-        GPU_DEBUG_LOG << "select_preferred_formats:" << node.id() << ": " << fmt_to_str(cldnn::format::bfyx) << " --> " << fmt_to_str(cldnn::format::bfyx)
+        GPU_DEBUG_LOG << "select_preferred_formats:" << node.id() << ": " << fmt_to_str(target_format) << " --> " << fmt_to_str(target_format)
                       << " For index : " << idx << std::endl;
     }
 }
@@ -1350,6 +1350,138 @@ public:
 };

+#ifdef ENABLE_ONEDNN_FOR_GPU
+struct gemm_onednn_test_params {
+    std::vector<tensor> in_shapes;
+    tensor out_shape;
+    tensor kernel;
+    tensor pad;
+    data_types data_type_in0;
+    data_types data_type_in1;
+    data_types data_type_in2;
+    format input_format;
+    data_types default_type;
+    format default_format;
+};
+
+template <typename T>
+class GemmOneDNNTest : public ::testing::TestWithParam<T> {
+public:
+    cldnn::engine& engine = get_test_engine();
+    topology topology_ocl;
+    topology topology_onednn;
+
+    ExecutionConfig config_ocl;
+    ExecutionConfig config_onednn;
+
+    float tolerance = 0.0f;
+
+    void SetUp() override {
+        config_ocl.set_property(ov::intel_gpu::optimize_data(true));
+        config_ocl.set_property(ov::intel_gpu::queue_type(QueueTypes::in_order));
+        if (engine.get_device_info().supports_immad) {
+            config_onednn.set_property(ov::intel_gpu::optimize_data(true));
+            config_onednn.set_property(ov::intel_gpu::queue_type(QueueTypes::in_order));
+        }
+    }
+
+    void execute(T& p) {
+        auto input0_prim = get_generated_random_1d_mem(engine, get_input_layout(p, 0));
+        auto input1_prim = get_generated_random_1d_mem(engine, get_input_layout(p, 1));
+
+        network network_ocl(engine, topology_ocl, config_ocl);
+        network network_onednn(engine, topology_onednn, config_onednn);
+
+        network_ocl.set_input_data("input0", input0_prim);
+        network_ocl.set_input_data("input1", input1_prim);
+        network_onednn.set_input_data("input0", input0_prim);
+        network_onednn.set_input_data("input1", input1_prim);
+
+        compare(network_ocl, network_onednn, p);
+    }
+
+    void compare(network& network_ocl, network& network_onednn, T& p) {
+        auto outputs_ocl = network_ocl.execute();
+        auto outputs_onednn = network_onednn.execute();
+
+        ASSERT_EQ(outputs_ocl.size(), outputs_onednn.size());
+        ASSERT_EQ(outputs_ocl.size(), size_t(1));
+
+        auto val_ocl = get_output_values_to_float(network_ocl, outputs_ocl.begin()->first);
+        auto val_onednn = get_output_values_to_float(network_onednn, outputs_onednn.begin()->first);
+
+        ASSERT_EQ(val_ocl.size(), val_onednn.size());
+
+        for (size_t i = 0; i < val_ocl.size(); i++) {
+            ASSERT_NEAR(val_ocl[i], val_onednn[i], tolerance)
+                << "tolerance = " << tolerance
+                << "\ni = " << i
+                << "\nocl[i] = " << val_ocl[i]
+                << "\nonednn[i] = " << val_onednn[i];
+        }
+    }
+
+    layout get_input_layout(T& p, int in_no) {
+        auto pad = p.pad;
+        std::vector<int> pad_ = { 0, 0, pad.spatial[0], pad.spatial[1] };
+        if (in_no == 0)
+            return layout{ p.data_type_in0, p.input_format, p.in_shapes.at(0), padding{ pad_ } };
+        else if (in_no == 1)
+            return layout{ p.data_type_in1, p.input_format, p.in_shapes.at(1), padding{ pad_ } };
+        else
+            return layout{ p.data_type_in2, p.input_format, p.in_shapes.at(2), padding{ pad_ } };
+    }
+};
+
+class gemm_onednn_ndims : public GemmOneDNNTest<gemm_onednn_test_params> {};
+TEST_P(gemm_onednn_ndims, basic) {
+    if (!engine.get_device_info().supports_immad)
+        return;
+
+    auto p = GetParam();
+
+    auto in_layout0 = get_input_layout(p, 0);
+    auto in_layout1 = get_input_layout(p, 1);
+
+    topology_ocl.add(input_layout("input0", in_layout0));
+    topology_ocl.add(input_layout("input1", in_layout1));
+    topology_ocl.add(gemm("gemm0_ocl", { input_info("input0"), input_info("input1") }, data_types::f32, false, false, 1.f, 0.f, in_layout0.get_rank(), in_layout1.get_rank()));
+    topology_ocl.add(reorder("reorder0", input_info("gemm0_ocl"), p.default_format, data_types::f32));
+
+    topology_onednn.add(input_layout("input0", get_input_layout(p, 0)));
+    topology_onednn.add(input_layout("input1", get_input_layout(p, 1)));
+    topology_onednn.add(gemm("gemm0_onednn", { input_info("input0"), input_info("input1") }, data_types::f32, false, false, 1.f, 0.f, in_layout0.get_rank(), in_layout1.get_rank()));
+    topology_onednn.add(reorder("reorder0", input_info("gemm0_onednn"), p.default_format, data_types::f32));
+
+    ov::intel_gpu::ImplementationDesc gemm_impl_ocl = { p.default_format, "", impl_types::ocl };
+    config_ocl.set_property(ov::intel_gpu::force_implementations(ov::intel_gpu::ImplForcingMap{ { "gemm0_ocl", gemm_impl_ocl } }));
+
+    ov::intel_gpu::ImplementationDesc gemm_impl_onednn = { p.default_format, "", impl_types::onednn };
+    config_onednn.set_property(ov::intel_gpu::force_implementations(ov::intel_gpu::ImplForcingMap{ { "gemm0_onednn", gemm_impl_onednn } }));
+
+    tolerance = default_tolerance(p.default_type);
+    execute(p);
+}
+#define CASE_GEMM_ONEDNN_FP16_4D { { 2, 3, 2, 2 }, { 2, 3, 2, 2 } }, { 2, 3, 2, 2 }, tensor{ 1 }, tensor{ 0 }, \
+    data_types::f16, data_types::f16, data_types::f16, format::bfyx, data_types::f16, format::bfyx
+#define CASE_GEMM_ONEDNN_FP16_5D { { 1, 3, 4, 4, 4 }, { 1, 3, 4, 4, 4 } }, { 1, 3, 4, 4, 4 }, tensor{ 1 }, tensor{ 0 }, \
+    data_types::f16, data_types::f16, data_types::f16, format::bfzyx, data_types::f16, format::bfzyx
+#define CASE_GEMM_ONEDNN_FP16_6D { { 2, 3, 5, 4, 3, 2 }, { 2, 3, 4, 5, 3, 2 } }, { 2, 3, 5, 5, 3, 2 }, tensor{ 1 }, tensor{ 0 }, \
+    data_types::f16, data_types::f16, data_types::f16, format::bfwzyx, data_types::f16, format::bfwzyx
+#define CASE_GEMM_ONEDNN_I8_4D { { 2, 3, 2, 2 }, { 2, 3, 2, 2 } }, { 2, 3, 2, 2 }, tensor{ 1 }, tensor{ 0 }, \
+    data_types::i8, data_types::i8, data_types::i8, format::bfyx, data_types::i8, format::bfyx
+#define CASE_GEMM_ONEDNN_I8_5D { { 1, 3, 4, 4, 4 }, { 1, 3, 4, 4, 4 } }, { 1, 3, 4, 4, 4 }, tensor{ 1 }, tensor{ 0 }, \
+    data_types::i8, data_types::i8, data_types::i8, format::bfzyx, data_types::i8, format::bfzyx
+#define CASE_GEMM_ONEDNN_I8_6D { { 2, 3, 5, 4, 3, 2 }, { 2, 3, 4, 5, 3, 2 } }, { 2, 3, 5, 5, 3, 2 }, tensor{ 1 }, tensor{ 0 }, \
+    data_types::i8, data_types::i8, data_types::i8, format::bfwzyx, data_types::i8, format::bfwzyx
+
+INSTANTIATE_TEST_SUITE_P(gemm_gpu, gemm_onednn_ndims, ::testing::ValuesIn(std::vector<gemm_onednn_test_params>{
+    gemm_onednn_test_params{ CASE_GEMM_ONEDNN_FP16_4D },
+    gemm_onednn_test_params{ CASE_GEMM_ONEDNN_FP16_5D },
+    gemm_onednn_test_params{ CASE_GEMM_ONEDNN_FP16_6D },
+    gemm_onednn_test_params{ CASE_GEMM_ONEDNN_I8_4D },
+    gemm_onednn_test_params{ CASE_GEMM_ONEDNN_I8_5D },
+    gemm_onednn_test_params{ CASE_GEMM_ONEDNN_I8_6D },
+}));
+
 class gemm_int8_simple_tests_onednn : public ::GemmBaseTest<gemm_base_test_params, int8_t, int8_t, float, float, int32_t> {};
 TEST_P(gemm_int8_simple_tests_onednn, basic) { auto p = GetParam(); execute(p); }
@@ -609,6 +609,23 @@ inline std::vector<float> get_output_values_to_float(cldnn::network& net, const
     }
 }

+inline cldnn::memory::ptr get_generated_random_1d_mem(cldnn::engine& engine, cldnn::layout l) {
+    auto prim = engine.allocate_memory(l);
+    cldnn::tensor s = l.get_tensor();
+    if (l.data_type == cldnn::data_types::i8 || l.data_type == cldnn::data_types::u8) {
+        VF<uint8_t> rnd_vec = generate_random_1d<uint8_t>(s.count(), -200, 200);
+        set_values(prim, rnd_vec);
+    } else if (l.data_type == cldnn::data_types::f16) {
+        VF<FLOAT16> rnd_vec = generate_random_1d<FLOAT16>(s.count(), -1, 1);
+        set_values(prim, rnd_vec);
+    } else {
+        VF<float> rnd_vec = generate_random_1d<float>(s.count(), -1, 1);
+        set_values(prim, rnd_vec);
+    }
+
+    return prim;
+}
+
 double default_tolerance(cldnn::data_types dt);
 // inline void print_bin_blob(cldnn::memory& mem, std::string name)
 // {
@@ -4,9 +4,10 @@

 #include <gtest/gtest.h>

-#include "openvino/opsets/opset3.hpp"
-#include "openvino/opsets/opset1.hpp"
 #include "base_reference_test.hpp"
+#include "openvino/opsets/opset1.hpp"
+#include "openvino/opsets/opset11.hpp"
+#include "openvino/opsets/opset3.hpp"

 using namespace reference_tests;
 using namespace ov;
@@ -36,7 +37,7 @@ struct TopKParams {
 class ReferenceTopKTest : public testing::TestWithParam<TopKParams>, public CommonReferenceTest {
 public:
     static std::string getTestCaseName(const testing::TestParamInfo<TopKParams>& obj) {
-        auto param = obj.param;
+        const auto& param = obj.param;
         std::ostringstream result;
         result << "aType=" << param.A.type;
         result << "_aShape=" << param.A.shape;
@@ -74,7 +75,7 @@ struct TopKParamsResnet50 {
 class ReferenceTopKTestResnet50 : public testing::TestWithParam<TopKParamsResnet50>, public CommonReferenceTest {
 public:
     void SetUp() override {
-        auto params = GetParam();
+        const auto& params = GetParam();
         function = CreateFunction(params);
         inputData = {params.A.data};
         refOutData = {params.result5Value.data, params.result5Index.data,
@@ -82,7 +83,7 @@ public:
     }

     static std::string getTestCaseName(const testing::TestParamInfo<TopKParamsResnet50>& obj) {
-        auto param = obj.param;
+        const auto& param = obj.param;
         std::ostringstream result;
         result << "aType=" << param.A.type;
         result << "_aShape=" << param.A.shape;
@@ -211,7 +212,7 @@ INSTANTIATE_TEST_SUITE_P(smoke_TopK_With_Hardcoded_Refs, ReferenceTopKTestResnet
 class ReferenceTopKTestMaxMinSort : public ReferenceTopKTest {
 public:
     void SetUp() override {
-        auto params = GetParam();
+        const auto& params = GetParam();
         function = CreateFunction(params);
         inputData = {params.A.data};
         refOutData = {params.result0.data, params.result1.data};
@@ -538,7 +539,7 @@ INSTANTIATE_TEST_SUITE_P(smoke_TopK_With_Hardcoded_Refs, ReferenceTopKTestMaxMin
 class ReferenceTopKTestBackend : public ReferenceTopKTest {
 public:
     void SetUp() override {
-        auto params = GetParam();
+        const auto& params = GetParam();
         function = CreateFunction(params);
         inputData = {params.A.data};
         refOutData = {params.result0.data, params.result1.data};
@@ -561,59 +562,6 @@ TEST_P(ReferenceTopKTestBackend, CompareWithRefs) {
     Exec();
 }

-template <element::Type_t ET, element::Type_t ET2, element::Type_t ET_OUT>
-std::vector<TopKParams> generateParamsV3() {
-    using T = typename element_type_traits<ET>::value_type;
-    using T2 = typename element_type_traits<ET2>::value_type;
-    using T_OUT = typename element_type_traits<ET_OUT>::value_type;
-    std::vector<TopKParams> params {
-        TopKParams(
-            reference_tests::Tensor(ET, {5}, std::vector<T>{3, 1, 2, 5, 4}),
-            reference_tests::Tensor(ET2, {}, std::vector<T2>{3}),
-            0,
-            opset1::TopK::Mode::MAX,
-            opset1::TopK::SortType::SORT_VALUES,
-            reference_tests::Tensor(ET, {3}, std::vector<T>{5, 4, 3}),
-            reference_tests::Tensor(ET_OUT, {3}, std::vector<T_OUT>{3, 4, 0}),
-            0,
-            "topk_mode_sort_order"),
-
-        TopKParams(
-            reference_tests::Tensor(ET, {5}, std::vector<T>{3, 1, 2, 5, 4}),
-            reference_tests::Tensor(ET2, {}, std::vector<T2>{3}),
-            0,
-            opset1::TopK::Mode::MAX,
-            opset1::TopK::SortType::SORT_INDICES,
-            reference_tests::Tensor(ET, {3}, std::vector<T>{3, 5, 4}),
-            reference_tests::Tensor(ET_OUT, {3}, std::vector<T_OUT>{0, 3, 4}),
-            0,
-            "topk_mode_sort_order_1"),
-
-        TopKParams(
-            reference_tests::Tensor(ET, {5}, std::vector<T>{3, 1, 2, 5, 4}),
-            reference_tests::Tensor(ET2, {}, std::vector<T2>{3}),
-            0,
-            opset1::TopK::Mode::MIN,
-            opset1::TopK::SortType::SORT_VALUES,
-            reference_tests::Tensor(ET, {3}, std::vector<T>{1, 2, 3}),
-            reference_tests::Tensor(ET_OUT, {3}, std::vector<T_OUT>{1, 2, 0}),
-            0,
-            "topk_mode_sort_order_2"),
-
-        TopKParams(
-            reference_tests::Tensor(ET, {5}, std::vector<T>{3, 1, 2, 5, 4}),
-            reference_tests::Tensor(ET2, {}, std::vector<T2>{3}),
-            0,
-            opset1::TopK::Mode::MIN,
-            opset1::TopK::SortType::SORT_INDICES,
-            reference_tests::Tensor(ET, {3}, std::vector<T>{3, 1, 2}),
-            reference_tests::Tensor(ET_OUT, {3}, std::vector<T_OUT>{0, 1, 2}),
-            0,
-            "topk_mode_sort_order_3"),
-    };
-    return params;
-}
-
 std::vector<TopKParams> generateCombinedParamsBackend() {
     const std::vector<std::vector<TopKParams>> generatedParams {
         generateParamsMaxMinSort<element::Type_t::i8, element::Type_t::i64, element::Type_t::i32>(),
@@ -643,7 +591,7 @@ INSTANTIATE_TEST_SUITE_P(smoke_TopK_With_Hardcoded_Refs, ReferenceTopKTestBacken
 class ReferenceTopKTest1dMaxMin : public ReferenceTopKTest {
 public:
     void SetUp() override {
-        auto params = GetParam();
+        const auto& params = GetParam();
         function = CreateFunction(params, params.outIdx);
         inputData = {params.A.data};
         if (params.outIdx != 0) {
@@ -654,7 +602,7 @@ public:
     }

     static std::string getTestCaseName(const testing::TestParamInfo<TopKParams>& obj) {
-        auto param = obj.param;
+        const auto& param = obj.param;
         std::ostringstream result;
         result << "aType=" << param.A.type;
         result << "_aShape=" << param.A.shape;
@@ -1459,7 +1407,7 @@ INSTANTIATE_TEST_SUITE_P(smoke_TopK_With_Hardcoded_Refs, ReferenceTopKTestInt64,
 class ReferenceTopKTestSingleOutput : public ReferenceTopKTest {
 public:
     void SetUp() override {
-        auto params = GetParam();
+        const auto& params = GetParam();
         function = CreateFunction(params);
         inputData = {params.A.data};
         refOutData = {params.result1.data};
@@ -1706,4 +1654,103 @@ TEST(ReferenceTopKTestInvalidV3, topk_v3_invalid_k) {
     const auto k_negative = opset1::Constant::create(element::i8, Shape{}, {-1});
     EXPECT_THROW(opset3::TopK(data, k_negative, 0, "max", "index"), ngraph::NodeValidationFailure);
 }
+
+class ReferenceTopKv11StableTest : public ReferenceTopKTest {
+public:
+    void SetUp() override {
+        const auto& params = GetParam();
+        function = CreateFunction(params);
+        inputData = {params.A.data};
+        refOutData = {
+            params.result0.data,  // stable output values
+            params.result1.data,  // stable output indices
+            params.result0.data   // unstable output values
+            // unstable output indices need not be compared, by definition these might differ for
+            // equal data values
+        };
+    }
+
+private:
+    static std::shared_ptr<Model> CreateFunction(const TopKParams& params) {
+        const auto A = std::make_shared<opset11::Parameter>(params.A.type, params.A.shape);
+        const auto k = opset11::Constant::create(params.k.type, params.k.shape, params.k.data.data());
+        const auto topk_stable =
+            std::make_shared<opset11::TopK>(A, k, params.axis, params.mode, params.sort, params.result1.type, true);
+        const auto topk_unstable =
+            std::make_shared<opset11::TopK>(A, k, params.axis, params.mode, params.sort, params.result1.type, false);
+
+        return std::make_shared<Model>(
+            OutputVector{topk_stable->output(0), topk_stable->output(1), topk_unstable->output(0)},
+            ParameterVector{A});
+    }
+};
+
+TEST_P(ReferenceTopKv11StableTest, CompareWithRefs) {
+    Exec();
+}
+
+template <element::Type_t ET, element::Type_t ET2, element::Type_t ET_OUT>
+std::vector<TopKParams> generateParamsForStableTest() {
+    using T = typename element_type_traits<ET>::value_type;
+    using T2 = typename element_type_traits<ET2>::value_type;
+    using T_OUT = typename element_type_traits<ET_OUT>::value_type;
+    std::vector<TopKParams> params{
+        TopKParams(reference_tests::Tensor(ET, {2, 7}, std::vector<T>{5, 4, 3, 1, 7, 1, 3, 2, 1, 2, 5, 1, 7, 3}),
+                   reference_tests::Tensor(ET2, {}, std::vector<T2>{3}),
+                   1,
+                   opset1::TopK::Mode::MIN,
+                   opset1::TopK::SortType::SORT_VALUES,
+                   reference_tests::Tensor(ET, {2, 3}, std::vector<T>{1, 1, 3, 1, 1, 2}),
+                   reference_tests::Tensor(ET_OUT, {2, 3}, std::vector<T_OUT>{3, 5, 2, 1, 4, 0}),
+                   0,
+                   "repeated_values"),
+        TopKParams(reference_tests::Tensor(ET,
+                                           {7, 3},
+                                           std::vector<T>{
+                                               5, 7, 1, 7, 9, 1, 5, 7, 2, 2, 8, 2, 7, 7, 5, 8, 1, 4, 2, 2, 3,
+                                           }),
+                   reference_tests::Tensor(ET2, {}, std::vector<T2>{4}),
+                   0,
+                   opset1::TopK::Mode::MAX,
+                   opset1::TopK::SortType::SORT_VALUES,
+                   reference_tests::Tensor(ET, {4, 3}, std::vector<T>{8, 9, 5, 7, 8, 4, 7, 7, 3, 5, 7, 2}),
+                   reference_tests::Tensor(ET_OUT, {4, 3}, std::vector<T_OUT>{5, 1, 4, 1, 3, 5, 4, 0, 6, 0, 2, 2}),
+                   0,
+                   "repeated_values"),
+        TopKParams(reference_tests::Tensor(ET,
+                                           {2, 3, 3},
+                                           std::vector<T>{1, 3, 3, 1, 2, 4, 2, 2, 3, 7, 7, 1, 7, 9, 7, 5, 7, 7}),
+                   reference_tests::Tensor(ET2, {}, std::vector<T2>{2}),
+                   1,
+                   opset1::TopK::Mode::MIN,
+                   opset1::TopK::SortType::SORT_VALUES,
+                   reference_tests::Tensor(ET, {2, 2, 3}, std::vector<T>{1, 2, 3, 1, 2, 3, 5, 7, 1, 7, 7, 7}),
+                   reference_tests::Tensor(ET_OUT, {2, 2, 3}, std::vector<T_OUT>{0, 1, 0, 1, 2, 2, 2, 0, 0, 0, 2, 1}),
+                   0,
+                   "repeated_values"),
+    };
+    return params;
+}
+
+std::vector<TopKParams> generateCombinedParamsForStableTest() {
+    std::vector<std::vector<TopKParams>> generatedParams{
+        generateParamsForStableTest<element::Type_t::i32, element::Type_t::i32, element::Type_t::i32>(),
+        generateParamsForStableTest<element::Type_t::i64, element::Type_t::i64, element::Type_t::i64>(),
+        generateParamsForStableTest<element::Type_t::u32, element::Type_t::i64, element::Type_t::i32>(),
+        generateParamsForStableTest<element::Type_t::u64, element::Type_t::i32, element::Type_t::i64>(),
+        generateParamsForStableTest<element::Type_t::f16, element::Type_t::i64, element::Type_t::i32>(),
+        generateParamsForStableTest<element::Type_t::f32, element::Type_t::i32, element::Type_t::i32>(),
+    };
+    std::vector<TopKParams> combinedParams;
+    for (auto& params : generatedParams) {
+        std::move(params.begin(), params.end(), std::back_inserter(combinedParams));
+    }
+    return combinedParams;
+}
+
+INSTANTIATE_TEST_SUITE_P(smoke_TopK_With_Hardcoded_Refs,
+                         ReferenceTopKv11StableTest,
+                         testing::ValuesIn(generateCombinedParamsForStableTest()),
+                         ReferenceTopKv11StableTest::getTestCaseName);
+
 } // namespace