.dot` - annotation of affinities per graph. This file is written to disk during execution of ICNNNetwork::LoadNetwork() for the heterogeneous plugin
-@snippet openvino/docs/snippets/HETERO3.cpp part3
+@snippet snippets/HETERO3.cpp part3
You can use the GraphViz* utility or converters to `.png` format. On the Ubuntu* operating system, you can use the following utilities:
* `sudo apt-get install xdot`
diff --git a/docs/IE_DG/supported_plugins/MULTI.md b/docs/IE_DG/supported_plugins/MULTI.md
index 32a9555b380..a3166c3de8e 100644
--- a/docs/IE_DG/supported_plugins/MULTI.md
+++ b/docs/IE_DG/supported_plugins/MULTI.md
@@ -32,11 +32,11 @@ You can use name of the configuration directly as a string, or use MultiDeviceCo
Basically, there are three ways to specify the devices to be used by the "MULTI":
-@snippet openvino/docs/snippets/MULTI0.cpp part0
+@snippet snippets/MULTI0.cpp part0
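For instance, a minimal sketch of the first two ways (assuming the standard `InferenceEngine::Core` API; the model path is illustrative):

```cpp
#include <inference_engine.hpp>
#include <multi-device/multi_device_config.hpp>

int main() {
    InferenceEngine::Core ie;
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("sample.xml");  // path is illustrative

    // Way 1: pass the prioritized device list directly in the device name
    auto exec1 = ie.LoadNetwork(network, "MULTI:HDDL,GPU");

    // Way 2: pass "MULTI" and specify the priorities via the config key
    auto exec2 = ie.LoadNetwork(network, "MULTI",
        {{ MULTI_CONFIG_KEY(DEVICE_PRIORITIES), "HDDL,GPU" }});
    return 0;
}
```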
Notice that the priorities of the devices can be changed in real-time for the executable network:
-@snippet openvino/docs/snippets/MULTI1.cpp part1
+@snippet snippets/MULTI1.cpp part1
Finally, there is a way to specify the number of requests that the multi-device will internally keep for each device.
Say your original app was running 4 cameras with 4 inference requests; now you would probably want to share those 4 requests between the 2 devices used in the MULTI. The easiest way is to specify the number of requests for each device using parentheses: "MULTI:CPU(2),GPU(2)", and use the same 4 requests in your app, as shown in the sketch below. However, such an explicit configuration is not performance-portable and is hence not recommended. Instead, the better way is to configure the individual devices and query the resulting number of requests at the application level (see [Configuring the Individual Devices and Creating the Multi-Device On Top](#configuring-the-individual-devices-and-creating-the-multi-device-on-top)).
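A minimal sketch of the parentheses notation (assuming `ie` and `network` as in the previous sketch):

```cpp
// Explicit (not performance-portable) split: 2 requests on CPU, 2 on GPU
auto exec = ie.LoadNetwork(network, "MULTI:CPU(2),GPU(2)");

// The app keeps using its original 4 requests
std::vector<InferenceEngine::InferRequest> requests;
for (int i = 0; i < 4; ++i)
    requests.push_back(exec.CreateInferRequest());
```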
@@ -55,7 +55,7 @@ Available devices:
```
A simple programmatic way to enumerate the devices and use them with the multi-device is as follows:
-@snippet openvino/docs/snippets/MULTI2.cpp part2
+@snippet snippets/MULTI2.cpp part2
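As a hedged sketch of the same idea (`network` is assumed to be already read; filtering for the MYRIAD-only case further below works the same way, just matching the device-name prefix):

```cpp
InferenceEngine::Core ie;
std::string multi = "MULTI:";
std::vector<std::string> devices = ie.GetAvailableDevices();
for (size_t i = 0; i < devices.size(); ++i)
    multi += (i == 0 ? "" : ",") + devices[i];  // e.g. "MULTI:CPU,GPU"
auto exec = ie.LoadNetwork(network, multi);
```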
Beyond the trivial "CPU", "GPU", "HDDL", and so on, when multiple instances of a device are available, the names are more qualified.
For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed with the hello_query_sample:
@@ -68,13 +68,13 @@ For example this is how two Intel® Movidius™ Myriad™ X sticks are listed wi
So the explicit configuration to use both would be "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480".
Accordingly, the code below loops over all available devices of the "MYRIAD" type only:
-@snippet openvino/docs/snippets/MULTI3.cpp part3
+@snippet snippets/MULTI3.cpp part3
## Configuring the Individual Devices and Creating the Multi-Device On Top
As discussed in the first section, you should configure each individual device as usual and then just create the "MULTI" device on top:
-@snippet openvino/docs/snippets/MULTI4.cpp part4
+@snippet snippets/MULTI4.cpp part4
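A hedged sketch of this flow (the CPU streams key is just one example of a per-device setting; `ie` and `network` are assumed as above):

```cpp
// Configure each individual device first...
ie.SetConfig({{ InferenceEngine::PluginConfigParams::KEY_CPU_THROUGHPUT_STREAMS,
                InferenceEngine::PluginConfigParams::CPU_THROUGHPUT_AUTO }}, "CPU");
// ...then create the "MULTI" device on top of the pre-configured devices
auto exec = ie.LoadNetwork(network, "MULTI:CPU,GPU");
```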
Alternatively, you can combine all the individual device settings into a single config and load it, allowing the multi-device plugin to parse and apply the settings to the right devices. See the code example in the next section.
@@ -84,7 +84,7 @@ See section of the [Using the multi-device with OpenVINO samples and benchmarkin
## Querying the Optimal Number of Inference Requests
Notice that until R2 you had to calculate the number of requests in your application for any device, e.g. you had to know that Intel® Vision Accelerator Design with Intel® Movidius™ VPUs required at least 32 inference requests to perform well. Now you can use the new GetMetric API to query the optimal number of requests. Similarly, when using the multi-device you don't need to sum over the included devices yourself; you can query the metric directly:
-@snippet openvino/docs/snippets/MULTI5.cpp part5
+@snippet snippets/MULTI5.cpp part5
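A short sketch of the query (assuming `ie` and `network` as above; the metric is aggregated over the devices included in the MULTI):

```cpp
auto exec = ie.LoadNetwork(network, "MULTI:HDDL,GPU");
// Optimal number of requests, summed over HDDL and GPU by the MULTI device
unsigned int nireq =
    exec.GetMetric(METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)).as<unsigned int>();
```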
## Using the Multi-Device with OpenVINO Samples and Benchmarking the Performance
Notice that every OpenVINO sample that supports the "-d" (which stands for "device") command-line option transparently accepts the multi-device.
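For example, a typical invocation might look as follows (a sketch; the model path is illustrative):

```
./benchmark_app -m model.xml -d MULTI:HDDL,GPU
```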
diff --git a/docs/IE_PLUGIN_DG/Doxyfile b/docs/IE_PLUGIN_DG/Doxyfile
index d72cbe5b9fc..3d66d22b4a2 100644
--- a/docs/IE_PLUGIN_DG/Doxyfile
+++ b/docs/IE_PLUGIN_DG/Doxyfile
@@ -844,11 +844,7 @@ EXCLUDE_SYMLINKS = NO
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*
-EXCLUDE_PATTERNS = cnn_network_ngraph_impl.hpp \
- ie_imemory_state_internal.hpp \
- ie_memory_state_internal.hpp \
- ie_memory_state_base.hpp \
- generic_ie.hpp \
+EXCLUDE_PATTERNS = generic_ie.hpp \
function_name.hpp \
macro_overload.hpp
diff --git a/docs/IE_PLUGIN_DG/ExecutableNetwork.md b/docs/IE_PLUGIN_DG/ExecutableNetwork.md
index a52872946c2..2685c518a0e 100644
--- a/docs/IE_PLUGIN_DG/ExecutableNetwork.md
+++ b/docs/IE_PLUGIN_DG/ExecutableNetwork.md
@@ -92,7 +92,7 @@ Returns a metric value for a metric with the name `name`. A metric is a static
@snippet src/template_executable_network.cpp executable_network:get_metric
-The IE_SET_METRIC helper macro sets metric value and checks that the actual metric type matches a type of the specified value.
+The IE_SET_METRIC_RETURN helper macro sets the metric value and checks that the actual metric type matches the type of the specified value.
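A hedged sketch of typical usage inside a plugin's `GetMetric()` (the free function and the metric list here are illustrative; the helper header name is taken from the Plugin API sources):

```cpp
#include <inference_engine.hpp>    // InferenceEngine::Parameter, METRIC_KEY
#include <ie_metric_helpers.hpp>   // IE_SET_METRIC_RETURN
#include <stdexcept>
#include <string>
#include <vector>

InferenceEngine::Parameter GetMetricSketch(const std::string& name) {
    if (name == METRIC_KEY(SUPPORTED_METRICS)) {
        std::vector<std::string> metrics = { METRIC_KEY(SUPPORTED_METRICS),
                                             METRIC_KEY(NETWORK_NAME) };
        // Checks that `metrics` matches the declared metric type,
        // then returns it wrapped into an InferenceEngine::Parameter
        IE_SET_METRIC_RETURN(SUPPORTED_METRICS, metrics);
    }
    throw std::logic_error("Unsupported metric: " + name);
}
```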
### `GetConfig()`
diff --git a/docs/IE_PLUGIN_DG/LowPrecisionModelRepresentation.md b/docs/IE_PLUGIN_DG/LowPrecisionModelRepresentation.md
index 9ff8088a366..c00507d6c37 100644
--- a/docs/IE_PLUGIN_DG/LowPrecisionModelRepresentation.md
+++ b/docs/IE_PLUGIN_DG/LowPrecisionModelRepresentation.md
@@ -1,11 +1,11 @@
-# Representation of low-precision models
+# Representation of low-precision models {#lp_representation}
The goal of this document is to describe how optimized models are represented in OpenVINO Intermediate Representation (IR) and provide guidance on interpretation rules for such models at runtime.
Currently, there are two groups of optimization methods that can influence the IR after applying them to the full-precision model:
- **Sparsity**. It is represented by zeros inside the weights, and it is up to the hardware plugin how to interpret these zeros (use the weights as-is or apply special compression algorithms and sparse arithmetic). No additional mask is provided with the model.
- **Quantization**. The rest of this document is dedicated to the representation of quantized models.
## Representation of quantized models
-The OpenVINO Toolkit represents all the quantized models using the so-called FakeQuantize operation (see the description in [this document](../MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md)). This operation is very expressive and allows mapping values from arbitrary input and output ranges. The whole idea behind that is quite simple: we project (discretize) the input values to the low-precision data type using affine transformation (with clamp and rounding) and then reproject discrete values back to the original range and data type. It can be considered as an emulation of the quantization process which happens at runtime.
+The OpenVINO Toolkit represents all the quantized models using the so-called FakeQuantize operation (see the description in [this document](@ref openvino_docs_ops_quantization_FakeQuantize_1)). This operation is very expressive and allows mapping values from arbitrary input and output ranges. The whole idea behind it is quite simple: we project (discretize) the input values to the low-precision data type using an affine transformation (with clamp and rounding) and then reproject the discrete values back to the original range and data type. It can be considered an emulation of the quantization process which happens at runtime.
In order to execute a particular DL operation in low precision, all its inputs should be quantized, i.e. they should have FakeQuantize between the operation and the data blobs. The figure below shows an example of a quantized Convolution, which contains two FakeQuantize nodes: one for weights and one for activations (bias is quantized using the same parameters).
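For intuition, here is a hedged scalar sketch of that projection (parameter names follow the FakeQuantize specification; the real operation works element-wise with broadcastable ranges):

```cpp
#include <algorithm>
#include <cmath>

// Scalar emulation of FakeQuantize: clamp to the input range, project to
// `levels` discrete values (the Quantize stage), then map back to the
// output range (the Dequantize stage).
float fake_quantize(float x, float in_low, float in_high,
                    float out_low, float out_high, int levels) {
    x = std::min(std::max(x, in_low), in_high);                    // clamp
    float q = std::round((x - in_low) / (in_high - in_low) * (levels - 1));
    return q / (levels - 1) * (out_high - out_low) + out_low;     // reproject
}
```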
![quantized_convolution]
Figure 1. Example of quantized Convolution operation.
diff --git a/docs/IE_PLUGIN_DG/QuantizedNetworks.md b/docs/IE_PLUGIN_DG/QuantizedNetworks.md
index 6e6cdd337b1..c327c3775fb 100644
--- a/docs/IE_PLUGIN_DG/QuantizedNetworks.md
+++ b/docs/IE_PLUGIN_DG/QuantizedNetworks.md
@@ -3,13 +3,13 @@
One of the features of the Inference Engine is the support of quantized networks with different precisions: INT8, INT4, etc.
However, it is up to the plugin to define which exact precisions are supported by the particular HW.
All quantized networks which can be expressed in IR have a unified representation by means of *FakeQuantize* operation.
-For more details about low-precision model representation please refer to this [document](LowPrecisionModelRepresentation.md).
+For more details about low-precision model representation please refer to this [document](@ref lp_representation).
### Interpreting FakeQuantize at runtime
During the model load each plugin can interpret quantization rules expressed in *FakeQuantize* operations:
- Independently based on the definition of *FakeQuantize* operation.
- Using a special library of low-precision transformations (LPT) which applies common rules for generic operations,
+such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into models with low-precision operations. For more information about the low-precision flow, please refer to the following [document](@ref openvino_docs_IE_DG_Int8Inference).
+such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into the models with low-precision operations. For more information about low-precision flow please refer to the following [document](@ref openvino_docs_IE_DG_Int8Inference).
Here we provide only a high-level overview of the interpretation rules of FakeQuantize.
At runtime each FakeQuantize can be split into two independent operations: **Quantize** and **Dequantize**.
diff --git a/docs/IE_PLUGIN_DG/layout.xml b/docs/IE_PLUGIN_DG/layout.xml
index 667785db71c..3dc629d959c 100644
--- a/docs/IE_PLUGIN_DG/layout.xml
+++ b/docs/IE_PLUGIN_DG/layout.xml
@@ -17,8 +17,10 @@
-
-
+
+
+
+
diff --git a/docs/doxygen/ie_c_api.config b/docs/doxygen/ie_c_api.config
index e9678615081..af0d36f14a2 100644
--- a/docs/doxygen/ie_c_api.config
+++ b/docs/doxygen/ie_c_api.config
@@ -2,16 +2,17 @@
EXCLUDE_SYMBOLS = INFERENCE_ENGINE_C_API_EXTERN \
INFERENCE_ENGINE_C_API \
+ INFERENCE_ENGINE_C_API_CALLBACK \
IE_NODISCARD
PREDEFINED = "__attribute__(x)=" \
"__VA_ARGS__=" \
"INFERENCE_ENGINE_C_API_EXTERN=" \
+ "INFERENCE_ENGINE_C_API_CALLBACK=" \
"INFERENCE_ENGINE_C_API=" \
"IE_NODISCARD=" \
"__cdecl=" \
"__declspec(x)=" \
- "__GNUC__=" \
"_WIN32"
FILE_PATTERNS = *.h
diff --git a/docs/doxygen/ie_docs.config b/docs/doxygen/ie_docs.config
index 48dca68bef8..e9d52248969 100644
--- a/docs/doxygen/ie_docs.config
+++ b/docs/doxygen/ie_docs.config
@@ -903,8 +903,8 @@ EXCLUDE_PATTERNS = */temp/* \
# exclude all test directories use the pattern */test/*
EXCLUDE_SYMBOLS = InferenceEngine::details \
+ InferenceEngine::gpu::details \
PRECISION_NAME \
- TBLOB_TOP_RESULT \
CASE \
CASE2 \
_CONFIG_KEY \
@@ -929,24 +929,26 @@ EXCLUDE_SYMBOLS = InferenceEngine::details \
INFERENCE_ENGINE_API_CPP \
INFERENCE_ENGINE_API_CLASS \
INFERENCE_ENGINE_DEPRECATED \
- INFERENCE_ENGINE_NN_BUILDER_API_CLASS \
- INFERENCE_ENGINE_NN_BUILDER_DEPRECATED \
IE_SUPPRESS_DEPRECATED_START \
IE_SUPPRESS_DEPRECATED_END \
IE_SUPPRESS_DEPRECATED_START_WIN \
IE_SUPPRESS_DEPRECATED_END_WIN \
IE_SUPPRESS_DEPRECATED_END_WIN \
INFERENCE_ENGINE_INTERNAL \
- INFERENCE_ENGINE_INTERNAL_CNNLAYER_CLASS \
IE_DO_PRAGMA \
- REG_VALIDATOR_FOR
+ parallel_* \
+ for_* \
+ splitter \
+ InferenceEngine::parallel_* \
+ NOMINMAX \
+ TBB_PREVIEW_NUMA_SUPPORT \
+ IE_THREAD_*
# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).
-EXAMPLE_PATH = template_extension \
- ../inference-engine/samples
+EXAMPLE_PATH = "@CMAKE_CURRENT_SOURCE_DIR@"
# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
diff --git a/docs/doxygen/ie_plugin_api.config b/docs/doxygen/ie_plugin_api.config
index 4d6dea7992e..51b6a385dc0 100644
--- a/docs/doxygen/ie_plugin_api.config
+++ b/docs/doxygen/ie_plugin_api.config
@@ -9,7 +9,12 @@ GENERATE_TAGFILE = "@DOCS_BINARY_DIR@/ie_plugin_api.tag"
EXTRACT_LOCAL_CLASSES = NO
INPUT = "@DOCS_BINARY_DIR@/docs/IE_PLUGIN_DG" \
- "@IE_SOURCE_DIR@/src/plugin_api"
+ "@IE_SOURCE_DIR@/src/plugin_api" \
+ "@IE_SOURCE_DIR@/src/transformations/include" \
+ "@OpenVINO_MAIN_SOURCE_DIR@/openvino/itt/include/openvino"
+
+
+RECURSIVE = YES
FILE_PATTERNS = *.c \
*.cpp \
@@ -18,21 +23,20 @@ FILE_PATTERNS = *.c \
*.hpp \
*.md
-EXCLUDE_PATTERNS = cnn_network_ngraph_impl.hpp \
- ie_imemory_state_internal.hpp \
- ie_memory_state_internal.hpp \
- ie_memory_state_base.hpp \
- convert_function_to_cnn_network.hpp \
- generic_ie.hpp
+EXCLUDE_PATTERNS = generic_ie.hpp
-EXCLUDE_SYMBOLS =
+EXCLUDE_SYMBOLS = InferenceEngine::details
+
+TAGFILES = "@DOCS_BINARY_DIR@/ie_api.tag=.."
EXAMPLE_PATH = "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/src" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/include" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/src/CMakeLists.txt" \
- "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/"
- CMakeLists.txt \
- "@CMAKE_CURRENT_SOURCE_DIR@/examples"
+ "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/CMakeLists.txt" \
+ "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/transformations" \
+ "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/shared_tests_instances/" \
+                 "@CMAKE_CURRENT_SOURCE_DIR@/snippets" \
+                 "@IE_SOURCE_DIR@/tests/functional/plugin/shared/include"
EXAMPLE_PATTERNS = *.cpp \
*.hpp
@@ -41,12 +45,17 @@ ENUM_VALUES_PER_LINE = 1
EXPAND_ONLY_PREDEF = YES
-PREDEFINED = INFERENCE_ENGINE_API \
- INFERENCE_ENGINE_API_CPP \
- INFERENCE_ENGINE_API_CLASS \
- INFERENCE_ENGINE_DEPRECATED \
- IE_SUPPRESS_DEPRECATED_START \
- IE_SUPPRESS_DEPRECATED_END \
- IE_SUPPRESS_DEPRECATED_START_WIN \
- IE_SUPPRESS_DEPRECATED_END_WIN \
- IE_THREAD=IE_THREAD_TBB
+PREDEFINED = "INFERENCE_ENGINE_API=" \
+ "INFERENCE_ENGINE_API_CPP=" \
+ "INFERENCE_ENGINE_API_CLASS=" \
+ "INFERENCE_ENGINE_DEPRECATED=" \
+ "inference_engine_transformations_EXPORTS" \
+ "TRANSFORMATIONS_API=" \
+ "NGRAPH_HELPER_DLL_EXPORT=" \
+ "NGRAPH_HELPER_DLL_IMPORT=" \
+ "IE_SUPPRESS_DEPRECATED_START=" \
+ "IE_SUPPRESS_DEPRECATED_END=" \
+ "IE_SUPPRESS_DEPRECATED_START_WIN=" \
+ "IE_SUPPRESS_DEPRECATED_END_WIN=" \
+ "IE_THREAD=IE_THREAD_TBB" \
+ "NGRAPH_RTTI_DECLARATION="
diff --git a/docs/doxygen/ie_plugin_api.xml b/docs/doxygen/ie_plugin_api.xml
index b2839444af8..d7617c9a94b 100644
--- a/docs/doxygen/ie_plugin_api.xml
+++ b/docs/doxygen/ie_plugin_api.xml
@@ -16,8 +16,10 @@
-
-
+
+
+
+
diff --git a/docs/doxygen/openvino_docs.xml b/docs/doxygen/openvino_docs.xml
index 8af262216c5..42838a0f9a4 100644
--- a/docs/doxygen/openvino_docs.xml
+++ b/docs/doxygen/openvino_docs.xml
@@ -124,6 +124,7 @@
+
diff --git a/docs/install_guides/movidius-programming-guide.md b/docs/install_guides/movidius-programming-guide.md
index b2b9ef0cb99..184910a1471 100644
--- a/docs/install_guides/movidius-programming-guide.md
+++ b/docs/install_guides/movidius-programming-guide.md
@@ -18,11 +18,11 @@ The structure should hold:
1. A pointer to an inference request.
2. An ID to keep track of the request.
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part0
+@snippet snippets/movidius-programming-guide.cpp part0
### Declare a Vector of Requests
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part1
+@snippet snippets/movidius-programming-guide.cpp part1
Declare and initialize 2 mutex variables:
1. For each request
@@ -34,9 +34,9 @@ Conditional variable indicates when at most 8 requests are done at a time.
For inference requests, use the asynchronous IE API calls:
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part2
+@snippet snippets/movidius-programming-guide.cpp part2
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part3
+@snippet snippets/movidius-programming-guide.cpp part3
### Create a Lambda Function
@@ -45,7 +45,7 @@ Lambda Function enables the parsing and display of results.
Inside the lambda body, use the completion callback function:
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part4
+@snippet snippets/movidius-programming-guide.cpp part4
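A hedged sketch of this pattern (the structure, field names, and synchronization objects are illustrative; the callback API is `InferRequest::SetCompletionCallback`):

```cpp
#include <inference_engine.hpp>
#include <condition_variable>
#include <mutex>
#include <vector>

// The structure from this guide: an inference request plus a tracking ID
// (field names are illustrative)
struct request_t {
    InferenceEngine::InferRequest inferRequest;
    int id;
};

void start_all(std::vector<request_t>& requests,
               std::mutex& result_mutex,
               std::condition_variable& done_condition) {
    for (auto& req : requests) {
        // The lambda body runs when the request completes
        req.inferRequest.SetCompletionCallback([&req, &result_mutex, &done_condition] {
            std::lock_guard<std::mutex> lock(result_mutex);
            // ... parse and display the results of request req.id here ...
            done_condition.notify_one();
        });
        req.inferRequest.StartAsync();  // asynchronous IE API call
    }
}
```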
## Additional Resources
diff --git a/docs/optimization_guide/dldt_optimization_guide.md b/docs/optimization_guide/dldt_optimization_guide.md
index d523ce66dac..37299bdf1a2 100644
--- a/docs/optimization_guide/dldt_optimization_guide.md
+++ b/docs/optimization_guide/dldt_optimization_guide.md
@@ -332,7 +332,7 @@ In many cases, a network expects a pre-processed image, so make sure you do not
- Model Optimizer can efficiently bake the mean and normalization (scale) values into the model (for example, weights of the first convolution). See Model Optimizer Knobs Related to Performance.
- If regular 8-bit per channel images are your native media (for instance, decoded frames), do not convert to `FP32` on your side, as this is something that plugins can accelerate. Use `InferenceEngine::Precision::U8` as your input format:
-@snippet openvino/docs/snippets/dldt_optimization_guide1.cpp part1
+@snippet snippets/dldt_optimization_guide1.cpp part1
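A minimal sketch (assuming a `CNNNetwork` object named `network`; the first input is taken for brevity):

```cpp
// Keep 8-bit images as U8 and let the plugin accelerate the conversion
InferenceEngine::InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
input_info->setPrecision(InferenceEngine::Precision::U8);
```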
Note that in many cases, you can directly share the (input) data with the Inference Engine.
@@ -342,15 +342,15 @@ The general approach for sharing data between Inference Engine and media/graphic
For Intel MSS, it is recommended to perform viable pre-processing, for example crop/resize, and then convert to RGB again with the [Video Processing Procedures (VPP)](https://software.intel.com/en-us/node/696108). Then lock the result and create an Inference Engine blob on top of it. The resulting pointer can be used for `SetBlob`:
-@snippet openvino/docs/snippets/dldt_optimization_guide2.cpp part2
+@snippet snippets/dldt_optimization_guide2.cpp part2
**WARNING**: The `InferenceEngine::NHWC` layout is not supported natively by most Inference Engine plugins, so internal conversion might happen.
-@snippet openvino/docs/snippets/dldt_optimization_guide3.cpp part3
+@snippet snippets/dldt_optimization_guide3.cpp part3
Alternatively, you can use RGBP (planar RGB) output from Intel MSS. This allows you to wrap the (locked) result as a regular NCHW blob, which is generally friendly for most plugins (unlike NHWC). Then you can use it with `SetBlob` just like in the previous example:
-@snippet openvino/docs/snippets/dldt_optimization_guide4.cpp part4
+@snippet snippets/dldt_optimization_guide4.cpp part4
The only downside of this approach is that VPP conversion to RGBP is not hardware accelerated (it is performed on the GPU EUs). Also, it is available only on Linux.
@@ -362,7 +362,7 @@ Again, if the OpenCV and Inference Engine layouts match, the data can be wrapped
**WARNING**: The `InferenceEngine::NHWC` layout is not supported natively by most Inference Engine plugins, so internal conversion might happen.
-@snippet openvino/docs/snippets/dldt_optimization_guide5.cpp part5
+@snippet snippets/dldt_optimization_guide5.cpp part5
Notice that the original `cv::Mat`/blobs cannot be used simultaneously by the application and the Inference Engine. Alternatively, the data that the pointer references can be copied to unlock the original data and return ownership to the original API.
@@ -372,7 +372,7 @@ Infer Request based API offers two types of request: Sync and Async. The Sync is
More importantly, an infer request encapsulates the reference to the “executable” network and the actual inputs/outputs. Now, when you load the network to the plugin, you get a reference to the executable network (you may consider it a queue). Actual infer requests are created by the executable network:
-@snippet openvino/docs/snippets/dldt_optimization_guide6.cpp part6
+@snippet snippets/dldt_optimization_guide6.cpp part6
`GetBlob` is a recommended way to communicate with the network, as it internally allocates the data with the right padding/alignment for the device. For example, the GPU inputs/outputs blobs are mapped to the host (which is fast) if `GetBlob` is used. But if you call `SetBlob`, a copy (from/to the blob you have set) into the internal GPU plugin structures will happen.
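A short sketch of this idiom (assuming an `InferenceEngine::Core ie` and a `CNNNetwork network` already created; the input name is illustrative):

```cpp
auto exec = ie.LoadNetwork(network, "GPU");          // "queue"-like executable network
InferenceEngine::InferRequest request = exec.CreateInferRequest();
// GetBlob: the device allocates with the right padding/alignment
InferenceEngine::Blob::Ptr input = request.GetBlob("input");  // name is illustrative
// fill `input` here; SetBlob of a foreign blob could force an internal copy
request.Infer();
```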
@@ -383,7 +383,7 @@ If your application simultaneously executes multiple infer requests:
- For the CPU, the best solution is to use the CPU "throughput" mode.
- If latency is of more concern, you can try the `EXCLUSIVE_ASYNC_REQUESTS` [configuration option](../IE_DG/supported_plugins/CPU.md) that limits the number of the simultaneously executed requests for all (executable) networks that share the specific device to just one:
-@snippet openvino/docs/snippets/dldt_optimization_guide7.cpp part7
+@snippet snippets/dldt_optimization_guide7.cpp part7
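A one-line sketch of that option (assuming `ie` and `network` as above):

```cpp
// All (executable) networks sharing the CPU will execute one request at a time
auto exec = ie.LoadNetwork(network, "CPU",
    {{ InferenceEngine::PluginConfigParams::KEY_EXCLUSIVE_ASYNC_REQUESTS,
       InferenceEngine::PluginConfigParams::YES }});
```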
For more information on the executable networks notation, see Request-Based API and “GetBlob” Idiom.
@@ -407,13 +407,13 @@ You can compare the pseudo-codes for the regular and async-based approaches:
- In the regular way, the frame is captured with OpenCV and then immediately processed:
-@snippet openvino/docs/snippets/dldt_optimization_guide8.cpp part8
+@snippet snippets/dldt_optimization_guide8.cpp part8
- In the "true" async mode, the `NEXT` request is populated in the main (application) thread, while the `CURRENT` request is processed:
-@snippet openvino/docs/snippets/dldt_optimization_guide9.cpp part9
+@snippet snippets/dldt_optimization_guide9.cpp part9
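A hedged sketch of the swap-based pipeline (the capture/populate/process helpers are illustrative placeholders, not library calls):

```cpp
#include <inference_engine.hpp>
#include <opencv2/opencv.hpp>
#include <utility>

// Placeholder helpers (illustrative): capture a frame, fill a request, read results
cv::Mat capture_frame();
void populate_request(InferenceEngine::InferRequest&, const cv::Mat&);
void process_results(InferenceEngine::InferRequest&);
bool have_more_frames();

void pipeline(InferenceEngine::InferRequest current, InferenceEngine::InferRequest next) {
    populate_request(current, capture_frame());
    current.StartAsync();                         // prime the pipeline
    while (have_more_frames()) {
        populate_request(next, capture_frame());  // fill NEXT on the app thread
        next.StartAsync();                        // while CURRENT is still in flight
        current.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
        process_results(current);                 // consume CURRENT
        std::swap(current, next);                 // NEXT becomes CURRENT
    }
}
```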
diff --git a/inference-engine/ie_bridges/c/include/c_api/ie_c_api.h b/inference-engine/ie_bridges/c/include/c_api/ie_c_api.h
index 4801d6e2ea5..fa97fe254af 100644
--- a/inference-engine/ie_bridges/c/include/c_api/ie_c_api.h
+++ b/inference-engine/ie_bridges/c/include/c_api/ie_c_api.h
@@ -45,7 +45,7 @@
#endif
#ifndef INFERENCE_ENGINE_C_API_CALLBACK
-#define INFERENCE_ENGINE_C_API_CALLBACK
+ #define INFERENCE_ENGINE_C_API_CALLBACK
#endif
typedef struct ie_core ie_core_t;
@@ -59,39 +59,39 @@ typedef struct ie_blob ie_blob_t;
* @brief Represents an API version information that reflects the set of supported features
*/
typedef struct ie_version {
- char *api_version;
-}ie_version_t;
+ char *api_version; //!< A string representing Inference Engine version
+} ie_version_t;
/**
* @struct ie_core_version
* @brief Represents version information that describes devices and the inference engine runtime library
*/
typedef struct ie_core_version {
- size_t major;
- size_t minor;
- const char *device_name;
- const char *build_number;
- const char *description;
-}ie_core_version_t;
+ size_t major; //!< A major version
+ size_t minor; //!< A minor version
+ const char *device_name; //!< A device name
+ const char *build_number; //!< A build number
+ const char *description; //!< A device description
+} ie_core_version_t;
/**
* @struct ie_core_versions
* @brief Represents all versions information that describes all devices and the inference engine runtime library
*/
typedef struct ie_core_versions {
- ie_core_version_t *versions;
- size_t num_vers;
-}ie_core_versions_t;
+ ie_core_version_t *versions; //!< An array of device versions
+ size_t num_vers; //!< A number of versions in the array
+} ie_core_versions_t;
/**
* @struct ie_config
* @brief Represents configuration information that describes devices
*/
typedef struct ie_config {
- const char *name;
- const char *value;
- struct ie_config *next;
-}ie_config_t;
+ const char *name; //!< A configuration key
+ const char *value; //!< A configuration value
+ struct ie_config *next; //!< A pointer to the next configuration value
+} ie_config_t;
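For illustration, a hedged sketch of chaining two entries of this list (the key/value strings are examples of valid configuration keys, not a prescription):

```cpp
// Second element of the list: enable performance counters
ie_config_t perf_count = { "PERF_COUNT", "YES", nullptr };
// Head of the list points to the next element via `next`
ie_config_t config     = { "CPU_THROUGHPUT_STREAMS", "2", &perf_count };
```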
/**
* @struct ie_param
@@ -99,12 +99,12 @@ typedef struct ie_config {
*/
typedef struct ie_param {
union {
- char *params;
- unsigned int number;
- unsigned int range_for_async_infer_request[3];
- unsigned int range_for_streams[2];
+ char *params;
+ unsigned int number;
+ unsigned int range_for_async_infer_request[3];
+ unsigned int range_for_streams[2];
};
-}ie_param_t;
+} ie_param_t;
/**
* @struct ie_param_config
@@ -113,57 +113,57 @@ typedef struct ie_param {
typedef struct ie_param_config {
char *name;
ie_param_t *param;
-}ie_param_config_t;
+} ie_param_config_t;
/**
* @struct desc
* @brief Represents detailed information for an error
*/
typedef struct desc {
- char msg[256];
-}desc_t;
+ char msg[256]; //!< A description message
+} desc_t;
/**
* @struct dimensions
* @brief Represents dimensions for input or output data
*/
typedef struct dimensions {
- size_t ranks;
- size_t dims[8];
-}dimensions_t;
+    size_t ranks; //!< A rank representing the number of dimensions
+ size_t dims[8]; //!< An array of dimensions
+} dimensions_t;
/**
* @enum layout_e
* @brief Layouts that the inference engine supports
*/
typedef enum {
- ANY = 0, // "any" layout
+ ANY = 0, //!< "ANY" layout
// I/O data layouts
- NCHW = 1,
- NHWC = 2,
- NCDHW = 3,
- NDHWC = 4,
+ NCHW = 1, //!< "NCHW" layout
+ NHWC = 2, //!< "NHWC" layout
+ NCDHW = 3, //!< "NCDHW" layout
+ NDHWC = 4, //!< "NDHWC" layout
// weight layouts
- OIHW = 64,
+ OIHW = 64, //!< "OIHW" layout
// Scalar
- SCALAR = 95,
+ SCALAR = 95, //!< "SCALAR" layout
// bias layouts
- C = 96,
+ C = 96, //!< "C" layout
// Single image layout (for mean image)
- CHW = 128,
+ CHW = 128, //!< "CHW" layout
// 2D
- HW = 192,
- NC = 193,
- CN = 194,
+ HW = 192, //!< "HW" layout
+ NC = 193, //!< "NC" layout
+ CN = 194, //!< "CN" layout
- BLOCKED = 200,
-}layout_e;
+ BLOCKED = 200, //!< "BLOCKED" layout
+} layout_e;
/**
* @enum precision_e
@@ -185,7 +185,7 @@ typedef enum {
U32 = 74, /**< 32bit unsigned integer value */
BIN = 71, /**< 1bit integer value */
CUSTOM = 80 /**< custom precision has it's own name and size of elements */
-}precision_e;
+} precision_e;
/**
* @struct tensor_desc
@@ -195,31 +195,31 @@ typedef struct tensor_desc {
layout_e layout;
dimensions_t dims;
precision_e precision;
-}tensor_desc_t;
+} tensor_desc_t;
/**
* @enum colorformat_e
* @brief Extra information about input color format for preprocessing
*/
typedef enum {
- RAW = 0u, ///< Plain blob (default), no extra color processing required
- RGB, ///< RGB color format
- BGR, ///< BGR color format, default in DLDT
- RGBX, ///< RGBX color format with X ignored during inference
- BGRX, ///< BGRX color format with X ignored during inference
- NV12, ///< NV12 color format represented as compound Y+UV blob
- I420, ///< I420 color format represented as compound Y+U+V blob
-}colorformat_e;
+ RAW = 0u, //!< Plain blob (default), no extra color processing required
+ RGB, //!< RGB color format
+ BGR, //!< BGR color format, default in DLDT
+ RGBX, //!< RGBX color format with X ignored during inference
+ BGRX, //!< BGRX color format with X ignored during inference
+ NV12, //!< NV12 color format represented as compound Y+UV blob
+ I420, //!< I420 color format represented as compound Y+U+V blob
+} colorformat_e;
/**
* @enum resize_alg_e
* @brief Represents the list of supported resize algorithms.
*/
typedef enum {
- NO_RESIZE = 0,
- RESIZE_BILINEAR,
- RESIZE_AREA
-}resize_alg_e;
+ NO_RESIZE = 0, //!< "No resize" mode
+ RESIZE_BILINEAR, //!< "Bilinear resize" mode
+ RESIZE_AREA //!< "Area resize" mode
+} resize_alg_e;
/**
* @enum IEStatusCode
@@ -242,19 +242,19 @@ typedef enum {
NOT_ALLOCATED = -10,
INFER_NOT_STARTED = -11,
NETWORK_NOT_READ = -12
-}IEStatusCode;
+} IEStatusCode;
/**
* @struct roi_t
* @brief This structure describes roi data.
*/
typedef struct roi {
- size_t id; // ID of a roi
- size_t posX; // W upper left coordinate of roi
- size_t posY; // H upper left coordinate of roi
- size_t sizeX; // W size of roi
- size_t sizeY; // H size of roi
-}roi_t;
+ size_t id; //!< ID of a roi
+ size_t posX; //!< W upper left coordinate of roi
+ size_t posY; //!< H upper left coordinate of roi
+ size_t sizeX; //!< W size of roi
+ size_t sizeY; //!< H size of roi
+} roi_t;
/**
* @struct input_shape
@@ -263,7 +263,7 @@ typedef struct roi {
typedef struct input_shape {
char *name;
dimensions_t shape;
-}input_shape_t;
+} input_shape_t;
/**
* @struct input_shapes
@@ -272,7 +272,7 @@ typedef struct input_shape {
typedef struct input_shapes {
input_shape_t *shapes;
size_t shape_num;
-}input_shapes_t;
+} input_shapes_t;
/**
* @struct ie_blob_buffer
@@ -280,10 +280,10 @@ typedef struct input_shapes {
*/
typedef struct ie_blob_buffer {
union {
- void *buffer; // buffer can be written
- const void *cbuffer; // cbuffer is read-only
+ void *buffer; //!< buffer can be written
+ const void *cbuffer; //!< cbuffer is read-only
};
-}ie_blob_buffer_t;
+} ie_blob_buffer_t;
/**
* @struct ie_complete_call_back
@@ -292,7 +292,7 @@ typedef struct ie_blob_buffer {
typedef struct ie_complete_call_back {
void (INFERENCE_ENGINE_C_API_CALLBACK *completeCallBackFunc)(void *args);
void *args;
-}ie_complete_call_back_t;
+} ie_complete_call_back_t;
/**
* @struct ie_available_devices
@@ -301,7 +301,7 @@ typedef struct ie_complete_call_back {
typedef struct ie_available_devices {
char **devices;
size_t num_devices;
-}ie_available_devices_t;
+} ie_available_devices_t;
/**
* @brief Returns number of version that is exported. Use the ie_version_free() to free memory.
@@ -317,7 +317,7 @@ INFERENCE_ENGINE_C_API(void) ie_version_free(ie_version_t *version);
/**
* @brief Release the memory allocated by ie_param_t.
- * @param version A pointer to the ie_param_t to free memory.
+ * @param param A pointer to the ie_param_t to free memory.
*/
INFERENCE_ENGINE_C_API(void) ie_param_free(ie_param_t *param);
@@ -662,6 +662,7 @@ INFERENCE_ENGINE_C_API(void) ie_network_free(ie_network_t **network);
/**
* @brief Get name of network.
* @ingroup Network
+ * @param network A pointer to the instance of the ie_network_t to get a name from.
* @param name Name of the network.
* @return Status code of the operation: OK(0) for success.
*/
@@ -729,7 +730,7 @@ INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_network_get_input_layout(co
INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_network_set_input_layout(ie_network_t *network, const char *input_name, const layout_e l);
/**
- * @Gets dimensions/shape of the input data with reversed order.
+ * @brief Gets dimensions/shape of the input data with reversed order.
* @ingroup Network
* @param network A pointer to ie_network_t instance.
* @param input_name Name of input data.
@@ -743,11 +744,10 @@ INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_network_get_input_dims(cons
* @ingroup Network
* @param network A pointer to ie_network_t instance.
* @param input_name Name of input data.
- * @parm resize_alg_result The pointer to the resize algorithm used for input blob creation.
+ * @param resize_alg_result The pointer to the resize algorithm used for input blob creation.
* @return Status code of the operation: OK(0) for success.
*/
-INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_network_get_input_resize_algorithm(const ie_network_t *network, const char *input_name, \
- resize_alg_e *resize_alg_result);
+INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_network_get_input_resize_algorithm(const ie_network_t *network, const char *input_name, resize_alg_e *resize_alg_result);
/**
* @brief Sets resize algorithm to be used during pre-processing
@@ -1014,7 +1014,7 @@ INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_blob_get_layout(const ie_bl
INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_blob_get_precision(const ie_blob_t *blob, precision_e *prec_result);
/**
- * @Releases the memory occupied by the ie_blob_t pointer.
+ * @brief Releases the memory occupied by the ie_blob_t pointer.
* @ingroup Blob
* @param blob A pointer to the blob pointer to release memory.
*/
diff --git a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
index 11fe655de0d..75b05f78c5f 100644
--- a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
+++ b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
@@ -1,4 +1,4 @@
-# nGraph Function Python* Sample {#openvino_inference_engine_samples_ngraph_function_creation_sample_README}
+# nGraph Function Python* Sample {#openvino_inference_engine_ie_bridges_python_samples_ngraph_function_creation_sample_README}
This sample demonstrates how to execute an inference using ngraph::Function to create a network. The sample uses the LeNet classification network as an example.
diff --git a/inference-engine/include/cpp/ie_cnn_network.h b/inference-engine/include/cpp/ie_cnn_network.h
index 9f3ea1949b8..fe959cdd47f 100644
--- a/inference-engine/include/cpp/ie_cnn_network.h
+++ b/inference-engine/include/cpp/ie_cnn_network.h
@@ -123,7 +123,6 @@ public:
* Wraps ICNNNetwork::setBatchSize
*
* @param size Size of batch to set
- * @return Status code of the operation
*/
virtual void setBatchSize(const size_t size) {
CALL_STATUS_FNC(setBatchSize, size);
diff --git a/inference-engine/include/cpp/ie_infer_request.hpp b/inference-engine/include/cpp/ie_infer_request.hpp
index c750a5d4c90..8cae1255188 100644
--- a/inference-engine/include/cpp/ie_infer_request.hpp
+++ b/inference-engine/include/cpp/ie_infer_request.hpp
@@ -83,7 +83,7 @@ public:
/**
* constructs InferRequest from the initialized shared_pointer
* @param request Initialized shared pointer to IInferRequest interface
- * @param plg Plugin to use. This is required to ensure that InferRequest can work properly even if plugin object is destroyed.
+ * @param splg Plugin to use. This is required to ensure that InferRequest can work properly even if plugin object is destroyed.
*/
explicit InferRequest(IInferRequest::Ptr request,
InferenceEngine::details::SharedObjectLoader::Ptr splg = {}):
diff --git a/inference-engine/include/cpp/ie_memory_state.hpp b/inference-engine/include/cpp/ie_memory_state.hpp
index 24fd4d7fa1f..cb45a159e2a 100644
--- a/inference-engine/include/cpp/ie_memory_state.hpp
+++ b/inference-engine/include/cpp/ie_memory_state.hpp
@@ -3,7 +3,9 @@
//
/**
- * @file
+ * @brief A header file that provides wrapper classes for IVariableState
+ *
+ * @file ie_memory_state.hpp
*/
#pragma once
@@ -25,8 +27,9 @@ class VariableState {
public:
/**
- * constructs VariableState from the initialized shared_pointer
+ * @brief constructs VariableState from the initialized shared_pointer
* @param pState Initialized shared pointer
+ * @param plg Optional: Plugin to use. This is required to ensure that VariableState can work properly even if plugin object is destroyed.
*/
explicit VariableState(IVariableState::Ptr pState, details::SharedObjectLoader::Ptr plg = {}) : actual(pState), plugin(plg) {
if (actual == nullptr) {
@@ -59,7 +62,7 @@ public:
* @copybrief IVariableState::GetState
*
* Wraps IVariableState::GetState
- * @return A blob representing a last state
+ * @return A blob representing a state
*/
Blob::CPtr GetState() const {
Blob::CPtr stateBlob;
@@ -67,7 +70,14 @@ public:
return stateBlob;
}
- INFERENCE_ENGINE_DEPRECATED("Use GetState function instead")
+ /**
+ * @copybrief IVariableState::GetLastState
+ * @deprecated Use IVariableState::GetState instead
+ *
+ * Wraps IVariableState::GetLastState
+ * @return A blob representing a last state
+ */
+ INFERENCE_ENGINE_DEPRECATED("Use VariableState::GetState function instead")
Blob::CPtr GetLastState() const {
return GetState();
}
@@ -83,8 +93,9 @@ public:
}
};
-/*
+/**
* @brief For compatibility reasons.
*/
using MemoryState = VariableState;
+
} // namespace InferenceEngine
diff --git a/inference-engine/include/gpu/gpu_context_api_dx.hpp b/inference-engine/include/gpu/gpu_context_api_dx.hpp
index 03d284b8c22..cbf959b9415 100644
--- a/inference-engine/include/gpu/gpu_context_api_dx.hpp
+++ b/inference-engine/include/gpu/gpu_context_api_dx.hpp
@@ -22,17 +22,17 @@ namespace InferenceEngine {
namespace gpu {
/**
-* @brief This class represents an abstraction for GPU plugin remote context
-* which is shared with Direct3D 11 device.
-* The plugin object derived from this class can be obtained either with
-* GetContext() method of Executable network or using CreateContext() Core call.
-* @note User can also obtain OpenCL context handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote context
+ * which is shared with Direct3D 11 device.
+ * The plugin object derived from this class can be obtained either with
+ * GetContext() method of Executable network or using CreateContext() Core call.
+ * @note User can also obtain OpenCL context handle from this class.
+ */
class D3DContext : public ClContext {
public:
/**
- * @brief A smart pointer to the D3DContext object
- */
+ * @brief A smart pointer to the D3DContext object
+ */
using Ptr = std::shared_ptr<D3DContext>;
/**
@@ -47,16 +47,16 @@ public:
};
/**
-* @brief This class represents an abstraction for GPU plugin remote blob
-* which is shared with Direct3D 11 buffer.
-* The plugin object derived from this class can be obtained with CreateBlob() call.
-* @note User can also obtain OpenCL buffer handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote blob
+ * which is shared with Direct3D 11 buffer.
+ * The plugin object derived from this class can be obtained with CreateBlob() call.
+ * @note User can also obtain OpenCL buffer handle from this class.
+ */
class D3DBufferBlob : public ClBufferBlob {
public:
/**
- * @brief A smart pointer to the D3DBufferBlob object
- */
+ * @brief A smart pointer to the D3DBufferBlob object
+ */
using Ptr = std::shared_ptr<D3DBufferBlob>;
/**
@@ -77,16 +77,16 @@ public:
};
/**
-* @brief This class represents an abstraction for GPU plugin remote blob
-* which is shared with Direct3D 11 2D texture.
-* The plugin object derived from this class can be obtained with CreateBlob() call.
-* @note User can also obtain OpenCL 2D image handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote blob
+ * which is shared with Direct3D 11 2D texture.
+ * The plugin object derived from this class can be obtained with CreateBlob() call.
+ * @note User can also obtain OpenCL 2D image handle from this class.
+ */
class D3DSurface2DBlob : public ClImage2DBlob {
public:
/**
- * @brief A smart pointer to the D3DSurface2DBlob object
- */
+ * @brief A smart pointer to the D3DSurface2DBlob object
+ */
using Ptr = std::shared_ptr<D3DSurface2DBlob>;
/**
@@ -117,9 +117,14 @@ public:
};
/**
-* @brief This function is used to obtain a NV12 compound blob object from NV12 DXGI video decoder output.
-* The resulting compound contains two remote blobs for Y and UV planes of the surface.
-*/
+ * @brief This function is used to obtain a NV12 compound blob object from NV12 DXGI video decoder output.
+ * The resulting compound contains two remote blobs for Y and UV planes of the surface.
+ * @param height Height of Y plane
+ * @param width Width of Y plane
+ * @param ctx A pointer to remote context
+ * @param nv12_surf An ID3D11Texture2D instance to create NV12 blob from
+ * @return NV12 remote blob
+ */
static inline Blob::Ptr make_shared_blob_nv12(size_t height, size_t width, RemoteContext::Ptr ctx, ID3D11Texture2D* nv12_surf) {
auto casted = std::dynamic_pointer_cast<D3DContext>(ctx);
if (nullptr == casted) {
@@ -145,8 +150,12 @@ static inline Blob::Ptr make_shared_blob_nv12(size_t height, size_t width, Remot
}
/**
-* @brief This function is used to obtain remote context object from ID3D11Device
-*/
+ * @brief This function is used to obtain remote context object from ID3D11Device
+ * @param core Inference Engine Core object instance
+ * @param deviceName A name of the device to create a remote context for
+ * @param device A pointer to ID3D11Device to be used to create a remote context
+ * @return A shared remote context instance
+ */
static inline D3DContext::Ptr make_shared_context(Core& core, std::string deviceName, ID3D11Device* device) {
ParamMap contextParams = {
{ GPU_PARAM_KEY(CONTEXT_TYPE), GPU_PARAM_VALUE(VA_SHARED) },
@@ -156,8 +165,12 @@ static inline D3DContext::Ptr make_shared_context(Core& core, std::string device
}
/**
-* @brief This function is used to obtain remote blob object from ID3D11Buffer
-*/
+ * @brief This function is used to obtain remote blob object from ID3D11Buffer
+ * @param desc A tensor description which describes blob configuration
+ * @param ctx A shared pointer to a remote context
+ * @param buffer A pointer to ID3D11Buffer instance to create remote blob based on
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, ID3D11Buffer* buffer) {
auto casted = std::dynamic_pointer_cast<D3DContext>(ctx);
if (nullptr == casted) {
@@ -172,14 +185,14 @@ static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::
}
/**
-* @brief This function is used to obtain remote blob object from ID3D11Texture2D
-* @param desc Tensor description
-* @param ctx the RemoteContext object whuch owns context for the blob to be created
-* @param surface Pointer to ID3D11Texture2D interface of the objects that owns NV12 texture
-* @param plane ID of the plane to be shared (0 or 1)
-* @return Smart pointer to created RemoteBlob object cast to base class
-* @note The underlying ID3D11Texture2D can also be a plane of output surface of DXGI video decoder
-*/
+ * @brief This function is used to obtain remote blob object from ID3D11Texture2D
+ * @param desc Tensor description
+ * @param ctx the RemoteContext object which owns the context for the blob to be created
+ * @param surface Pointer to ID3D11Texture2D interface of the object that owns the NV12 texture
+ * @param plane ID of the plane to be shared (0 or 1)
+ * @return Smart pointer to created RemoteBlob object cast to base class
+ * @note The underlying ID3D11Texture2D can also be a plane of output surface of DXGI video decoder
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, ID3D11Texture2D* surface, uint32_t plane = 0) {
auto casted = std::dynamic_pointer_cast<D3DContext>(ctx);
if (nullptr == casted) {
diff --git a/inference-engine/include/gpu/gpu_context_api_ocl.hpp b/inference-engine/include/gpu/gpu_context_api_ocl.hpp
index 489daa143a0..9bcdf0adbed 100644
--- a/inference-engine/include/gpu/gpu_context_api_ocl.hpp
+++ b/inference-engine/include/gpu/gpu_context_api_ocl.hpp
@@ -25,16 +25,16 @@ namespace InferenceEngine {
namespace gpu {
/**
-* @brief This class represents an abstraction for GPU plugin remote context
-* which is shared with OpenCL context object.
-* The plugin object derived from this class can be obtained either with
-* GetContext() method of Executable network or using CreateContext() Core call.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote context
+ * which is shared with OpenCL context object.
+ * The plugin object derived from this class can be obtained either with
+ * GetContext() method of Executable network or using CreateContext() Core call.
+ */
class ClContext : public RemoteContext, public details::param_map_obj_getter {
public:
/**
- * @brief A smart pointer to the ClContext object
- */
+ * @brief A smart pointer to the ClContext object
+ */
using Ptr = std::shared_ptr<ClContext>;
/**
@@ -63,14 +63,14 @@ public:
};
/**
-* @brief The basic class for all GPU plugin remote blob objects.
-* The OpenCL memory object handle (cl_mem) can be obtained from this class object.
-*/
+ * @brief The basic class for all GPU plugin remote blob objects.
+ * The OpenCL memory object handle (cl_mem) can be obtained from this class object.
+ */
class ClBlob : public RemoteBlob {
public:
/**
- * @brief A smart pointer to the ClBlob object
- */
+ * @brief A smart pointer to the ClBlob object
+ */
using Ptr = std::shared_ptr<ClBlob>;
/**
@@ -81,16 +81,16 @@ public:
};
/**
-* @brief This class represents an abstraction for GPU plugin remote blob
-* which can be shared with user-supplied OpenCL buffer.
-* The plugin object derived from this class can be obtained with CreateBlob() call.
-* @note User can obtain OpenCL buffer handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote blob
+ * which can be shared with user-supplied OpenCL buffer.
+ * The plugin object derived from this class can be obtained with CreateBlob() call.
+ * @note User can obtain OpenCL buffer handle from this class.
+ */
class ClBufferBlob : public ClBlob, public details::param_map_obj_getter {
public:
/**
- * @brief A smart pointer to the ClBufferBlob object
- */
+ * @brief A smart pointer to the ClBufferBlob object
+ */
using Ptr = std::shared_ptr<ClBufferBlob>;
/**
@@ -124,16 +124,16 @@ public:
};
/**
-* @brief This class represents an abstraction for GPU plugin remote blob
-* which can be shared with user-supplied OpenCL 2D Image.
-* The plugin object derived from this class can be obtained with CreateBlob() call.
-* @note User can obtain OpenCL image handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote blob
+ * which can be shared with user-supplied OpenCL 2D Image.
+ * The plugin object derived from this class can be obtained with CreateBlob() call.
+ * @note User can obtain OpenCL image handle from this class.
+ */
class ClImage2DBlob : public ClBlob, public details::param_map_obj_getter {
public:
/**
- * @brief A smart pointer to the ClImage2DBlob object
- */
+ * @brief A smart pointer to the ClImage2DBlob object
+ */
using Ptr = std::shared_ptr<ClImage2DBlob>;
/**
@@ -167,13 +167,13 @@ public:
};
/**
-* @brief This function is used to construct a NV12 compound blob object from two cl::Image2D wrapper objects.
-* The resulting compound contains two remote blobs for Y and UV planes of the surface.
-* @param ctx RemoteContext plugin object derived from ClContext class.
-* @param nv12_image_plane_y cl::Image2D object containing Y plane data.
-* @param nv12_image_plane_uv cl::Image2D object containing UV plane data.
-* @return Pointer to plugin-specific context class object, which is derived from RemoteContext.
-*/
+ * @brief This function is used to construct a NV12 compound blob object from two cl::Image2D wrapper objects.
+ * The resulting compound contains two remote blobs for Y and UV planes of the surface.
+ * @param ctx RemoteContext plugin object derived from ClContext class.
+ * @param nv12_image_plane_y cl::Image2D object containing Y plane data.
+ * @param nv12_image_plane_uv cl::Image2D object containing UV plane data.
+ * @return A shared remote blob instance
+ */
static inline Blob::Ptr make_shared_blob_nv12(RemoteContext::Ptr ctx, cl::Image2D& nv12_image_plane_y, cl::Image2D& nv12_image_plane_uv) {
auto casted = std::dynamic_pointer_cast<ClContext>(ctx);
if (nullptr == casted) {
@@ -201,8 +201,12 @@ static inline Blob::Ptr make_shared_blob_nv12(RemoteContext::Ptr ctx, cl::Image2
}
/**
-* @brief This function is used to obtain remote context object from user-supplied OpenCL context handle
-*/
+ * @brief This function is used to obtain remote context object from user-supplied OpenCL context handle
+ * @param core A reference to Inference Engine Core object
+ * @param deviceName A name of device to create a remote context for
+ * @param ctx An OpenCL context to be used to create a shared remote context
+ * @return A shared remote context instance
+ */
static inline RemoteContext::Ptr make_shared_context(Core& core, std::string deviceName, cl_context ctx) {
ParamMap contextParams = {
{ GPU_PARAM_KEY(CONTEXT_TYPE), GPU_PARAM_VALUE(OCL) },
@@ -212,15 +216,22 @@ static inline RemoteContext::Ptr make_shared_context(Core& core, std::string dev
}
/**
-* @brief This function is used to create remote blob object within default GPU plugin OpenCL context
-*/
+ * @brief This function is used to create remote blob object within default GPU plugin OpenCL context
+ * @param desc A tensor descriptor object representing remote blob configuration
+ * @param ctx A remote context used to create remote blob
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, ClContext::Ptr ctx) {
return std::dynamic_pointer_cast<ClBlob>(ctx->CreateBlob(desc));
}
/**
-* @brief This function is used to obtain remote blob object from user-supplied cl::Buffer wrapper object
-*/
+ * @brief This function is used to obtain remote blob object from user-supplied cl::Buffer wrapper object
+ * @param desc A tensor descriptor object representing remote blob configuration
+ * @param ctx A remote context used to create remote blob
+ * @param buffer A cl::Buffer object wrapped by a remote blob
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl::Buffer& buffer) {
auto casted = std::dynamic_pointer_cast<ClContext>(ctx);
if (nullptr == casted) {
@@ -235,8 +246,12 @@ static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::
}
/**
-* @brief This function is used to obtain remote blob object from user-supplied OpenCL buffer handle
-*/
+ * @brief This function is used to obtain remote blob object from user-supplied OpenCL buffer handle
+ * @param desc A tensor descriptor object representing remote blob configuration
+ * @param ctx A remote context used to create remote blob
+ * @param buffer A cl_mem object wrapped by a remote blob
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl_mem buffer) {
auto casted = std::dynamic_pointer_cast<ClContext>(ctx);
if (nullptr == casted) {
@@ -251,8 +266,12 @@ static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::
}
/**
-* @brief This function is used to obtain remote blob object from user-supplied cl::Image2D wrapper object
-*/
+ * @brief This function is used to obtain remote blob object from user-supplied cl::Image2D wrapper object
+ * @param desc A tensor descriptor object representing remote blob configuration
+ * @param ctx A remote context used to create remote blob
+ * @param image A cl::Image2D object wrapped by a remote blob
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl::Image2D& image) {
auto casted = std::dynamic_pointer_cast<ClContext>(ctx);
if (nullptr == casted) {
diff --git a/inference-engine/include/ie_blob.h b/inference-engine/include/ie_blob.h
index 6a6514e80c9..234a13528eb 100644
--- a/inference-engine/include/ie_blob.h
+++ b/inference-engine/include/ie_blob.h
@@ -125,6 +125,7 @@ public:
/**
* @brief Returns the tensor description
+ * @return A const reference to a tensor descriptor
*/
virtual const TensorDesc& getTensorDesc() const noexcept {
return tensorDesc;
@@ -132,6 +133,7 @@ public:
/**
* @brief Returns the tensor description
+ * @return A reference to a tensor descriptor
*/
virtual TensorDesc& getTensorDesc() noexcept {
return tensorDesc;
@@ -141,6 +143,8 @@ public:
* @brief By default, returns the total number of elements (a product of all the dims or 1 for scalar)
*
* Return value and its interpretation heavily depend on the blob type
+ *
+ * @return The total number of elements
*/
virtual size_t size() const noexcept {
if (tensorDesc.getLayout() == Layout::SCALAR) return 1;
@@ -149,6 +153,7 @@ public:
/**
* @brief Returns the size of the current Blob in bytes.
+ * @return Blob's size in bytes
*/
virtual size_t byteSize() const noexcept {
return size() * element_size();
@@ -158,9 +163,11 @@ public:
* @deprecated Cast to MemoryBlob and use its API instead.
* Blob class can represent compound blob, which do not refer to the only solid memory.
*
- * @brief Returns the number of bytes per element.
+ * @brief Provides the number of bytes per element.
*
* The overall Blob capacity is size() * element_size(). Abstract method.
+ *
+ * @return Returns the number of bytes per element
*/
virtual size_t element_size() const noexcept = 0;
@@ -175,6 +182,8 @@ public:
* @brief Releases previously allocated data.
*
* Abstract method.
+ *
+ * @return `True` if deallocation happens successfully, `false` otherwise.
*/
virtual bool deallocate() noexcept = 0;
@@ -243,13 +252,14 @@ protected:
*/
virtual void* getHandle() const noexcept = 0;
+ /// private
template <typename T>
friend class TBlobProxy;
};
/**
* @brief Helper cast function to work with shared Blob objects
- *
+ * @param blob A blob to cast
* @return shared_ptr to the type T. Returned shared_ptr shares ownership of the object with the
* input Blob::Ptr
*/
@@ -262,7 +272,7 @@ std::shared_ptr as(const Blob::Ptr& blob) noexcept {
/**
* @brief Helper cast function to work with shared Blob objects
- *
+ * @param blob A blob to cast
* @return shared_ptr to the type const T. Returned shared_ptr shares ownership of the object with
* the input Blob::Ptr
*/
@@ -320,6 +330,7 @@ public:
/**
* @brief Returns the total number of elements, which is a product of all the dimensions
+ * @return The total number of elements
*/
size_t size() const noexcept override {
if (tensorDesc.getLayout() == Layout::SCALAR) return 1;
@@ -464,6 +475,7 @@ protected:
*/
void* getHandle() const noexcept override = 0;
+ /// private
template <typename T>
friend class TBlobProxy;
};
@@ -779,6 +791,11 @@ protected:
return _handle.get();
}
+ /**
+ * @brief Creates a blob from the existing blob with a given ROI
+ * @param origBlob An original blob
+ * @param roi A ROI object
+ */
TBlob(const TBlob& origBlob, const ROI& roi) :
MemoryBlob(make_roi_desc(origBlob.getTensorDesc(), roi, true)),
_allocator(origBlob._allocator) {
diff --git a/inference-engine/include/ie_common.h b/inference-engine/include/ie_common.h
index 79f9a88b790..cdd757103a4 100644
--- a/inference-engine/include/ie_common.h
+++ b/inference-engine/include/ie_common.h
@@ -91,6 +91,13 @@ enum Layout : uint8_t {
BLOCKED = 200, //!< A blocked layout
};
+
+/**
+ * @brief Prints a string representation of InferenceEngine::Layout to a stream
+ * @param out An output stream to send to
+ * @param p A layout value to print to a stream
+ * @return A reference to the `out` stream
+ */
inline std::ostream& operator<<(std::ostream& out, const Layout& p) {
switch (p) {
#define PRINT_LAYOUT(name) \
@@ -131,6 +138,13 @@ enum ColorFormat : uint32_t {
NV12, ///< NV12 color format represented as compound Y+UV blob
I420, ///< I420 color format represented as compound Y+U+V blob
};
+
+/**
+ * @brief Prints a string representation of InferenceEngine::ColorFormat to a stream
+ * @param out An output stream to send to
+ * @param fmt A color format value to print to a stream
+ * @return A reference to the `out` stream
+ */
inline std::ostream& operator<<(std::ostream& out, const ColorFormat& fmt) {
switch (fmt) {
#define PRINT_COLOR_FORMAT(name) \
@@ -235,7 +249,6 @@ struct ResponseDesc {
char msg[4096] = {};
};
-
/**
* @brief Response structure encapsulating information about supported layer
*/
@@ -312,13 +325,14 @@ class NotAllocated : public std::logic_error {
class InferNotStarted : public std::logic_error {
using std::logic_error::logic_error;
};
-} // namespace InferenceEngine
/** @brief This class represents StatusCode::NETWORK_NOT_READ exception */
class NetworkNotRead : public std::logic_error {
using std::logic_error::logic_error;
};
+} // namespace InferenceEngine
+
#if defined(_WIN32)
#define __PRETTY_FUNCTION__ __FUNCSIG__
#else
diff --git a/inference-engine/include/ie_compound_blob.h b/inference-engine/include/ie_compound_blob.h
index ff5d71e4078..526402b9dfd 100644
--- a/inference-engine/include/ie_compound_blob.h
+++ b/inference-engine/include/ie_compound_blob.h
@@ -49,12 +49,14 @@ public:
explicit CompoundBlob(std::vector&& blobs);
/**
- * @brief Always returns 0
+ * @brief Always returns `0`
+ * @return Returns `0`
*/
size_t byteSize() const noexcept override;
/**
- * @brief Always returns 0
+ * @brief Always returns `0`
+ * @return Returns `0`
*/
size_t element_size() const noexcept override;
@@ -65,7 +67,7 @@ public:
/**
* @brief No operation is performed. Compound blob does not allocate/deallocate any data
- * @return false
+ * @return Returns `false`
*/
bool deallocate() noexcept override;
diff --git a/inference-engine/include/ie_iexecutable_network.hpp b/inference-engine/include/ie_iexecutable_network.hpp
index 8e7c5fa0bef..f919547295c 100644
--- a/inference-engine/include/ie_iexecutable_network.hpp
+++ b/inference-engine/include/ie_iexecutable_network.hpp
@@ -46,7 +46,7 @@ public:
* This method need to be called to find output names for using them later
* when calling InferenceEngine::InferRequest::GetBlob or InferenceEngine::InferRequest::SetBlob
*
- * @param out Reference to the ::ConstOutputsDataMap object
+ * @param out Reference to the InferenceEngine::ConstOutputsDataMap object
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success
*/
@@ -55,11 +55,11 @@ public:
/**
* @brief Gets the executable network input Data node information.
*
- * The received info is stored in the given ::ConstInputsDataMap object.
+ * The received info is stored in the given InferenceEngine::ConstInputsDataMap object.
* This method need to be called to find out input names for using them later
* when calling InferenceEngine::InferRequest::SetBlob
*
- * @param inputs Reference to ::ConstInputsDataMap object.
+ * @param inputs Reference to InferenceEngine::ConstInputsDataMap object.
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success
*/
diff --git a/inference-engine/include/ie_imemory_state.hpp b/inference-engine/include/ie_imemory_state.hpp
index 2e44350b5fa..a5f52ae8251 100644
--- a/inference-engine/include/ie_imemory_state.hpp
+++ b/inference-engine/include/ie_imemory_state.hpp
@@ -20,7 +20,7 @@ namespace InferenceEngine {
/**
* @interface IVariableState
- * @brief manages data for reset operations
+ * @brief Manages data for reset operations
*/
class IVariableState : public details::no_copy {
public:
@@ -30,8 +30,8 @@ public:
using Ptr = std::shared_ptr<IVariableState>;
/**
- * @brief Gets name of current memory state, if length of array is not enough name is truncated by len, null
- * terminator is inserted as well. As memory state name variable_id from according ReadValue used.
+ * @brief Gets the name of the current variable state. If the length of the array is not enough, the name is
+ * truncated by `len` and a null terminator is inserted. The `variable_id` of the corresponding `ReadValue` is used as the variable state name.
*
* @param name preallocated buffer for receiving name
* @param len Length of the buffer
@@ -41,7 +41,7 @@ public:
virtual StatusCode GetName(char* name, size_t len, ResponseDesc* resp) const noexcept = 0;
/**
- * @brief Reset internal memory state for relevant infer request, to a value specified as default for according ReadValue node
+ * @brief Resets the internal variable state for the relevant infer request to a value specified as default for the corresponding ReadValue node
*
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success*
@@ -53,26 +53,37 @@ public:
*
* This method can fail if Blob size does not match the internal state size or precision
*
- * @param newState is the data to use as new state
+ * @param newState The data to use as new state
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success
*/
virtual StatusCode SetState(Blob::Ptr newState, ResponseDesc* resp) noexcept = 0;
/**
- * @brief Returns the value of the memory state.
+ * @brief Returns the value of the variable state.
*
- * @param lastState
+ * @param state A reference to a blob containing a variable state
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success
- * */
+ */
INFERENCE_ENGINE_DEPRECATED("Use GetState function instead")
- virtual StatusCode GetLastState(Blob::CPtr& state, ResponseDesc* resp) const noexcept {return GetState(state, resp);}
+ virtual StatusCode GetLastState(Blob::CPtr& state, ResponseDesc* resp) const noexcept {
+ return GetState(state, resp);
+ }
+
+ /**
+ * @brief Returns the value of the variable state.
+ *
+ * @param state A reference to a blob containing a variable state
+ * @param resp Optional: pointer to an already allocated object to contain information in case of failure
+ * @return Status code of the operation: InferenceEngine::OK (0) for success
+ */
virtual StatusCode GetState(Blob::CPtr& state, ResponseDesc* resp) const noexcept = 0;
};
-/*
+/**
* @brief For compatibility reasons.
*/
using IMemoryState = IVariableState;
+
} // namespace InferenceEngine
\ No newline at end of file
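For the renamed variable-state API, the typical flow through the 2021-era C++ wrappers looks roughly like this (a sketch; `inferRequest` is assumed to be an `InferenceEngine::InferRequest` over a network containing `ReadValue`/`Assign` nodes):

```cpp
#include <inference_engine.hpp>
#include <iostream>

void resetStates(InferenceEngine::InferRequest& inferRequest) {
    for (auto&& state : inferRequest.QueryState()) {
        // GetName() returns the variable_id of the corresponding ReadValue node.
        std::cout << "Resetting state: " << state.GetName() << std::endl;
        state.Reset();  // back to the default value of the ReadValue node
    }
}
```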
diff --git a/inference-engine/include/ie_input_info.hpp b/inference-engine/include/ie_input_info.hpp
index a1760d8a009..5d6b8f86803 100644
--- a/inference-engine/include/ie_input_info.hpp
+++ b/inference-engine/include/ie_input_info.hpp
@@ -125,6 +125,7 @@ public:
/**
* @brief Returns the tensor descriptor
+ * @return A const reference to a tensor descriptor
*/
const TensorDesc& getTensorDesc() const {
if (!_inputData) {
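A sketch of reading the documented descriptor, assuming `network` is a parsed `InferenceEngine::CNNNetwork`:

```cpp
#include <inference_engine.hpp>
#include <iostream>

void printInputDescs(const InferenceEngine::CNNNetwork& network) {
    for (const auto& item : network.getInputsInfo()) {
        const InferenceEngine::TensorDesc& desc = item.second->getTensorDesc();
        // getLayout() pairs nicely with the new operator<< for Layout.
        std::cout << item.first << ": " << desc.getLayout() << std::endl;
    }
}
```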
diff --git a/inference-engine/include/ie_layouts.h b/inference-engine/include/ie_layouts.h
index a544231092b..219dd6b9d1f 100644
--- a/inference-engine/include/ie_layouts.h
+++ b/inference-engine/include/ie_layouts.h
@@ -130,6 +130,11 @@ public:
bool operator!=(const BlockingDesc& rhs) const;
protected:
+ /**
+ * @brief Fills tensor descriptor based on blocking dimensions and specific order
+ * @param blocked_dims A vector representing blocking dimensions
+ * @param order A vector with specific dims order
+ */
void fillDesc(const SizeVector& blocked_dims, const SizeVector& order);
private:
@@ -330,6 +335,14 @@ struct ROI {
ROI() = default;
+ /**
+ * @brief Creates a ROI object with the given parameters
+ * @param id ID of a ROI (offset over batch dimension)
+ * @param posX W upper left coordinate of ROI
+ * @param posY H upper left coordinate of ROI
+ * @param sizeX W size of ROI
+ * @param sizeY H size of ROI
+ */
ROI(size_t id, size_t posX, size_t posY, size_t sizeX, size_t sizeY) :
id(id), posX(posX), posY(posY), sizeX(sizeX), sizeY(sizeY) {
}
diff --git a/inference-engine/include/ie_locked_memory.hpp b/inference-engine/include/ie_locked_memory.hpp
index c031f498366..111169ac321 100644
--- a/inference-engine/include/ie_locked_memory.hpp
+++ b/inference-engine/include/ie_locked_memory.hpp
@@ -168,7 +168,7 @@ public:
/**
* @brief Compares stored object with the given one
* @param pointer An pointer to compare with.
- * @return true if objects are equal, false otherwise
+ * @return `true` if objects are equal, `false` otherwise
*/
bool operator==(const T* pointer) const {
// special case with nullptr
@@ -177,8 +177,9 @@ public:
/**
* @brief Compares the object with the one stored in the memory.
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @param lm A LockedMemory object to compare with
+ * @return `true` if objects are equal, `false` otherwise
*/
friend bool operator==(const T* pointer, const LockedMemory& lm) {
return lm.operator==(pointer);
@@ -266,8 +267,8 @@ public:
/**
* @brief Compares stored object with the given one
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @return `true` if objects are equal, `false` otherwise
*/
bool operator==(const void* pointer) const {
// special case with nullptr
@@ -276,8 +277,9 @@ public:
/**
* @brief Compares the object with the one stored in the memory
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @param lm A LockedMemory object to compare with
+ * @return `true` if objects are equal, `false` otherwise
*/
friend bool operator==(const void* pointer, const LockedMemory& lm) {
return lm.operator==(pointer);
@@ -362,8 +364,8 @@ public:
/**
* @brief Compares stored object with the given one
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @return `true` if objects are equal, `false` otherwise
*/
bool operator==(const T* pointer) const {
// special case with nullptr
@@ -372,8 +374,9 @@ public:
/**
* @brief Compares the object with the one stored in the memory
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @param lm A LockedMemory object to compare with
+ * @return `true` if objects are equal, `false` otherwise
*/
friend bool operator==(const T* pointer, const LockedMemory& lm) {
return lm.operator==(pointer);
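The documented equality operators are mostly useful for null checks after mapping a blob. A sketch, assuming `blob` is a previously allocated `MemoryBlob`:

```cpp
#include <inference_engine.hpp>
#include <iostream>

void readFirstValue(const InferenceEngine::MemoryBlob::CPtr& blob) {
    auto holder = blob->rmap();  // LockedMemory<const void> keeps the mapping alive
    if (holder == nullptr) {     // null check via the operator== documented above
        std::cerr << "mapping failed" << std::endl;
        return;
    }
    const float* data = holder.as<const float*>();
    std::cout << data[0] << std::endl;  // valid while `holder` is in scope
}
```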
diff --git a/inference-engine/include/ie_precision.hpp b/inference-engine/include/ie_precision.hpp
index 8d13a4bab04..178e67e0d0e 100644
--- a/inference-engine/include/ie_precision.hpp
+++ b/inference-engine/include/ie_precision.hpp
@@ -60,7 +60,10 @@ public:
/** @brief Default constructor */
Precision() = default;
- /** @brief Constructor with specified precision */
+ /**
+ * @brief Constructor with specified precision
+ * @param value A value of ePrecision to create an object from
+ */
Precision(const Precision::ePrecision value) { // NOLINT
precisionInfo = getPrecisionInfo(value);
}
@@ -69,7 +72,7 @@ public:
* @brief Custom precision constructor
*
* @param bitsSize size of elements
- * @param name optional name string, used in serialisation
+ * @param name Optional: a name string used in serialization
*/
explicit Precision(size_t bitsSize, const char* name = nullptr) {
if (bitsSize == 0) {
@@ -131,39 +134,64 @@ public:
}
}
- /** @brief Equality operator with Precision object */
+ /**
+ * @brief Equality operator with Precision object
+ * @param p A value of Precision to compare with
+ * @return `true` if values represent the same precisions, `false` otherwise
+ */
bool operator==(const Precision& p) const noexcept {
return precisionInfo.value == p && precisionInfo.bitsSize == p.precisionInfo.bitsSize &&
areSameStrings(precisionInfo.name, p.precisionInfo.name);
}
- /** @brief Equality operator with ePrecision enum value */
+ /**
+ * @brief Equality operator with ePrecision enum value
+ * @param p A value of ePrecision to compare with
+ * @return `true` if values represent the same precisions, `false` otherwise
+ */
bool operator==(const ePrecision p) const noexcept {
return precisionInfo.value == p;
}
- /** @brief Inequality operator with ePrecision enum value */
+ /**
+ * @brief Inequality operator with ePrecision enum value
+ * @param p A value of ePrecision to compare with
+ * @return `true` if values represent different precisions, `false` otherwise
+ */
bool operator!=(const ePrecision p) const noexcept {
return precisionInfo.value != p;
}
- /** @brief Assignment operator with ePrecision enum value */
+ /**
+ * @brief Assignment operator with ePrecision enum value
+ * @param p A value of the ePrecision enumeration
+ * @return A reference to this Precision instance
+ */
Precision& operator=(const ePrecision p) noexcept {
precisionInfo = getPrecisionInfo(p);
return *this;
}
- /** @brief Cast operator to a bool */
+ /**
+ * @brief Cast operator to a bool
+ * @return `true` if precision is specified, `false` otherwise
+ */
explicit operator bool() const noexcept {
return precisionInfo.value != UNSPECIFIED;
}
- /** @brief Logical negation operator */
+ /**
+ * @brief Logical negation operator
+ * @return `true` if precision is NOT specified, `false` otherwise
+ */
bool operator!() const noexcept {
return precisionInfo.value == UNSPECIFIED;
}
- /** @brief Cast operator to a ePrecision */
+ /**
+ * @brief Cast operator to an ePrecision
+ * @return The underlying Precision::ePrecision value
+ */
operator Precision::ePrecision() const noexcept {
return precisionInfo.value;
}
@@ -176,12 +204,19 @@ public:
return precisionInfo.value;
}
- /** @brief Getter of precision name */
+ /**
+ * @brief Getter of precision name
+ * @return A string representing precision name
+ */
const char* name() const noexcept {
return precisionInfo.name;
}
- /** @brief Creates from string with precision name */
+ /**
+ * @brief Creates a Precision object from a string with a precision name
+ * @param str A string representing a precision
+ * @return A Precision object created from the string representation
+ */
static Precision FromStr(const std::string& str) {
static std::unordered_map<std::string, ePrecision> names = {
#define PRECISION_NAME(s) {#s, s}
@@ -256,7 +291,9 @@ protected:
}
/**
- * @brief Return PrecisionInfo
+ * @brief Creates PrecisionInfo based on ePrecision
+ * @param v A value of the ePrecision enumeration
+ * @return A PrecisionInfo object
*/
static PrecisionInfo getPrecisionInfo(ePrecision v) {
#define CASE(x) \
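A sketch tying the documented `Precision` helpers together:

```cpp
#include <ie_precision.hpp>
#include <iostream>

int main() {
    InferenceEngine::Precision p = InferenceEngine::Precision::FromStr("FP16");
    if (p) {                                 // bool cast: precision is specified
        std::cout << p.name() << std::endl;  // "FP16"
        std::cout << (p == InferenceEngine::Precision::FP16) << std::endl;  // 1
    }
    return 0;
}
```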
diff --git a/inference-engine/include/ie_unicode.hpp b/inference-engine/include/ie_unicode.hpp
index 5f1583df5cd..7a23f48bb25 100644
--- a/inference-engine/include/ie_unicode.hpp
+++ b/inference-engine/include/ie_unicode.hpp
@@ -29,6 +29,8 @@ namespace InferenceEngine {
/**
* @deprecated Use OS-native conversion utilities
* @brief Conversion from possibly-wide character string to a single-byte chain.
+ * @param str A possibly-wide character string
+ * @return A single-byte character string
*/
INFERENCE_ENGINE_DEPRECATED("Use OS-native conversion utilities")
inline std::string fileNameToString(const file_name_t& str) {
@@ -47,6 +49,8 @@ inline std::string fileNameToString(const file_name_t& str) {
/**
* @deprecated Use OS-native conversion utilities
* @brief Conversion from single-byte character string to a possibly-wide one
+ * @param str A single-byte character string
+ * @return A possibly-wide character string
*/
INFERENCE_ENGINE_DEPRECATED("Use OS-native conversion utilities")
inline file_name_t stringToFileName(const std::string& str) {
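The deprecated helpers round-trip as follows (a sketch only; as the deprecation note says, prefer OS-native conversion utilities):

```cpp
#include <ie_unicode.hpp>
#include <string>

void roundTrip() {
    // file_name_t is std::wstring when ENABLE_UNICODE_PATH_SUPPORT is defined,
    // std::string otherwise.
    InferenceEngine::file_name_t path = InferenceEngine::stringToFileName("model.xml");
    std::string back = InferenceEngine::fileNameToString(path);  // "model.xml"
}
```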
diff --git a/inference-engine/include/ie_version.hpp b/inference-engine/include/ie_version.hpp
index 89835ba7932..b81a7c38cc7 100644
--- a/inference-engine/include/ie_version.hpp
+++ b/inference-engine/include/ie_version.hpp
@@ -23,8 +23,8 @@ struct Version {
* @brief An API version reflects the set of supported features
*/
struct {
- int major;
- int minor;
+ int major; //!< A major version
+ int minor; //!< A minor version
} apiVersion;
/**
* @brief A null terminated string with build number
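A sketch printing the newly documented fields:

```cpp
#include <ie_version.hpp>
#include <iostream>

int main() {
    const InferenceEngine::Version* v = InferenceEngine::GetInferenceEngineVersion();
    std::cout << "IE API " << v->apiVersion.major << "." << v->apiVersion.minor
              << " (" << v->buildNumber << ")" << std::endl;
    return 0;
}
```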
diff --git a/inference-engine/src/gna_plugin/gna_plugin.hpp b/inference-engine/src/gna_plugin/gna_plugin.hpp
index 838e9046b8d..d5b9e29806d 100644
--- a/inference-engine/src/gna_plugin/gna_plugin.hpp
+++ b/inference-engine/src/gna_plugin/gna_plugin.hpp
@@ -13,7 +13,7 @@
#include
#include
#include
-#include "cpp_interfaces/impl/ie_memory_state_internal.hpp"
+#include "cpp_interfaces/impl/ie_variable_state_internal.hpp"
#include "descriptions/gna_flags.hpp"
#include "descriptions/gna_input_desc.hpp"
#include "descriptions/gna_output_desc.hpp"
diff --git a/inference-engine/src/gna_plugin/memory/gna_memory_state.hpp b/inference-engine/src/gna_plugin/memory/gna_memory_state.hpp
index 2a7c83d6dae..2fc0b30c3f6 100644
--- a/inference-engine/src/gna_plugin/memory/gna_memory_state.hpp
+++ b/inference-engine/src/gna_plugin/memory/gna_memory_state.hpp
@@ -6,7 +6,7 @@
#include
#include
-#include <cpp_interfaces/impl/ie_memory_state_internal.hpp>
+#include <cpp_interfaces/impl/ie_variable_state_internal.hpp>
#include "gna_plugin.hpp"
namespace GNAPluginNS {
diff --git a/inference-engine/src/mkldnn_plugin/mkldnn_memory_state.h b/inference-engine/src/mkldnn_plugin/mkldnn_memory_state.h
index 751635b7709..999ff269783 100644
--- a/inference-engine/src/mkldnn_plugin/mkldnn_memory_state.h
+++ b/inference-engine/src/mkldnn_plugin/mkldnn_memory_state.h
@@ -4,7 +4,7 @@
#pragma once
-#include "cpp_interfaces/impl/ie_memory_state_internal.hpp"
+#include "cpp_interfaces/impl/ie_variable_state_internal.hpp"
#include "mkldnn_memory.h"
#include
diff --git a/inference-engine/src/plugin_api/caseless.hpp b/inference-engine/src/plugin_api/caseless.hpp
index b9b08046fe8..0c030053bb6 100644
--- a/inference-engine/src/plugin_api/caseless.hpp
+++ b/inference-engine/src/plugin_api/caseless.hpp
@@ -3,7 +3,8 @@
//
/**
- * @file A header file with caseless containers
+ * @file caseless.hpp
+ * @brief A header file with caseless containers
*/
#pragma once
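The caseless containers are keyed case-insensitively; a sketch, assuming plugin-API access to `caseless.hpp`:

```cpp
#include <caseless.hpp>
#include <cassert>
#include <string>

void demo() {
    InferenceEngine::details::caseless_map<std::string, int> m;
    m["CPU"] = 1;
    assert(m.find("cpu") != m.end());  // lookup ignores case

    InferenceEngine::details::CaselessEq<std::string> eq;
    assert(eq("FP16", "fp16"));
}
```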
diff --git a/inference-engine/src/plugin_api/cpp_interfaces/base/ie_executable_network_base.hpp b/inference-engine/src/plugin_api/cpp_interfaces/base/ie_executable_network_base.hpp
index b9d7833e357..c195af78193 100644
--- a/inference-engine/src/plugin_api/cpp_interfaces/base/ie_executable_network_base.hpp
+++ b/inference-engine/src/plugin_api/cpp_interfaces/base/ie_executable_network_base.hpp
@@ -10,8 +10,8 @@
#pragma once
#include
-#include
-#include
+#include
+#include
#include