DOCS: Update doxygen version (#15210)
* Update build_doc.yml
* fixing references
* fix refs
* fix branch.hpp
Commit ffdf31fba8 (parent 326e03504a), committed by GitHub.
@@ -23,7 +23,7 @@ By default, the application will load the specified model onto the CPU and perfo
You may be able to improve benchmark results beyond the default configuration by configuring some of the execution parameters for your model. For example, you can use "throughput" or "latency" performance hints to optimize the runtime for higher FPS or reduced inferencing time. Read on to learn more about the configuration options available with benchmark_app.
## Configuration Options
-The benchmark app provides various options for configuring execution parameters. This section covers key configuration options for easily tuning benchmarking to achieve better performance on your device. A list of all configuration options is given in the [Advanced Usage](#advanced-usage) section.
+The benchmark app provides various options for configuring execution parameters. This section covers key configuration options for easily tuning benchmarking to achieve better performance on your device. A list of all configuration options is given in the [Advanced Usage](#advanced-usage-cpp-benchmark) section.
### Performance hints: latency and throughput
The benchmark app allows users to provide high-level "performance hints" for setting latency-focused or throughput-focused inference modes. This hint causes the runtime to automatically adjust runtime parameters, such as the number of processing streams and inference batch size, to prioritize for reduced latency or high throughput.
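For illustration, a latency-focused and a throughput-focused run might look like the following sketch; the model path is a placeholder, and the `-hint` values follow the performance hints described above:

```shell
# Latency hint: runtime tunes streams and batching for low per-inference time.
./benchmark_app -m model.xml -d CPU -hint latency

# Throughput hint: runtime tunes the same parameters for maximum FPS.
./benchmark_app -m model.xml -d CPU -hint throughput
```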
@@ -87,9 +87,9 @@ The benchmark tool runs benchmarking on user-provided input images in `.jpg`, `.
The tool will repeatedly loop through the provided inputs and run inferencing on them for the specified amount of time or number of iterations. If the `-i` flag is not used, the tool will automatically generate random data to fit the input shape of the model.
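As a sketch of the behavior described above (paths and values are placeholders), the input source and run length can be controlled like this:

```shell
# Loop over a folder of images for 30 seconds.
./benchmark_app -m model.xml -i images/ -t 30

# No -i given: random data matching the model's input shape is generated;
# run a fixed number of iterations instead of a fixed duration.
./benchmark_app -m model.xml -niter 100
```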
### Examples
-For more usage examples (and step-by-step instructions on how to set up a model for benchmarking), see the [Examples of Running the Tool](#examples-of-running-the-tool) section.
+For more usage examples (and step-by-step instructions on how to set up a model for benchmarking), see the [Examples of Running the Tool](#examples-of-running-the-tool-cpp) section.
-## Advanced Usage
+## <a name="advanced-usage-cpp-benchmark"></a> Advanced Usage
> **NOTE**: By default, OpenVINO samples, tools and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application or reconvert your model using the Model Optimizer tool with --reverse_input_channels argument specified. For more information about the argument, refer to When to Reverse Input Channels section of Converting a Model to Intermediate Representation (IR).
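A hedged sketch of the reconversion mentioned in the note, using the Model Optimizer `--reverse_input_channels` flag (the model path and output directory are placeholders):

```shell
# Reconvert the model so the resulting IR expects RGB channel order
# instead of the BGR order the samples assume by default.
mo --input_model model.onnx --reverse_input_channels --output_dir ir/
```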
@@ -102,7 +102,7 @@ The application also collects per-layer Performance Measurement (PM) counters fo
Depending on the type, the report is stored to benchmark_no_counters_report.csv, benchmark_average_counters_report.csv, or benchmark_detailed_counters_report.csv file located in the path specified in -report_folder. The application also saves executable graph information serialized to an XML file if you specify a path to it with the -exec_graph_path parameter.
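For example (folder and file paths are placeholders), the reporting flags described above might be combined as:

```shell
# Save per-layer average PM counters to reports/ and serialize the
# executable graph to an XML file.
./benchmark_app -m model.xml -d CPU \
    -report_type average_counters -report_folder reports/ \
    -exec_graph_path exec_graph.xml
```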
-### <a name="all-configuration-options"></a> All configuration options
+### <a name="all-configuration-options-cpp-benchmark"></a> All configuration options
Running the application with the `-h` or `--help` option yields the following usage message:
@@ -197,7 +197,7 @@ Running the application with the empty list of options yields the usage message
### More information on inputs
The benchmark tool supports topologies with one or more inputs. If a topology is not data sensitive, you can skip the input parameter, and the inputs will be filled with random values. If a model has only image input(s), provide a folder with images or a path to an image as input. If a model has some specific input(s) (besides images), prepare binary file(s) filled with data of the appropriate precision and provide a path to them as input. If a model has mixed input types, the input folder should contain all required files. Image inputs are filled with image files one by one; binary inputs are filled with binary files one by one.
-## Examples of Running the Tool
+## <a name="examples-of-running-the-tool-cpp"></a> Examples of Running the Tool
This section provides step-by-step instructions on how to run the Benchmark Tool with the `asl-recognition` model from the [Open Model Zoo](@ref model_zoo) on CPU or GPU devices. It uses random data as the input.
> **NOTE**: Internet access is required to execute the following steps successfully. If you have access to the Internet through a proxy server only, please make sure that it is configured in your OS environment.
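On Linux, the proxy mentioned in the note is typically configured through environment variables (the server address is a placeholder):

```shell
# Route HTTP and HTTPS traffic through the corporate proxy.
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
```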
@@ -294,7 +294,7 @@ An example of the information output when running benchmark_app on CPU in latenc
[ INFO ] Max: 37.19 ms
[ INFO ] Throughput: 91.12 FPS
```
-The Benchmark Tool can also be used with dynamically shaped networks to measure expected inference time for various input data shapes. See the `-shape` and `-data_shape` argument descriptions in the <a href="#all-configuration-options">All configuration options</a> section to learn more about using dynamic shapes. Here is a command example for using benchmark_app with dynamic networks and a portion of the resulting output:
+The Benchmark Tool can also be used with dynamically shaped networks to measure expected inference time for various input data shapes. See the `-shape` and `-data_shape` argument descriptions in the <a href="#all-configuration-options-cpp-benchmark">All configuration options</a> section to learn more about using dynamic shapes. Here is a command example for using benchmark_app with dynamic networks and a portion of the resulting output:
```sh
./benchmark_app -m omz_models/intel/asl-recognition-0004/FP16/asl-recognition-0004.xml -d CPU -shape [-1,3,16,224,224] -data_shape [1,3,16,224,224][2,3,16,224,224][4,3,16,224,224] -pcseq
@@ -21,9 +21,9 @@ Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](..
| Options | Values |
| :--- | :--- |
-| Validated Models | Acoustic model based on Kaldi\* neural networks (see [Model Preparation](#model-preparation) section) |
+| Validated Models | Acoustic model based on Kaldi\* neural networks (see [Model Preparation](#model-preparation-speech) section) |
| Model Format | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin) |
-| Supported devices | See [Execution Modes](#execution-modes) section below and [List Supported Devices](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
+| Supported devices | See [Execution Modes](#execution-modes-speech) section below and [List Supported Devices](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
## How It Works
@@ -52,7 +52,7 @@ network.
>
> - It is not always possible to use 8-bit weights due to GNA hardware limitations. For example, convolutional layers always use 16-bit weights (GNA hardware version 1 and 2). This limitation will be removed in GNA hardware version 3 and higher.
-#### Execution Modes
+#### <a name="execution-modes-speech"></a> Execution Modes
Several execution modes are supported via the `-d` flag:
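For illustration (file names are placeholders; `GNA_AUTO` and `CPU` are device strings used elsewhere in this document), two common modes are:

```shell
# Run on GNA hardware if present, falling back to software emulation otherwise.
./speech_sample -d GNA_AUTO -m model.xml -i input.ark -o scores.ark

# Run entirely on the CPU, e.g. to produce reference results.
./speech_sample -d CPU -m model.xml -i input.ark -o scores.ark
```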
@@ -122,7 +122,7 @@ Options:
Available target devices: CPU GNA GPU VPUX
```
-### Model Preparation
+### <a name="model-preparation-speech"></a> Model Preparation
You can use the following model optimizer command to convert a Kaldi nnet1 or nnet2 neural model to OpenVINO™ toolkit Intermediate Representation format:
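A sketch of such a conversion command (file names are placeholders; exact flags should be checked against `mo --help`):

```shell
# Convert a Kaldi nnet1 model to IR. --remove_output_softmax drops the final
# SoftMax layer, which Kaldi decoders typically apply themselves.
mo --input_model wsj_dnn5b.nnet --remove_output_softmax --output_dir ir/
```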
@@ -216,7 +216,7 @@ Kaldi's nnet-forward command. Since the `speech_sample` does not yet use pipes,
./speech_sample -d GNA_AUTO -bs 8 -i feat.ark -m wsj_dnn5b.xml -o scores.ark
```
-OpenVINO™ toolkit Intermediate Representation `wsj_dnn5b.xml` file was generated in the previous [Model Preparation](#model-preparation) section.
+OpenVINO™ toolkit Intermediate Representation `wsj_dnn5b.xml` file was generated in the previous [Model Preparation](#model-preparation-speech) section.
3. Run the Kaldi decoder to produce n-best text hypotheses and select most likely text given the WFST (`HCLG.fst`), vocabulary (`words.txt`), and TID/PID mapping (`final.mdl`):
```sh