DOCS: Update doxygen version (#15210)

* Update build_doc.yml

* fixing references

* fix refs

* fix branch.hpp
Sebastian Golebiewski 2023-01-20 10:22:30 +01:00 committed by GitHub
parent 326e03504a
commit ffdf31fba8
35 changed files with 104 additions and 107 deletions


@ -7,7 +7,7 @@ on:
- 'releases/**'
env:
DOXY_VER: '1.9.2'
DOXY_VER: '1.9.6'
DOXYREST_VER: '2.1.3'
concurrency:
@ -51,7 +51,7 @@ jobs:
tar -xf doxyrest-$DOXYREST_VER-linux-amd64.tar.xz
echo "$(pwd)/doxyrest-$DOXYREST_VER-linux-amd64/bin/" >> $GITHUB_PATH
# install doxygen
wget https://sourceforge.net/projects/doxygen/files/rel-$DOXY_VER/doxygen-$DOXY_VER.linux.bin.tar.gz
wget https://www.doxygen.nl/files/doxygen-$DOXY_VER.linux.bin.tar.gz
tar -xzf doxygen-$DOXY_VER.linux.bin.tar.gz
echo "$(pwd)/doxygen-$DOXY_VER/bin/" >> $GITHUB_PATH


@ -71,16 +71,16 @@ For example, if you would like to infer a model with `Convolution` operation in
> There are several supported quantization approaches on activations and on weights. All supported approaches are described in the [Quantization approaches](#quantization-approaches) section below. The demonstrated model uses the [FakeQuantize operation quantization](#fakequantize-operation) approach.
### Low precision tools
### <a name="low-precision-tools"></a> Low precision tools
For more details on how to get a quantized model, refer to [Model Optimization](@ref openvino_docs_model_optimization_guide) document.
## Quantization approaches
## <a name="quantization-approaches"></a> Quantization approaches
LPT transformations support two quantization approaches:
1. `FakeQuantize` operation,
2. Quantize and dequantization operations
Let's explore both approaches in detail using the `Convolution` operation.
### FakeQuantize operation
### <a name="fakequantize-operation"></a> FakeQuantize operation
In this case, the `FakeQuantize` operation is used on activations and a quantized constant on weights. Original input model:
![Original model with FakeQuantize](img/model_fq_and_convolution.common.png)


@ -4,7 +4,7 @@ This page provides general instructions on how to convert a model from a TensorF
To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](@ref openvino_docs_install_guides_install_dev_tools).
## Converting TensorFlow 1 Models <a name="Convert_From_TF2X"></a>
## Converting TensorFlow 1 Models <a name="Convert_From_TF1X"></a>
### Converting Frozen Model Format <a name="Convert_From_TF"></a>
To convert a TensorFlow model, use the *`mo`* script to simply convert a model with a path to the input model *`.pb`* file:
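The exact command is outside this diff hunk; as a hedged sketch with a hypothetical model path, the conversion could look like:
```sh
# Convert a frozen TensorFlow 1 graph to OpenVINO IR (the .pb path is hypothetical).
mo --input_model /path/to/frozen_inference_graph.pb --output_dir ./ir
```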


@ -4,7 +4,7 @@ You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model fo
The PyTorch implementation is publicly available in [this GitHub repository](https://github.com/dbolya/yolact).
The YOLACT++ model is not supported, because it uses deformable convolutional layers that cannot be represented in ONNX format.
## Creating a Patch File <a name="patch-file"></a>
## Creating a Patch File <a name="patch-file-yolact"></a>
Before converting the model, create a patch file for the repository.
The patch modifies the framework code by adding a special command-line argument to the framework options. The argument enables inference graph dumping:
@ -142,7 +142,7 @@ git checkout 57b8f2d95e62e2e649b382f516ab41f949b57239
**Step 3**. Export the model to ONNX format.
1. Apply the `YOLACT_onnx_export.patch` patch to the repository. Refer to the <a href="#patch-file">Create a Patch File</a> instructions if you do not have it:
1. Apply the `YOLACT_onnx_export.patch` patch to the repository. Refer to the <a href="#patch-file-yolact">Create a Patch File</a> instructions if you do not have it:
```sh
git apply /path/to/patch/YOLACT_onnx_export.patch
```


@ -4,7 +4,7 @@ This tutorial explains how to convert Google Neural Machine Translation (GNMT) m
There are several public versions of TensorFlow GNMT model implementation available on GitHub. This tutorial explains how to convert the GNMT model from the [TensorFlow Neural Machine Translation (NMT) repository](https://github.com/tensorflow/nmt) to the IR.
## Creating a Patch File <a name="patch-file"></a>
## Creating a Patch File <a name="patch-file-gnmt"></a>
Before converting the model, you need to create a patch file for the repository. The patch modifies the framework code by adding a special command-line argument to the framework options that enables inference graph dumping:
@ -164,7 +164,7 @@ This tutorial assumes the use of the trained GNMT model from `wmt16_gnmt_4_layer
OpenVINO assumes that a model is used for inference only. Hence, before converting the model into the IR, you need to transform the training graph into the inference graph.
For the GNMT model, the training graph and the inference graph have different decoders: the training graph uses a greedy search decoding algorithm, while the inference graph uses a beam search decoding algorithm.
1. Apply the `GNMT_inference.patch` patch to the repository. Refer to the <a href="#patch-file">Create a Patch File</a> instructions if you do not have it:
1. Apply the `GNMT_inference.patch` patch to the repository. Refer to the <a href="#patch-file-gnmt">Create a Patch File</a> instructions if you do not have it:
```sh
git apply /path/to/patch/GNMT_inference.patch
```
@ -217,7 +217,7 @@ Output cutting:
For more information about model cutting, refer to the [Cutting Off Parts of a Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model) guide.
## Using a GNMT Model <a name="run_GNMT"></a>
## Using a GNMT Model <a name="run_GNMT_model"></a>
> **NOTE**: This step assumes you have converted a model to the Intermediate Representation.


@ -102,7 +102,7 @@ The next step is to parse framework-dependent operation representation saved in
attributes with the operation specific attributes. There are three options to do this.
1. The extractor extension approach. This is a recommended way to extract attributes for an operation and it is
explained in details in the [Operation Extractor](#extension-extractor) section.
explained in detail in the [Operation Extractor](#operation-extractor) section.
2. The legacy approach with a built-in extractor. The `mo/front/<FRAMEWORK>/extractor.py` file (for example, the one
for Caffe) defines a dictionary with extractors for specific operation types. A key in the dictionary is a type of an
@ -586,7 +586,7 @@ only parameter and returns a string with the value to be saved to the IR. Exampl
second element is the name of the `Node` attribute to get the value from. Examples of this case are `pool-method` and
`exclude-pad`.
### Operation Extractor <a name="extension-extractor"></a>
### Operation Extractor <a name="operation-extractor"></a>
Model Optimizer runs a specific extractor for each operation in the model during model loading. For more information about this process, refer to the
[operations-attributes-extracting](#operations-attributes-extracting) section.
@ -737,7 +737,7 @@ sub-graph of the original graph isomorphic to the specified pattern.
node with a specific `op` attribute value.
3. [Generic Front Phase Transformations](#generic-front-phase-transformations).
4. Manually enabled transformation, defined with a JSON configuration file (for TensorFlow, ONNX, Apache MXNet, and PaddlePaddle models), specified using the `--transformations_config` command-line parameter:
1. [Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformation).
1. [Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformations).
2. [Front Phase Transformations Using Start and End Points](#start-end-points-front-phase-transformations).
3. [Generic Front Phase Transformations Enabled with Transformations Configuration File](#generic-transformations-config-front-phase-transformations).
@ -755,8 +755,6 @@ works differently:
required to write the transformation and connect the newly created nodes to the rest of the graph.
2. Overriding the `generate_sub_graph(self, graph, match)` method. This case is not recommended for use because it is
the most complicated approach and can be effectively replaced with one of the two previous approaches.
The explanation of this function is provided in the
[Node Name Defined Sub-Graph Transformations](#node-name-defined-sub-graph-transformations) section.
The sub-graph pattern is defined in the `pattern()` function. This function should return a dictionary with two keys:
`nodes` and `edges`:
@ -1135,7 +1133,7 @@ For other examples of transformations with points, refer to the
##### Generic Front Phase Transformations Enabled with Transformations Configuration File <a name="generic-transformations-config-front-phase-transformations"></a>
This type of transformation works similarly to the [Generic Front Phase Transformations](#generic-front-phase-transformations)
but requires a JSON configuration file to enable it, similarly to
[Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformation) and
[Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformations) and
[Front Phase Transformations Using Start and End Points](#start-end-points-front-phase-transformations).
The base class for this type of transformation is


@ -12,7 +12,7 @@
This article introduces how Automatic Device Selection works and how to use it for inference.
## How AUTO Works
## <a name="how-auto-works"></a> How AUTO Works
The Automatic Device Selection mode, or AUTO for short, uses a "virtual" or a "proxy" device,
which does not bind to a specific type of hardware, but rather selects the processing unit for inference automatically.
@ -287,7 +287,7 @@ Although the methods described above are currently the preferred way to execute
@endsphinxdirective
## Using AUTO with OpenVINO Samples and Benchmark app
## <a name="using-auto-with-openvino-samples-and-benchmark-app"></a> Using AUTO with OpenVINO Samples and Benchmark app
To see how the Auto-Device plugin is used in practice and test its performance, take a look at OpenVINO™ samples. All samples supporting the "-d" command-line option (which stands for "device") will accept the plugin out-of-the-box. The Benchmark Application is a perfect place to start, as it presents the optimal performance of the plugin without the need for additional settings, such as the number of requests or CPU threads. To evaluate AUTO performance, you can use the following commands:
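The exact commands are outside this diff hunk; a hedged sketch with a hypothetical model path might be:
```sh
# Let the AUTO virtual device select the processing unit (model path is hypothetical).
benchmark_app -m model.xml -d AUTO
# Optionally restrict AUTO to an explicit candidate list of devices.
benchmark_app -m model.xml -d AUTO:GPU,CPU
```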


@ -82,12 +82,12 @@ Notice that MULTI allows you to **change device priorities on the fly**. You can
One more thing you can define is the **number of requests to allocate for each device**. You can do it simply by adding the number to each device in parentheses, like this: `"MULTI:CPU(2),GPU(2)"`. However, this method is not recommended as it is not performance-portable. The suggested approach is to configure individual devices and query the resulting number of requests to be used at the application level, as described in [Configuring Individual Devices and Creating MULTI On Top](#configuring-the-individual-devices-and-creating-the-multi-device-on-top).
One more thing you can define is the **number of requests to allocate for each device**. You can do it simply by adding the number to each device in parentheses, like this: `"MULTI:CPU(2),GPU(2)"`. However, this method is not recommended as it is not performance-portable. The suggested approach is to configure individual devices and query the resulting number of requests to be used at the application level, as described in [Configuring Individual Devices and Creating MULTI On Top](#config-multi-on-top).
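As a hedged illustration of passing such a device list to the Benchmark Application (model path is hypothetical; per-device request counts are best handled as described above):
```sh
# Run inference on CPU and GPU together through the MULTI virtual device.
benchmark_app -m model.xml -d MULTI:CPU,GPU
```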
To check what devices are present in the system, you can use the Device API. For information on how to do it, check [Query device properties and configuration](supported_plugins/config_properties.md).
### Configuring Individual Devices and Creating the Multi-Device On Top
### <a name="config-multi-on-top"></a> Configuring Individual Devices and Creating the Multi-Device On Top
As mentioned previously, executing inference with MULTI may be set up by configuring individual devices before creating the "MULTI" device on top. It may be considered for performance reasons.
@sphinxdirective


@ -108,7 +108,7 @@ For setting relevant configuration, refer to the
[Integrate with Customer Application](../integrate_with_your_application.md) topic
(step 3 "Configure input and output").
### Supported Layers
### <a name="supported-layers"></a> Supported Layers
The following layers are supported by the plugins:
| Layers | GPU | CPU | VPU | GNA | Arm® CPU |


@ -34,7 +34,7 @@
Try out OpenVINO's capabilities with this quick start example that estimates depth in a scene using an OpenVINO monodepth model. <a href="https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F201-vision-monodepth%2F201-vision-monodepth.ipynb">Run the example in a Jupyter Notebook inside your web browser</a> to quickly see how to load a model, prepare an image, run inference on it, and display the result.
## <a name="install-openvino"></a>2. Install OpenVINO
## <a name="install-openvino-gsg"></a>2. Install OpenVINO
See the [installation overview page](./install_guides/installing-openvino-overview.md) for options to install OpenVINO and set up a development environment on your device.


@ -2,7 +2,7 @@
The guide presents a basic workflow for building and running C++ code samples in OpenVINO. Note that these steps will not work with the Python samples.
To get started, you must first install OpenVINO Runtime, install OpenVINO Development tools, and build the sample applications. See the <a href="#prerequisites">Prerequisites</a> section for instructions.
To get started, you must first install OpenVINO Runtime, install OpenVINO Development tools, and build the sample applications. See the <a href="#prerequisites-samples">Prerequisites</a> section for instructions.
Once the prerequisites have been installed, perform the following steps:
@ -11,7 +11,7 @@ Once the prerequisites have been installed, perform the following steps:
3. <a href="#download-media">Download media files to run inference.</a>
4. <a href="#run-image-classification">Run inference with the Image Classification sample application and see the results.</a>
## <a name="prerequisites"></a>Prerequisites
## <a name="prerequisites-samples"></a>Prerequisites
### Install OpenVINO Runtime


@ -30,7 +30,7 @@ for model training or creation; or installation into a new environment.
### Installation into an Existing Environment with the Source Deep Learning Framework
To install OpenVINO Development Tools (see the [What's in the Package](#whats-in-the-package) section of this article) into an existing environment
To install OpenVINO Development Tools (see the [Install the Package](#install-the-package) section of this article) into an existing environment
with the deep learning framework used for the model training or creation, run the following command:
```sh
@ -96,7 +96,7 @@ Make sure `pip` is installed in your environment and upgrade it to the latest ve
python -m pip install --upgrade pip
```
#### Step 4. Install the Package
#### Step 4. <a name="install-the-package"></a> Install the Package
To install and configure the components of the development package together with validated versions of specific frameworks, use the commands below.
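The exact commands are outside this diff hunk; a minimal sketch, assuming the TensorFlow 2 and ONNX framework extras, could be:
```sh
# Install OpenVINO Development Tools with selected framework extras (the extras chosen here are an assumption).
pip install "openvino-dev[tensorflow2,onnx]"
```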


@ -2,7 +2,7 @@
This guide provides steps on creating a Docker image with Intel® Distribution of OpenVINO™ toolkit for Linux and using the image on different devices.
## <a name="system-requirments"></a>System Requirements
## <a name="system-requirements-docker-linux"></a>System Requirements
@sphinxdirective
.. tab:: Target Operating Systems with Python Versions
@ -37,16 +37,16 @@ This guide provides steps on creating a Docker image with Intel® Distribution o
There are two ways to install OpenVINO with Docker. You can choose either of them according to your needs:
* Use a prebuilt image. Do the following steps:
1. <a href="#get-prebuilt-image">Get a prebuilt image from provided sources</a>.
2. <a href="#run-image">Run the image on different devices</a>.
3. <a href="#run-samples">(Optional) Run samples in the Docker image</a>.
1. <a href="#get-prebuilt-image-docker-linux">Get a prebuilt image from provided sources</a>.
2. <a href="#run-image-docker-linux">Run the image on different devices</a>.
3. <a href="#run-samples-docker-linux">(Optional) Run samples in the Docker image</a>.
* If you want to customize your image, you can also build a Docker image manually by using the following steps:
1. <a href="#prepare-dockerfile">Prepare a Dockerfile</a>.
2. <a href="#configure-image">Configure the Docker image</a>.
3. <a href="#run-image">Run the image on different devices</a>.
4. <a href="#run-samples">(Optional) Run samples in the Docker image</a>.
1. <a href="#prepare-dockerfile-linux">Prepare a Dockerfile</a>.
2. <a href="#configure-image-docker-linux">Configure the Docker image</a>.
3. <a href="#run-image-docker-linux">Run the image on different devices</a>.
4. <a href="#run-samples-docker-linux">(Optional) Run samples in the Docker image</a>.
## <a name="get-prebuilt-image"></a>Getting a Prebuilt Image from Provided Sources
## <a name="get-prebuilt-image-docker-linux"></a>Getting a Prebuilt Image from Provided Sources
You can find prebuilt images on:
@ -56,14 +56,14 @@ You can find prebuilt images on:
- [Red Hat Ecosystem Catalog (development image)](https://catalog.redhat.com/software/containers/intel/openvino-dev/613a450dc9bc35f21dc4a1f7)
- [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/intel_corporation.openvino)
## <a name="prepare-dockerfile"></a>Preparing a Dockerfile
## <a name="prepare-dockerfile-linux"></a>Preparing a Dockerfile
You can use the [available Dockerfiles on GitHub](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles) or generate a Dockerfile with your settings via the [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci), which can generate, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit.
You can also try our [Tutorials](https://github.com/openvinotoolkit/docker_ci/tree/master/docs/tutorials) which demonstrate the usage of Docker containers with OpenVINO.
## <a name="configure-image"></a>Configuring the Image for Different Devices
## <a name="configure-image-docker-linux"></a>Configuring the Image for Different Devices
If you want to run inferences on a CPU or Intel® Neural Compute Stick 2, no extra configuration is needed. Go to <a href="#run-image">Running the image on different devices</a> for the next step.
If you want to run inferences on a CPU or Intel® Neural Compute Stick 2, no extra configuration is needed. Go to <a href="#run-image-docker-linux">Running the image on different devices</a> for the next step.
### Configuring Docker Image for GPU
@ -112,7 +112,7 @@ RUN yum update -y && yum install -y https://dl.fedoraproject.org/pub/epel/epel-r
yum remove -y epel-release
```
## <a name="run-image"></a>Running the Docker Image on Different Devices
## <a name="run-image-docker-linux"></a>Running the Docker Image on Different Devices
### Running the Image on CPU
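As a hedged sketch of such a CPU run (the image name and tag are assumptions, not taken from this diff):
```sh
# Start an interactive container from a prebuilt OpenVINO development image; CPU needs no extra configuration.
docker run -it --rm openvino/ubuntu20_dev:latest /bin/bash
```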


@ -2,7 +2,7 @@
This guide provides steps for creating a Docker image with Intel® Distribution of OpenVINO™ toolkit for Windows and using the Docker image on different devices.
## <a name="system-requirments"></a>System Requirements
## <a name="system-requirements-docker-windows"></a>System Requirements
@sphinxdirective
.. tab:: Target Operating System with Python Versions
@ -40,25 +40,25 @@ To use GPU Acceleration in Windows containers, make sure that the following requ
There are two ways to install OpenVINO with Docker. You can choose either of them according to your needs:
* Use a prebuilt image. Do the following steps:
1. <a href="#get-prebuilt-image">Get a prebuilt image from provided sources</a>.
2. <a href="#run-image">Run the image on different devices</a>.
1. <a href="#get-prebuilt-image-docker-windows">Get a prebuilt image from provided sources</a>.
2. <a href="#run-image-docker-windows">Run the image on different devices</a>.
* If you want to customize your image, you can also build a Docker image manually by using the following steps:
1. <a href="#prepare-dockerfile">Prepare a Dockerfile</a>.
2. <a href="#configure-image">Configure the Docker image</a>.
3. <a href="#run-image">Run the image on different devices</a>.
1. <a href="#prepare-dockerfile-windows">Prepare a Dockerfile</a>.
2. <a href="#configure-image-docker-windows">Configure the Docker image</a>.
3. <a href="#run-image-docker-windows">Run the image on different devices</a>.
## <a name="get-prebuilt-image"></a>Getting a Prebuilt Image from Provided Sources
## <a name="get-prebuilt-image-docker-windows"></a>Getting a Prebuilt Image from Provided Sources
You can find prebuilt images on:
- [Docker Hub](https://hub.docker.com/u/openvino)
- [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/intel_corporation.openvino)
## <a name="prepare-dockerfile"></a>Preparing a Dockerfile
## <a name="prepare-dockerfile-windows"></a>Preparing a Dockerfile
You can use the [available Dockerfiles on GitHub](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles) or generate a Dockerfile with your settings via the [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci), which can generate, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit.
## <a name="configure-image"></a>Configuring the Docker Image for Different Devices
## <a name="configure-image-docker-windows"></a>Configuring the Docker Image for Different Devices
### Installing Additional Dependencies for CPU
@ -107,7 +107,7 @@ You can use the [available Dockerfiles on GitHub](https://github.com/openvinotoo
### <a name="config-image-for-gpu"></a>Configuring the Image for GPU
> **NOTE**: Since GPU is not supported in <a href="#get-prebuilt-image">prebuilt images</a> or [default Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles), you must make sure the Additional Requirements for GPU in <a href="#system-requirements">System Requirements</a> are met, and do the following steps to build the image manually.
> **NOTE**: Since GPU is not supported in <a href="#get-prebuilt-image-docker-windows">prebuilt images</a> or [default Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles), you must make sure the Additional Requirements for GPU in <a href="#system-requirements-docker-windows">System Requirements</a> are met, and follow the steps below to build the image manually.
1. Reuse one of [available Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles). You can also use your own Dockerfile.
2. Check your [Windows host and container isolation process compatibility](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility).
@ -130,7 +130,7 @@ You can use the [available Dockerfiles on GitHub](https://github.com/openvinotoo
copy C:\Windows\System32\OpenCL.dll C:\tmp
```
## <a name="run-image"></a>Running the Docker Image on Different Devices
## <a name="run-image-docker-windows"></a>Running the Docker Image on Different Devices
### Running the Image on CPU
@ -147,7 +147,7 @@ cmd /S /C "omz_downloader --name googlenet-v1 --precisions FP16 && omz_converter
### Running the Image on GPU
> **NOTE**: Since GPU is not supported in <a href="#get-prebuilt-image">prebuilt images</a> or [default Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles), you must make sure the Additional Requirements for GPU in <a href="#system-requirements">System Requirements</a> are met, and <a href="#config-image-for-gpu">configure and build the image manually</a> before you can run inferences on a GPU.
> **NOTE**: Since GPU is not supported in <a href="#get-prebuilt-image-docker-windows">prebuilt images</a> or [default Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles), you must make sure the Additional Requirements for GPU in <a href="#system-requirements-docker-windows">System Requirements</a> are met, and <a href="#config-image-for-gpu">configure and build the image manually</a> before you can run inferences on a GPU.
1. To try inference on a GPU, run the image with the following command:


@ -32,7 +32,7 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
## Installing OpenVINO Runtime
### <a name="install-openvino"></a>Step 1: Download and Install the OpenVINO Core Components
### <a name="install-openvino-archive-linux"></a>Step 1: Download and Install the OpenVINO Core Components
@sphinxdirective
@ -107,7 +107,7 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
Congratulations, you finished the installation! The `/opt/intel/openvino_2022` folder now contains the core components for OpenVINO. If you used a different path in Step 2, for example, `/home/<USER>/Intel/`, OpenVINO is then installed in `/home/<USER>/Intel/openvino_2022`. The path to the `openvino_2022` directory is also referred to as `<INSTALL_DIR>` throughout the OpenVINO documentation.
### <a name="set-the-environment-variables"></a>Step 2: Configure the Environment
### <a name="set-the-environment-variables-linux"></a>Step 2: Configure the Environment
You must update several environment variables before you can compile and run OpenVINO applications. Open a terminal window and run the `setupvars.sh` script as shown below to temporarily set your environment variables. If your <INSTALL_DIR> is not `/opt/intel/openvino_2022`, use the correct one instead.
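A minimal sketch of that step, assuming the default installation path mentioned above:
```sh
# Temporarily set the OpenVINO environment variables for the current shell session.
source /opt/intel/openvino_2022/setupvars.sh
```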
@ -121,14 +121,14 @@ If you have more than one OpenVINO version on your machine, you can easily switc
The environment variables are set. Continue to the next section if you want to download any additional components.
### <a name="model-optimizer">Step 3 (Optional): Install Additional Components
### <a name="model-optimizer-linux">Step 3 (Optional): Install Additional Components
OpenVINO Development Tools is a set of utilities for working with OpenVINO and OpenVINO models. It provides tools like Model Optimizer, Benchmark Tool, Post-Training Optimization Tool, and Open Model Zoo Downloader. If you install OpenVINO Runtime using archive files, OpenVINO Development Tools must be installed separately.
See the [Install OpenVINO Development Tools](installing-model-dev-tools.md) page for step-by-step installation instructions.
OpenCV is necessary to run demos from Open Model Zoo (OMZ). Some OpenVINO samples can also extend their capabilities when compiled with OpenCV as a dependency. To install OpenCV for OpenVINO, see the [instructions on GitHub](https://github.com/opencv/opencv/wiki/BuildOpenCV4OpenVINO).
### <a name="optional-steps"></a>Step 4 (Optional): Configure Inference on Non-CPU Devices
### <a name="optional-steps-linux"></a>Step 4 (Optional): Configure Inference on Non-CPU Devices
OpenVINO Runtime has a plugin architecture that enables you to run inference on multiple devices without rewriting your code. Supported devices include integrated GPUs, discrete GPUs and GNAs. See the instructions below to set up OpenVINO on these devices.
@sphinxdirective
@ -142,7 +142,7 @@ OpenVINO Runtime has a plugin architecture that enables you to run inference on
@endsphinxdirective
## <a name="get-started"></a>What's Next?
## <a name="get-started-linux"></a>What's Next?
Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications! Learn more about how to integrate a model in OpenVINO applications by trying out the following tutorials.
@sphinxdirective
@ -173,7 +173,7 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine
@endsphinxdirective
## <a name="uninstall"></a>Uninstalling the Intel® Distribution of OpenVINO™ Toolkit
## <a name="uninstall-from-linux"></a>Uninstalling the Intel® Distribution of OpenVINO™ Toolkit
To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalling-openvino.md).


@ -79,7 +79,7 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
Congratulations, you finished the installation! The `/opt/intel/openvino_2022` folder now contains the core components for OpenVINO. If you used a different path in Step 2, you will find the `openvino_2022` folder there. The path to the `openvino_2022` directory is also referred to as `<INSTALL_DIR>` throughout the OpenVINO documentation.
### <a name="set-the-environment-variables"></a>Step 2: Configure the Environment
### <a name="set-the-environment-variables-macos"></a>Step 2: Configure the Environment
You must update several environment variables before you can compile and run OpenVINO applications. Open a terminal window and run the `setupvars.sh` script as shown below to temporarily set your environment variables. If your <INSTALL_DIR> is not `/opt/intel/openvino_2022`, use the correct one instead.
@ -93,7 +93,7 @@ If you have more than one OpenVINO™ version on your machine, you can easily sw
The environment variables are set. Continue to the next section if you want to download any additional components.
### <a name="model-optimizer"></a>Step 3 (Optional): Install Additional Components
### <a name="model-optimizer-macos"></a>Step 3 (Optional): Install Additional Components
OpenVINO Development Tools is a set of utilities for working with OpenVINO and OpenVINO models. It provides tools like Model Optimizer, Benchmark Tool, Post-Training Optimization Tool, and Open Model Zoo Downloader. If you install OpenVINO Runtime using archive files, OpenVINO Development Tools must be installed separately.
@ -101,7 +101,7 @@ See the [Install OpenVINO Development Tools](installing-model-dev-tools.md) page
OpenCV is necessary to run demos from Open Model Zoo (OMZ). Some OpenVINO samples can also extend their capabilities when compiled with OpenCV as a dependency. To install OpenCV for OpenVINO, see the [instructions on GitHub](https://github.com/opencv/opencv/wiki/BuildOpenCV4OpenVINO).
## <a name="get-started"></a>What's Next?
## <a name="get-started-macos"></a>What's Next?
Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications! Learn more about how to integrate a model in OpenVINO applications by trying out the following tutorials.
@sphinxdirective
@ -132,7 +132,7 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine
@endsphinxdirective
## <a name="uninstall"></a>Uninstalling Intel® Distribution of OpenVINO™ Toolkit
## <a name="uninstall-from-macos"></a>Uninstalling Intel® Distribution of OpenVINO™ Toolkit
To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalling-openvino.md).


@ -42,7 +42,7 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
## Installing OpenVINO Runtime
### <a name="install-openvino"></a>Step 1: Download and Install OpenVINO Core Components
### <a name="install-openvino-archive-windows"></a>Step 1: Download and Install OpenVINO Core Components
1. Create an `Intel` folder in the `C:\Program Files (x86)\` directory. Skip this step if the folder already exists.
@ -81,7 +81,7 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
Congratulations, you finished the installation! The `C:\Program Files (x86)\Intel\openvino_2022` folder now contains the core components for OpenVINO. If you used a different path in Step 1, you will find the `openvino_2022` folder there. The path to the `openvino_2022` directory is also referred to as `<INSTALL_DIR>` throughout the OpenVINO documentation.
### <a name="set-the-environment-variables"></a>Step 2: Configure the Environment
### <a name="set-the-environment-variables-windows"></a>Step 2: Configure the Environment
You must update several environment variables before you can compile and run OpenVINO™ applications. Open the Command Prompt, and run the `setupvars.bat` batch file to temporarily set your environment variables. If your <INSTALL_DIR> is not `C:\Program Files (x86)\Intel\openvino_2022`, use the correct directory instead.
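A minimal sketch of that step, assuming the default installation path mentioned above (run in the Command Prompt):
```sh
REM Temporarily set the OpenVINO environment variables for the current Command Prompt session.
"C:\Program Files (x86)\Intel\openvino_2022\setupvars.bat"
```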
@ -95,7 +95,7 @@ You must update several environment variables before you can compile and run Ope
The environment variables are set. Continue to the next section if you want to download any additional components.
### <a name="model-optimizer">Step 3 (Optional): Install Additional Components</a>
### <a name="model-optimizer-windows">Step 3 (Optional): Install Additional Components</a>
OpenVINO Development Tools is a set of utilities for working with OpenVINO and OpenVINO models. It provides tools like Model Optimizer, Benchmark Tool, Post-Training Optimization Tool, and Open Model Zoo Downloader. If you install OpenVINO Runtime using archive files, OpenVINO Development Tools must be installed separately.
@ -103,7 +103,7 @@ See the [Install OpenVINO Development Tools](installing-model-dev-tools.md) page
OpenCV is necessary to run demos from Open Model Zoo (OMZ). Some OpenVINO samples can also extend their capabilities when compiled with OpenCV as a dependency. To install OpenCV for OpenVINO, see the [instructions on GitHub](https://github.com/opencv/opencv/wiki/BuildOpenCV4OpenVINO).
### <a name="optional-steps"></a>Step 4 (Optional): Configure Inference on non-CPU Devices
### <a name="optional-steps-windows"></a>Step 4 (Optional): Configure Inference on non-CPU Devices
OpenVINO Runtime has a plugin architecture that enables you to run inference on multiple devices without rewriting your code. Supported devices include integrated GPUs, discrete GPUs and GNAs. See the instructions below to set up OpenVINO on these devices.
@sphinxdirective
@ -117,7 +117,7 @@ OpenVINO Runtime has a plugin architecture that enables you to run inference on
@endsphinxdirective
## <a name="get-started"></a>What's Next?
## <a name="get-started-windows"></a>What's Next?
Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications! Learn more about how to integrate a model in OpenVINO applications by trying out the following tutorials.
@sphinxdirective
@ -148,7 +148,7 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine
@endsphinxdirective
## <a name="uninstall"></a>Uninstalling OpenVINO Runtime
## <a name="uninstall-from-windows"></a>Uninstalling OpenVINO Runtime
To uninstall OpenVINO, follow the steps on the [Uninstalling page](uninstalling-openvino.md).


@ -115,7 +115,7 @@ CMake version 3.10 or higher is required for building the OpenVINO™ toolkit sa
CMake is installed. Continue to the next section to set the environment variables.
.. _set-the-environment-variables:
.. _set-the-environment-variables-raspbian:
@endsphinxdirective


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset1`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset10`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset2`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset3`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset4`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset5`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset6`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset7`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset8`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -7,7 +7,7 @@ snippets. Such IR is generated by the Model Optimizer. The semantics match corre
declared in `namespace opset9`.
## Table of Contents <a name="toc"></a>
## Table of Contents
* [Abs](arithmetic/Abs_1.md)
* [Acos](arithmetic/Acos_1.md)


@ -120,7 +120,7 @@ For example:
| Guest VM | The Model Developer uses the Guest VM to enable access control to the completed model. <br>The Independent Software Provider uses the Guest VM to host the License Service.<br>The User uses the Guest VM to contact the License Service and run the access controlled model. |
## Prerequisites <a name="prerequisites"></a>
## Prerequisites <a name="prerequisites-ovsa"></a>
**Hardware**
* Intel® Core™ or Xeon® processor<br>
@ -140,7 +140,7 @@ This section is for the combined role of Model Developer and Independent Softwar
### Step 1: Set up Packages on the Host Machine<a name="setup-packages"></a>
Begin this step on the Intel® Core™ or Xeon® processor machine that meets the <a href="#prerequisites">prerequisites</a>.
Begin this step on the Intel® Core™ or Xeon® processor machine that meets the <a href="#prerequisites-ovsa">prerequisites</a>.
> **NOTE**: As an alternative to manually following steps 1 - 11, you can run the script `install_host_deps.sh` in the `Scripts/reference` directory under the OpenVINO™ Security Add-on repository. The script stops with an error message if it identifies any issues. If the script halts due to an error, correct the issue that caused the error and restart the script. The script runs for several minutes and provides progress information.
@ -152,7 +152,7 @@ Begin this step on the Intel® Core™ or Xeon® processor machine that meets th
* `/dev/tpm0`
* `/dev/tpmrm0`
If you do not see this information, your system does not meet the <a href="#prerequisites">prerequisites</a> to use the OpenVINO™ Security Add-on.
If you do not see this information, your system does not meet the <a href="#prerequisites-ovsa">prerequisites</a> to use the OpenVINO™ Security Add-on.
2. Make sure hardware virtualization support is enabled in the BIOS:
```sh
kvm-ok


@ -23,7 +23,7 @@ By default, the application will load the specified model onto the CPU and perfo
You may be able to improve benchmark results beyond the default configuration by configuring some of the execution parameters for your model. For example, you can use "throughput" or "latency" performance hints to optimize the runtime for higher FPS or reduced inferencing time. Read on to learn more about the configuration options available with benchmark_app.
## Configuration Options
The benchmark app provides various options for configuring execution parameters. This section covers key configuration options for easily tuning benchmarking to achieve better performance on your device. A list of all configuration options is given in the [Advanced Usage](#advanced-usage) section.
The benchmark app provides various options for configuring execution parameters. This section covers key configuration options for easily tuning benchmarking to achieve better performance on your device. A list of all configuration options is given in the [Advanced Usage](#advanced-usage-cpp-benchmark) section.
### Performance hints: latency and throughput
The benchmark app allows users to provide high-level "performance hints" for setting latency-focused or throughput-focused inference modes. This hint causes the runtime to automatically adjust runtime parameters, such as the number of processing streams and inference batch size, to prioritize for reduced latency or high throughput.
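For example, a hedged sketch with a hypothetical model path:
```sh
# Optimize the run for low latency; use "-hint throughput" for maximum FPS instead.
./benchmark_app -m model.xml -d CPU -hint latency
```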
@ -87,9 +87,9 @@ The benchmark tool runs benchmarking on user-provided input images in `.jpg`, `.
The tool will repeatedly loop through the provided inputs and run inferencing on them for the specified amount of time or number of iterations. If the `-i` flag is not used, the tool will automatically generate random data to fit the input shape of the model.
### Examples
For more usage examples (and step-by-step instructions on how to set up a model for benchmarking), see the [Examples of Running the Tool](#examples-of-running-the-tool) section.
For more usage examples (and step-by-step instructions on how to set up a model for benchmarking), see the [Examples of Running the Tool](#examples-of-running-the-tool-cpp) section.
## Advanced Usage
## <a name="advanced-usage-cpp-benchmark"></a> Advanced Usage
> **NOTE**: By default, OpenVINO samples, tools and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application or reconvert your model using the Model Optimizer tool with --reverse_input_channels argument specified. For more information about the argument, refer to When to Reverse Input Channels section of Converting a Model to Intermediate Representation (IR).
@ -102,7 +102,7 @@ The application also collects per-layer Performance Measurement (PM) counters fo
Depending on the type, the report is stored to benchmark_no_counters_report.csv, benchmark_average_counters_report.csv, or benchmark_detailed_counters_report.csv file located in the path specified in -report_folder. The application also saves executable graph information serialized to an XML file if you specify a path to it with the -exec_graph_path parameter.
### <a name="all-configuration-options"></a> All configuration options
### <a name="all-configuration-options-cpp-benchmark"></a> All configuration options
Running the application with the `-h` or `--help` option yields the following usage message:
@ -197,7 +197,7 @@ Running the application with the empty list of options yields the usage message
### More information on inputs
The benchmark tool supports topologies with one or more inputs. If a topology is not data sensitive, you can skip the input parameter, and the inputs will be filled with random values. If a model has only image input(s), provide a folder with images or a path to an image as input. If a model has some specific input(s) (besides images), please prepare a binary file(s) that is filled with data of appropriate precision and provide a path to it as input. If a model has mixed input types, the input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary files one by one.
## Examples of Running the Tool
## <a name="examples-of-running-the-tool-cpp"></a> Examples of Running the Tool
This section provides step-by-step instructions on how to run the Benchmark Tool with the `asl-recognition` model from the [Open Model Zoo](@ref model_zoo) on CPU or GPU devices. It uses random data as the input.
> **NOTE**: Internet access is required to execute the following steps successfully. If you have access to the Internet through a proxy server only, please make sure that it is configured in your OS environment.
@ -294,7 +294,7 @@ An example of the information output when running benchmark_app on CPU in latenc
[ INFO ] Max: 37.19 ms
[ INFO ] Throughput: 91.12 FPS
```
The Benchmark Tool can also be used with dynamically shaped networks to measure expected inference time for various input data shapes. See the `-shape` and `-data_shape` argument descriptions in the <a href="#all-configuration-options">All configuration options</a> section to learn more about using dynamic shapes. Here is a command example for using benchmark_app with dynamic networks and a portion of the resulting output:
The Benchmark Tool can also be used with dynamically shaped networks to measure expected inference time for various input data shapes. See the `-shape` and `-data_shape` argument descriptions in the <a href="#all-configuration-options-cpp-benchmark">All configuration options</a> section to learn more about using dynamic shapes. Here is a command example for using benchmark_app with dynamic networks and a portion of the resulting output:
```sh
./benchmark_app -m omz_models/intel/asl-recognition-0004/FP16/asl-recognition-0004.xml -d CPU -shape [-1,3,16,224,224] -data_shape [1,3,16,224,224][2,3,16,224,224][4,3,16,224,224] -pcseq


@ -21,9 +21,9 @@ Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](..
| Options | Values |
| :--- | :--- |
| Validated Models | Acoustic model based on Kaldi\* neural networks (see [Model Preparation](#model-preparation) section) |
| Validated Models | Acoustic model based on Kaldi\* neural networks (see [Model Preparation](#model-preparation-speech) section) |
| Model Format | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin) |
| Supported devices | See [Execution Modes](#execution-modes) section below and [List Supported Devices](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
| Supported devices | See [Execution Modes](#execution-modes-speech) section below and [List Supported Devices](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
## How It Works
@ -52,7 +52,7 @@ network.
>
> - It is not always possible to use 8-bit weights due to GNA hardware limitations. For example, convolutional layers always use 16-bit weights (GNA hardware version 1 and 2). This limitation will be removed in GNA hardware version 3 and higher.
#### Execution Modes
#### <a name="execution-modes-speech"></a> Execution Modes
Several execution modes are supported via the `-d` flag:
@ -122,7 +122,7 @@ Options:
Available target devices: CPU GNA GPU VPUX
```
### Model Preparation
### <a name="model-preparation-speech"></a> Model Preparation
You can use the following model optimizer command to convert a Kaldi nnet1 or nnet2 neural model to OpenVINO™ toolkit Intermediate Representation format:
@ -216,7 +216,7 @@ Kaldi's nnet-forward command. Since the `speech_sample` does not yet use pipes,
./speech_sample -d GNA_AUTO -bs 8 -i feat.ark -m wsj_dnn5b.xml -o scores.ark
```
OpenVINO™ toolkit Intermediate Representation `wsj_dnn5b.xml` file was generated in the previous [Model Preparation](#model-preparation) section.
OpenVINO™ toolkit Intermediate Representation `wsj_dnn5b.xml` file was generated in the previous [Model Preparation](#model-preparation-speech) section.
3. Run the Kaldi decoder to produce n-best text hypotheses and select most likely text given the WFST (`HCLG.fst`), vocabulary (`words.txt`), and TID/PID mapping (`final.mdl`):
```sh


@ -19,9 +19,9 @@ Basic OpenVINO™ Runtime API is covered by [Hello Classification Python* Sample
| Options | Values |
| :------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------- |
| Validated Models | Acoustic model based on Kaldi* neural models (see [Model Preparation](#model-preparation) section) |
| Validated Models | Acoustic model based on Kaldi* neural models (see [Model Preparation](#model-preparation-speech-python) section) |
| Model Format | OpenVINO™ toolkit Intermediate Representation (.xml + .bin) |
| Supported devices | See [Execution Modes](#execution-modes) section below and [List Supported Devices](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
| Supported devices | See [Execution Modes](#execution-modes-speech-python) section below and [List Supported Devices](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../samples/cpp/speech_sample/README.md) |
## How It Works
@ -51,7 +51,7 @@ model.
> - It is not always possible to use 8-bit weights due to GNA hardware limitations. For example, convolutional layers always use 16-bit weights (GNA hardware version 1 and 2). This limitation will be removed in GNA hardware version 3 and higher.
>
### Execution Modes
### <a name="execution-modes-speech-python"></a> Execution Modes
Several execution modes are supported via the `-d` flag:
@ -151,7 +151,7 @@ Options:
default value is 1.0.
```
## Model Preparation
## <a name="model-preparation-speech-python"></a> Model Preparation
You can use the following model optimizer command to convert a Kaldi nnet1 or nnet2 neural model to OpenVINO™ toolkit Intermediate Representation format:


@ -23,8 +23,6 @@ class OPENVINO_API Branch : public Pattern {
public:
OPENVINO_RTTI("patternBranch");
/// \brief Creates a Branch pattern
/// \param pattern the destinationing pattern
/// \param labels Labels where the destination may occur
Branch() : Pattern(OutputVector{}) {
set_output_type(0, element::f32, Shape{});
}


@ -21,7 +21,7 @@ By default, the application will load the specified model onto the CPU and perfo
You may be able to improve benchmark results beyond the default configuration by configuring some of the execution parameters for your model. For example, you can use "throughput" or "latency" performance hints to optimize the runtime for higher FPS or reduced inferencing time. Read on to learn more about the configuration options available with benchmark_app.
## Configuration Options
The benchmark app provides various options for configuring execution parameters. This section covers key configuration options for easily tuning benchmarking to achieve better performance on your device. A list of all configuration options is given in the [Advanced Usage](#advanced-usage) section.
The benchmark app provides various options for configuring execution parameters. This section covers key configuration options for easily tuning benchmarking to achieve better performance on your device. A list of all configuration options is given in the [Advanced Usage](#advanced-usage-python-benchmark) section.
### Performance hints: latency and throughput
The benchmark app allows users to provide high-level "performance hints" for setting latency-focused or throughput-focused inference modes. This hint causes the runtime to automatically adjust runtime parameters, such as the number of processing streams and inference batch size, to prioritize for reduced latency or high throughput.
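For example, a hedged sketch with a hypothetical model path:
```sh
# Optimize the run for maximum throughput; use "-hint latency" for reduced inference time instead.
benchmark_app -m model.xml -d CPU -hint throughput
```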
@ -85,9 +85,9 @@ The benchmark tool runs benchmarking on user-provided input images in `.jpg`, `.
The tool will repeatedly loop through the provided inputs and run inferencing on them for the specified amount of time or number of iterations. If the `-i` flag is not used, the tool will automatically generate random data to fit the input shape of the model.
### Examples
For more usage examples (and step-by-step instructions on how to set up a model for benchmarking), see the [Examples of Running the Tool](#examples-of-running-the-tool) section.
For more usage examples (and step-by-step instructions on how to set up a model for benchmarking), see the [Examples of Running the Tool](#examples-of-running-the-tool-python) section.
## Advanced Usage
## <a name="advanced-usage-python-benchmark"></a> Advanced Usage
> **NOTE**: By default, OpenVINO samples, tools and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application or reconvert your model using the Model Optimizer tool with --reverse_input_channels argument specified. For more information about the argument, refer to When to Reverse Input Channels section of Converting a Model to Intermediate Representation (IR).
@ -100,7 +100,7 @@ The application also collects per-layer Performance Measurement (PM) counters fo
Depending on the type, the report is stored to benchmark_no_counters_report.csv, benchmark_average_counters_report.csv, or benchmark_detailed_counters_report.csv file located in the path specified in -report_folder. The application also saves executable graph information serialized to an XML file if you specify a path to it with the -exec_graph_path parameter.
### <a name="all-configuration-options"></a> All configuration options
### <a name="all-configuration-options-python-benchmark"></a> All configuration options
Running the application with the `-h` or `--help` option yields the following usage message:
```
@ -235,7 +235,7 @@ Running the application with the empty list of options yields the usage message
### More information on inputs
The benchmark tool supports topologies with one or more inputs. If a topology is not data sensitive, you can skip the input parameter, and the inputs will be filled with random values. If a model has only image input(s), provide a folder with images or a path to an image as input. If a model has some specific input(s) (besides images), please prepare a binary file(s) that is filled with data of appropriate precision and provide a path to it as input. If a model has mixed input types, the input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary files one by one.
## Examples of Running the Tool
## <a name="examples-of-running-the-tool-python"></a> Examples of Running the Tool
This section provides step-by-step instructions on how to run the Benchmark Tool with the `asl-recognition` Intel model on CPU or GPU devices. It uses random data as the input.
> **NOTE**: Internet access is required to execute the following steps successfully. If you have access to the Internet through a proxy server only, please make sure that it is configured in your OS environment.
@ -330,7 +330,7 @@ An example of the information output when running benchmark_app on CPU in latenc
[ INFO ] Throughput: 89.61 FPS
```
The Benchmark Tool can also be used with dynamically shaped networks to measure expected inference time for various input data shapes. See the -shape and -data_shape argument descriptions in the <a href="#all-configuration-options">All configuration options</a> section to learn more about using dynamic shapes. Here is a command example for using benchmark_app with dynamic networks and a portion of the resulting output:
The Benchmark Tool can also be used with dynamically shaped networks to measure expected inference time for various input data shapes. See the -shape and -data_shape argument descriptions in the <a href="#all-configuration-options-python-benchmark">All configuration options</a> section to learn more about using dynamic shapes. Here is a command example for using benchmark_app with dynamic networks and a portion of the resulting output:
```sh
benchmark_app -m omz_models/intel/asl-recognition-0004/FP16/asl-recognition-0004.xml -d CPU -shape [-1,3,16,224,224] -data_shape [1,3,16,224,224][2,3,16,224,224][4,3,16,224,224] -pcseq


@ -8,9 +8,10 @@ Deep neural network find applications in many scenarios where the prediction is
x = clamp(x ; T_{low}, T_{up}) = min(max(x, T_{low}), T_{up})
\f]
where \f$T_{low}\f$ and \f$T_{up}\f$ are the lower and upper bounds for the particular protection layer, respectively.
The process flow follows the diagram [Fig 1](#Schematic). Starting from the internal representation (IR) of an OpenVINO model, the POT RangeSupervision algorithm is called to **add protection layers into the model graph**. This step requires **appropriate threshold values that are automatically extracted from a specified test dataset**. The result is an IR representation of the model with additional "RangeSupervision" layers after each supported activation layer. The original and the modified model can be called in the same way through the OpenVINO inference engine to evaluate the impact on accuracy, performance, and dependability in the presence of potential soft errors (for example using the *benchmark_app* and *accuracy_checker* functions). **The algorithm is designed to provide efficient protection at negligible performance overhead or accuracy impact in the absence of faults.** Bound extraction is a one-time effort and the protected IR model returned by the RangeSupervision algorithm can be used independently from there on. No changes in the learned parameters of the network are needed.
The process flow follows the diagram [Fig 1](#schematic-supervision). Starting from the Intermediate Representation (IR) of an OpenVINO model, the POT RangeSupervision algorithm is called to **add protection layers into the model graph**. This step requires **appropriate threshold values that are automatically extracted from a specified test dataset**. The result is an IR representation of the model with additional "RangeSupervision" layers after each supported activation layer. The original and the modified model can be called in the same way through the OpenVINO inference engine to evaluate the impact on accuracy, performance, and dependability in the presence of potential soft errors (for example using the *benchmark_app* and *accuracy_checker* functions). **The algorithm is designed to provide efficient protection at negligible performance overhead or accuracy impact in the absence of faults.** Bound extraction is a one-time effort and the protected IR model returned by the RangeSupervision algorithm can be used independently from there on. No changes in the learned parameters of the network are needed.
<a name="schematic-supervision"></a>
@anchor schematic
![Schematic](../../../../../../docs/range_supervision/images/scheme3.png)
*Fig 1: Schematic of RangeSupervision process flow.*
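As a hedged sketch of that comparison step (both IR file names are hypothetical):
```sh
# Benchmark the original and the RangeSupervision-protected IR in the same way to compare overhead.
benchmark_app -m model_original.xml -d CPU
benchmark_app -m model_protected.xml -d CPU
```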