Added deployment guide (#11060)
* Added deployment guide
* Added local distribution
* Updates
* Fixed more indentations
@@ -236,11 +236,11 @@ This step is optional. It modifies the nGraph function to a device-specific oper

Let's explore the quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model. Use the [Model Downloader](@ref omz_tools_downloader) tool to download the `fp16` model from the [OpenVINO™ Toolkit - Open Model Zoo repository](https://github.com/openvinotoolkit/open_model_zoo):

```sh
omz_downloader --name resnet-50-tf --precisions FP16-INT8
```

After that, quantize the model with the [Model Quantizer](@ref omz_tools_downloader) tool:

```sh
omz_quantizer --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
```

### Inference

||||
@@ -1,4 +1,4 @@

# Deployment Manager {#openvino_docs_install_guides_deployment_manager_tool}

The Deployment Manager is a Python* command-line tool that creates a deployment package by assembling the model, IR files, your application, and associated dependencies into a runtime package for your target device. The tool is delivered within the Intel® Distribution of OpenVINO™ toolkit for Linux*, Windows*, and macOS* release packages and is available after installation in the `<INSTALL_DIR>/tools/deployment_manager` directory.

@@ -6,17 +6,18 @@ The Deployment Manager is a Python* command-line tool that creates a deployment

* Intel® Distribution of OpenVINO™ toolkit
* To run inference on a target device other than CPU, device drivers must be pre-installed:
   * **For Linux**, see the following sections in the [installation instructions for Linux](../../install_guides/installing-openvino-linux.md):
      * Steps for [Intel® Processor Graphics (GPU)](../../install_guides/configurations-for-intel-gpu.md)
      * Steps for [Intel® Neural Compute Stick 2](../../install_guides/configurations-for-ncs2.md)
      * Steps for [Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](../../install_guides/installing-openvino-config-ivad-vpu.md)
      * Steps for [Intel® Gaussian & Neural Accelerator (GNA)](../../install_guides/configurations-for-intel-gna.md)
   * **For Windows**, see the following sections in the [installation instructions for Windows](../../install_guides/installing-openvino-windows.md):
      * Steps for [Intel® Processor Graphics (GPU)](../../install_guides/configurations-for-intel-gpu.md)
      * Steps for [Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](../../install_guides/installing-openvino-config-ivad-vpu.md)
   * **For macOS**, see the following section in the [installation instructions for macOS](../../install_guides/installing-openvino-macos.md):
      * Steps for [Intel® Neural Compute Stick 2](../../install_guides/configurations-for-ncs2.md)

> **IMPORTANT**: The operating system on the target system must be the same as the development system on which you create the deployment package. For example, if the target system is Ubuntu 18.04, the deployment package must be created from the OpenVINO™ toolkit installed on Ubuntu 18.04.

> **TIP**: If your application requires additional dependencies, such as the Microsoft Visual C++ Redistributable, use the [`--user_data` option](https://docs.openvino.ai/latest/openvino_docs_install_guides_deployment_manager_tool.html#run-standard-cli-mode) to add them to the deployment archive. Install these dependencies on the target host before running inference.

||||
@@ -31,77 +32,74 @@ There are two ways to create a deployment package that includes inference-relate

.. raw:: html

   <div class="collapsible-section" data-title="Click to expand/collapse">

@endsphinxdirective

Interactive mode provides a user-friendly command-line interface that guides you through the process with text prompts.

1. To launch the Deployment Manager in interactive mode, open a new terminal window, go to the Deployment Manager tool directory, and run the tool script without parameters:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      cd <INSTALL_DIR>/tools/deployment_manager
      ./deployment_manager.py

.. tab:: Windows

   .. code-block:: bat

      cd <INSTALL_DIR>\tools\deployment_manager
      .\deployment_manager.py

.. tab:: macOS

   .. code-block:: sh

      cd <INSTALL_DIR>/tools/deployment_manager
      ./deployment_manager.py

@endsphinxdirective

2. The target device selection dialog is displayed:

![Deployment Manager selection dialog]

Use the options provided on the screen to complete the selection of target devices and press **Enter** to proceed to the package generation dialog. If you want to interrupt the generation process and exit the program, type **q** and press **Enter**.

3. Once you accept the selection, the package generation dialog is displayed:

![Deployment Manager configuration dialog]

The target devices you have selected at the previous step appear on the screen. To go back and change the selection, type **b** and press **Enter**. Use the options provided to configure the generation process, or use the default settings.

* `o. Change output directory` (optional): Path to the output directory. By default, it is set to your home directory.

* `u. Provide (or change) path to folder with user data` (optional): Path to a directory with user data (IRs, models, datasets, etc.) files and subdirectories required for inference, which will be added to the deployment archive. By default, it is set to `None`, which means you will copy the user data to the target system separately.

* `t. Change archive name` (optional): Deployment archive name without the extension. By default, it is set to `openvino_deployment_package`.

4. Once all the parameters are set, type **g** and press **Enter** to generate the package for the selected target devices. To interrupt the generation process and exit the program, type **q** and press **Enter**.

The script successfully completes and the deployment package is generated in the specified output directory.

@sphinxdirective

.. raw:: html

   </div>

@endsphinxdirective

### Run Standard CLI Mode

@sphinxdirective

.. raw:: html

   <div class="collapsible-section" data-title="Click to expand/collapse">

@endsphinxdirective

Alternatively, you can run the Deployment Manager tool in the standard CLI mode. In this mode, you specify the target devices and other parameters as command-line arguments of the Deployment Manager Python script. This mode facilitates integrating the tool into an automation pipeline.
@@ -113,29 +111,29 @@ To launch the Deployment Manager tool in the standard mode, open a new terminal

.. tab:: Linux

   .. code-block:: sh

      cd <INSTALL_DIR>/tools/deployment_manager
      ./deployment_manager.py <--targets> [--output_dir] [--archive_name] [--user_data]

.. tab:: Windows

   .. code-block:: bat

      cd <INSTALL_DIR>\tools\deployment_manager
      .\deployment_manager.py <--targets> [--output_dir] [--archive_name] [--user_data]

.. tab:: macOS

   .. code-block:: sh

      cd <INSTALL_DIR>/tools/deployment_manager
      ./deployment_manager.py <--targets> [--output_dir] [--archive_name] [--user_data]

@endsphinxdirective

The following options are available:
* `<--targets>` (required): List of target devices to run inference. To specify more than one target, separate them with spaces. For example: `--targets cpu gpu vpu`. You can get a list of currently available targets by running the program with the `-h` option.
* `[--output_dir]` (optional): Path to the output directory. By default, it is set to your home directory.
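Because the standard CLI mode is meant for automation pipelines, the invocation can also be assembled programmatically. A minimal Python sketch; the tool path and parameter values are placeholder assumptions for the example:

```python
# Illustrative helper for a CI script: build the Deployment Manager argv
# from pipeline parameters. Paths and values here are assumptions.
import shlex

def build_dm_command(targets, output_dir=None, archive_name=None, user_data=None):
    """Assemble the deployment_manager.py command line."""
    cmd = ["./deployment_manager.py", "--targets", *targets]
    if output_dir:
        cmd += ["--output_dir", output_dir]
    if archive_name:
        cmd += ["--archive_name", archive_name]
    if user_data:
        cmd += ["--user_data", user_data]
    return cmd

cmd = build_dm_command(["cpu", "gpu"], output_dir=".", archive_name="my_package")
print(shlex.join(cmd))
# → ./deployment_manager.py --targets cpu gpu --output_dir . --archive_name my_package
```

The resulting list can be passed to `subprocess.run` on the development machine.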
@@ -147,46 +145,45 @@ The script successfully completes, and the deployment package is generated in th

@sphinxdirective

.. raw:: html

   </div>

@endsphinxdirective

## Deploy Package on Target Systems

After the Deployment Manager has successfully completed, you can find the generated `.tar.gz` (for Linux or macOS) or `.zip` (for Windows) package in the output directory you specified.

To deploy the OpenVINO Runtime components from the development machine to the target system, perform the following steps:
1. Copy the generated archive to the target system using your preferred method.
2. Unpack the archive into the destination directory on the target system. If your archive name differs from the default shown below, replace `openvino_deployment_package` with the name you used.

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      tar xf openvino_deployment_package.tar.gz -C <destination_dir>

.. tab:: Windows

   .. code-block:: bat

      Use the archiver of your choice to unzip the file.

.. tab:: macOS

   .. code-block:: sh

      tar xf openvino_deployment_package.tar.gz -C <destination_dir>

@endsphinxdirective
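In an automated provisioning script, the unpack step can also be done with Python's standard library. A minimal sketch; the archive built here is a stand-in for the real Deployment Manager package, so the layout (`openvino/runtime`, `openvino/setupvars.sh`) is an assumption for the example:

```python
# Sketch of the unpack step using only the standard library. The archive
# is created on the fly to mimic a deployment package; in practice you
# would unpack the package produced by the Deployment Manager.
import os
import tarfile
import tempfile

work = tempfile.mkdtemp()

# Build a stand-in archive that mimics the package layout.
pkg_root = os.path.join(work, "openvino")
os.makedirs(os.path.join(pkg_root, "runtime"))
open(os.path.join(pkg_root, "setupvars.sh"), "w").close()
archive = os.path.join(work, "openvino_deployment_package.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(pkg_root, arcname="openvino")

# The unpack step, equivalent to: tar xf openvino_deployment_package.tar.gz -C <destination_dir>
destination_dir = os.path.join(work, "target")
os.makedirs(destination_dir)
with tarfile.open(archive) as tar:
    tar.extractall(destination_dir)

unpacked = sorted(os.listdir(os.path.join(destination_dir, "openvino")))
print(unpacked)  # → ['runtime', 'setupvars.sh']
```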

The package is unpacked to the destination directory and the following files and subdirectories are created:

* `setupvars.sh` — Copy of `setupvars.sh` from the OpenVINO installation directory, which sets the environment variables on the target system.
* `runtime` — Contains the OpenVINO runtime binary files.
* `install_dependencies` — Snapshot of the `install_dependencies` directory from the OpenVINO installation directory.
@@ -197,32 +194,31 @@ The package is unpacked to the destination directory and the following files and
cd <destination_dir>/openvino/install_dependencies
sudo -E ./install_openvino_dependencies.sh
```

4. Set up the environment variables:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      cd <destination_dir>/openvino/
      source ./setupvars.sh

.. tab:: Windows

   .. code-block:: bat

      cd <destination_dir>\openvino\
      .\setupvars.bat

.. tab:: macOS

   .. code-block:: sh

      cd <destination_dir>/openvino/
      source ./setupvars.sh

@endsphinxdirective

You have now finished the deployment of the OpenVINO Runtime components to the target system.

51 docs/OV_Runtime_UG/deployment/deployment_intro.md Normal file
@@ -0,0 +1,51 @@

# Deploy with OpenVINO {#openvino_deployment_guide}

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_install_guides_deployment_manager_tool
   openvino_docs_deploy_local_distribution

@endsphinxdirective

Once [OpenVINO application development](../integrate_with_your_application.md) has been finished, application developers usually need to deploy their applications to end users. There are several ways to achieve that:

- Set a dependency on existing prebuilt packages (so-called _centralized distribution_):
   - Using Debian / RPM packages, the recommended way for the Linux family of operating systems
   - Using the pip package manager on PyPI, the default approach for Python-based applications
   - Using Docker images
- Ship the necessary OpenVINO functionality together with your application (so-called _local distribution_):
   - Using the [OpenVINO Deployment Manager](deployment-manager-tool.md), which provides a convenient way to create a distribution package
   - Using the advanced [Local distribution](local-distribution.md) approach
   - Using a [static version of OpenVINO Runtime linked into the final app](https://github.com/openvinotoolkit/openvino/wiki/StaticLibraries)

The table below shows which distribution type can be used depending on the target operating system:

| Distribution type | Operating systems |
|-------------------|-------------------|
| Debian packages | Ubuntu 18.04 long-term support (LTS), 64-bit; Ubuntu 20.04 long-term support (LTS), 64-bit |
| RPM packages | Red Hat Enterprise Linux 8, 64-bit |
| Docker images | All operating systems |
| PyPI (pip package manager) | All operating systems |
| [OpenVINO Deployment Manager](deployment-manager-tool.md) | All operating systems |
| [Local distribution](local-distribution.md) | All operating systems |
| [Build OpenVINO statically and link into the final app](https://github.com/openvinotoolkit/openvino/wiki/StaticLibraries) | All operating systems |


Depending on the distribution type, the granularity of OpenVINO packages may vary: the PyPI distribution of [OpenVINO has a single 'openvino' package](https://pypi.org/project/openvino/) containing all the runtime libraries and plugins, while more configurable ways like [Local distribution](local-distribution.md) provide higher granularity. It is therefore important to know some details about the set of libraries that are part of the OpenVINO Runtime package:

![deployment_simplified]

- The main library `openvino` is the one C++ applications link against. It provides the whole public OpenVINO Runtime API, covering the OpenVINO API 2.0 as well as the Inference Engine and nGraph APIs. C applications additionally require `openvino_c` in the distribution.
- The _optional_ plugin libraries like `openvino_intel_cpu_plugin` (matching the `openvino_.+_plugin` pattern) provide inference capabilities on specific devices or additional capabilities like [Hetero execution](../hetero_execution.md) and [Multi-Device execution](../multi_device.md).
- The _optional_ frontend libraries like `openvino_ir_frontend` (matching the `openvino_.+_frontend` pattern) provide capabilities to read models in different file formats such as OpenVINO IR, ONNX or Paddle.

_Optional_ here means that if the application does not use the capability enabled by a plugin, the plugin library or the package containing it is not needed in the final distribution.
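The naming patterns quoted above can be checked mechanically. A small illustrative sketch that classifies a library name by those patterns (the classification helper itself is an assumption, not part of OpenVINO):

```python
import re

# Classify an OpenVINO Runtime library name using the naming patterns
# quoted above; 'core' covers the mandatory openvino / openvino_c libraries.
def classify_library(name: str) -> str:
    if re.fullmatch(r"openvino_.+_plugin", name):
        return "plugin"    # optional: device or execution capability
    if re.fullmatch(r"openvino_.+_frontend", name):
        return "frontend"  # optional: model reading capability
    return "core"

print(classify_library("openvino_intel_cpu_plugin"))  # → plugin
print(classify_library("openvino_ir_frontend"))       # → frontend
print(classify_library("openvino"))                   # → core
```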

The information above covers the granularity aspects of the majority of distribution types; more detailed information is only needed for, and provided in, [Local Distribution](local-distribution.md).

> **NOTE**: Depending on the target OpenVINO devices, you also have to follow [Configurations for GPU](../../install_guides/configurations-for-intel-gpu.md), [Configurations for GNA](../../install_guides/configurations-for-intel-gna.md), [Configurations for NCS2](../../install_guides/configurations-for-ncs2.md) or [Configurations for VPU](../../install_guides/installing-openvino-config-ivad-vpu.md) for proper configuration of the deployed machines.

[deployment_simplified]: ../../img/deployment_simplified.png


94 docs/OV_Runtime_UG/deployment/local-distribution.md Normal file
@@ -0,0 +1,94 @@

# Local distribution {#openvino_docs_deploy_local_distribution}

The local distribution implies that each C or C++ application / installer has its own copies of the OpenVINO Runtime binaries. However, OpenVINO has a scalable plugin-based architecture, which means that some components can be loaded at runtime only when they are really needed. It is therefore important to understand the minimal set of libraries actually required to deploy an application, and this guide helps you achieve that.

> **NOTE**: The steps below are operating-system independent and refer to library file names without any prefixes (like `lib` on Unix systems) or suffixes (like `.dll` on Windows). Do not put `.lib` files on Windows into the distribution, because such files are needed only at the build / link stage.

Local distribution is also appropriate for OpenVINO binaries built from source using the [build instructions](https://github.com/openvinotoolkit/openvino/wiki#how-to-build), but the guide below assumes that OpenVINO Runtime is built dynamically. For the [static OpenVINO Runtime](https://github.com/openvinotoolkit/openvino/wiki/StaticLibraries) case, select the required OpenVINO capabilities at the CMake configuration stage using [CMake Options for Custom Compilation](https://github.com/openvinotoolkit/openvino/wiki/CMakeOptionsForCustomCompilation), then build and link the OpenVINO components into the final application.

### C++ or C language

Independently of the language used to write the application, the `openvino` library must always be put into the final distribution, since it is the core library that orchestrates all the inference and frontend plugins.
If your application is written in C, you additionally need to put `openvino_c`.

The `plugins.xml` file with information about inference devices must also be shipped as a support file for `openvino`.

> **NOTE**: In the Intel Distribution of OpenVINO, `openvino` depends on the TBB libraries, which OpenVINO Runtime uses to optimally saturate the devices with computations, so TBB must also be put into the distribution package.
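The mandatory part of a local distribution can be summed up as a small helper. An illustrative sketch; the `tbb` entry is a placeholder for the platform-specific TBB library files mentioned in the note above:

```python
def base_distribution(language: str) -> list:
    """Minimal set of OpenVINO files every local distribution needs.

    'tbb' stands in for the platform-specific TBB libraries; exact file
    names vary per platform.
    """
    files = ["openvino", "plugins.xml", "tbb"]
    if language == "c":
        files.append("openvino_c")  # C applications link against openvino_c
    return files

print(base_distribution("c++"))  # → ['openvino', 'plugins.xml', 'tbb']
print(base_distribution("c"))    # → ['openvino', 'plugins.xml', 'tbb', 'openvino_c']
```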

### Pluggable components

The picture below demonstrates dependencies between the OpenVINO Runtime core and pluggable libraries:

![deployment_full]

#### Compute devices

For each inference device, OpenVINO Runtime has its own plugin library:
- `openvino_intel_cpu_plugin` for [Intel CPU devices](../supported_plugins/CPU.md)
- `openvino_intel_gpu_plugin` for [Intel GPU devices](../supported_plugins/GPU.md)
   - Has the `OpenCL` library as a dependency
- `openvino_intel_gna_plugin` for [Intel GNA devices](../supported_plugins/GNA.md)
   - Has the `gna` backend library as a dependency
- `openvino_intel_myriad_plugin` for [Intel MYRIAD devices](../supported_plugins/MYRIAD.md)
   - Has the `usb` library as a dependency
- `openvino_intel_hddl_plugin` for [Intel HDDL device](../supported_plugins/HDDL.md)
   - Has libraries from `runtime/3rdparty/hddl` as a dependency
- `openvino_arm_cpu_plugin` for [ARM CPU devices](../supported_plugins/ARM_CPU.md)

Depending on which devices are used in the app, put the appropriate libraries into the distribution package.
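The device-to-plugin mapping above can be expressed as a lookup table. A sketch; the dependency column simply repeats the list above:

```python
# Map each inference device to its plugin library and extra native
# dependencies, per the list above.
DEVICE_PLUGINS = {
    "CPU":     ("openvino_intel_cpu_plugin", []),
    "GPU":     ("openvino_intel_gpu_plugin", ["OpenCL"]),
    "GNA":     ("openvino_intel_gna_plugin", ["gna"]),
    "MYRIAD":  ("openvino_intel_myriad_plugin", ["usb"]),
    "HDDL":    ("openvino_intel_hddl_plugin", ["runtime/3rdparty/hddl"]),
    "ARM CPU": ("openvino_arm_cpu_plugin", []),
}

def libraries_for_devices(devices):
    """Collect plugin libraries and their dependencies for the given devices."""
    libs = []
    for device in devices:
        plugin, deps = DEVICE_PLUGINS[device]
        libs.append(plugin)
        libs.extend(deps)
    return libs

print(libraries_for_devices(["CPU", "GPU"]))
# → ['openvino_intel_cpu_plugin', 'openvino_intel_gpu_plugin', 'OpenCL']
```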

#### Execution capabilities

The `HETERO`, `MULTI`, `BATCH` and `AUTO` execution capabilities can also be used explicitly or implicitly by the application. Use the following recommendations to decide whether to put the corresponding libraries into the distribution package:

- If [AUTO](../auto_device_selection.md) is used explicitly in the application or `ov::Core::compile_model` is used without specifying a device, put `openvino_auto_plugin` into the distribution

   > **NOTE**: Automatic device selection relies on [inference device plugins](../supported_plugins/Device_Plugins.md), so if you are not sure which inference devices are available on the target machine, put all inference plugin libraries into the distribution. If `ov::device::priorities` is used to give `AUTO` a limited device list, grab only the corresponding device plugins.

- If [MULTI](../multi_device.md) is used explicitly, put `openvino_auto_plugin` into the distribution
- If [HETERO](../hetero_execution.md) is used explicitly, put `openvino_hetero_plugin` into the distribution
- If [BATCH](../automatic_batching.md) is either used explicitly or `ov::hint::performance_mode` is used with GPU, put `openvino_auto_batch_plugin` into the distribution
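These recommendations amount to a small decision function. An illustrative sketch; the flag names are made up for the example, and the plugin names follow the `openvino_auto_*` library naming used in this guide:

```python
def execution_plugins(uses_auto=False, compiles_without_device=False,
                      uses_multi=False, uses_hetero=False,
                      uses_batch=False, throughput_on_gpu=False):
    """Decide which execution-capability plugins to ship, per the rules above."""
    plugins = set()
    if uses_auto or compiles_without_device:
        plugins.add("openvino_auto_plugin")
    if uses_multi:
        plugins.add("openvino_auto_plugin")  # MULTI lives in the auto plugin
    if uses_hetero:
        plugins.add("openvino_hetero_plugin")
    if uses_batch or throughput_on_gpu:
        plugins.add("openvino_auto_batch_plugin")
    return sorted(plugins)

print(execution_plugins(uses_multi=True, throughput_on_gpu=True))
# → ['openvino_auto_batch_plugin', 'openvino_auto_plugin']
```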

#### Reading models

OpenVINO Runtime uses frontend libraries dynamically to read models in different formats:
- To read OpenVINO IR: `openvino_ir_frontend`
- To read the ONNX file format: `openvino_onnx_frontend`
- To read the Paddle file format: `openvino_paddle_frontend`

Depending on which model file formats the application passes to `ov::Core::read_model`, pick the appropriate frontend libraries.
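The choice can be sketched as a simple extension-based lookup; the extension-to-format association used here is an illustrative assumption:

```python
import os

# Map a model file extension to the frontend library that reads it.
# The extension-to-format mapping is an assumption for the example.
FRONTENDS = {
    ".xml": "openvino_ir_frontend",          # OpenVINO IR
    ".onnx": "openvino_onnx_frontend",       # ONNX
    ".pdmodel": "openvino_paddle_frontend",  # Paddle
}

def frontend_for(model_path: str) -> str:
    """Return the frontend library needed to read the given model file."""
    ext = os.path.splitext(model_path)[1].lower()
    return FRONTENDS[ext]

print(frontend_for("model.onnx"))  # → openvino_onnx_frontend
```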

> **NOTE**: The recommended way to optimize the size of the final distribution package is to [convert models with Model Optimizer](../../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to OpenVINO IR; in this case you do not have to keep the ONNX, Paddle and other frontend libraries in the distribution package.

#### (Legacy) Preprocessing via G-API

> **NOTE**: [G-API](../../gapi/gapi_intro.md) preprocessing is legacy functionality; use the [preprocessing capabilities from OpenVINO 2.0](../preprocessing_overview.md), which do not require any additional libraries.

If the application uses the `InferenceEngine::PreProcessInfo::setColorFormat` or `InferenceEngine::PreProcessInfo::setResizeAlgorithm` methods, OpenVINO Runtime dynamically loads the `openvino_gapi_preproc` plugin to perform preprocessing via G-API.
### Examples

#### CPU + IR in C-written application

A C application performs inference on CPU and reads models stored as OpenVINO IR:
- The `openvino_c` library is the main dependency of the application, which links against it
- `openvino` is a private dependency of `openvino_c` and is also used in the deployment
- `openvino_intel_cpu_plugin` is used for inference
- `openvino_ir_frontend` is used to read the source model

#### MULTI execution on GPU and MYRIAD in throughput mode

A C++ application performs inference [simultaneously on GPU and MYRIAD devices](../multi_device.md) with the `ov::hint::PerformanceMode::THROUGHPUT` property and reads models stored in the ONNX file format:
- The `openvino` library is the main dependency of the application, which links against it
- `openvino_intel_gpu_plugin` and `openvino_intel_myriad_plugin` are used for inference
- `openvino_auto_plugin` is used for `MULTI` multi-device execution
- `openvino_auto_batch_plugin` can also be put into the distribution to improve the saturation of the [Intel GPU](../supported_plugins/GPU.md) device. If the plugin is absent, [automatic batching](../automatic_batching.md) is turned off.
- `openvino_onnx_frontend` is used to read the source model

#### Auto device selection between HDDL and CPU

A C++ application performs inference with [automatic device selection](../auto_device_selection.md), with the device list limited to HDDL and CPU; the model is [created using C++ code](../model_representation.md):
- The `openvino` library is the main dependency of the application, which links against it
- `openvino_auto_plugin` is used to enable the automatic device selection feature
- `openvino_intel_hddl_plugin` and `openvino_intel_cpu_plugin` are used for inference; `AUTO` selects between the CPU and HDDL devices according to their physical presence on the deployed machine
- No frontend library is needed because `ov::Model` is created in code

[deployment_full]: ../../img/deployment_full.png

@@ -14,7 +14,7 @@ Starting from OpenVINO 2022.1, Model Optimizer, Post-Training Optimization tool

The structure of the OpenVINO 2022.1 installer package is organized as follows:

- The `runtime` folder includes headers, libraries and CMake interfaces.
- The `tools` folder contains [the compile tool](../../../tools/compile_tool/README.md), the [deployment manager](../../OV_Runtime_UG/deployment/deployment-manager-tool.md) and a set of `requirements.txt` files with links to the corresponding versions of the `openvino-dev` package.
- The `python` folder contains the Python version of OpenVINO Runtime.

## Installing OpenVINO Development Tools via PyPI

@@ -153,7 +153,7 @@ To build applications without CMake interface, you can also use MSVC IDE, UNIX m

## Clearer Library Structure for Deployment

OpenVINO 2022.1 has reorganized the libraries to make deployment easier. In previous versions, you had to use several libraries to perform deployment steps. Now you can just use `openvino` or `openvino_c`, depending on your development language, plus the necessary plugins to complete your task. For example, the `openvino_intel_cpu_plugin` and `openvino_ir_frontend` plugins enable you to load OpenVINO IRs and perform inference on the CPU device (see [Local distribution with OpenVINO](../deployment/local-distribution.md) for more details).

Here you can find some detailed comparisons of the library structure between OpenVINO 2022.1 and previous versions:

@@ -18,7 +18,7 @@

openvino_docs_OV_Runtime_User_Guide
openvino_2_0_transition_guide
openvino_deployment_guide
openvino_inference_engine_tools_compile_tool_README

||||
@@ -93,7 +93,7 @@ This section provides reference documents that guide you through developing your

With the [Model Downloader](@ref omz_tools_downloader) and [Model Optimizer](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) guides, you will learn to download pre-trained models and convert them for use with the OpenVINO™ toolkit. You can provide your own model or choose a public or Intel model from the broad selection provided in the [Open Model Zoo](model_zoo.md).

## Deploying Inference
The [OpenVINO™ Runtime User Guide](./OV_Runtime_UG/openvino_intro.md) explains the process of creating your own application that runs inference with the OpenVINO™ toolkit. The [API Reference](./api_references.html) defines the OpenVINO Runtime API for Python, C++, and C. The OpenVINO Runtime API is what you will use to create an OpenVINO™ inference application, use enhanced operation sets and other features. After writing your application, you can use the [Deployment with OpenVINO](./OV_Runtime_UG/deployment/deployment_intro.md) guide to deploy to target devices.

## Tuning for Performance
The toolkit provides a [Performance Optimization Guide](optimization_guide/dldt_optimization_guide.md) and utilities for squeezing the best performance out of your application, including [Accuracy Checker](@ref omz_tools_accuracy_checker), [Post-Training Optimization Tool](@ref pot_README), and other tools for measuring accuracy, benchmarking performance, and tuning your application.

3 docs/img/deployment_full.png Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff1c40e8c72d33a3cdeea09f771f4a799990e9e96d0d75257b21c6e9c447c7be
size 62393

3 docs/img/deployment_simplified.png Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08c2d103bec9bac58fc9ccb6801e950f233b93c6b034e92bc354b6fb2af86d5f
size 25758

@@ -32,21 +32,21 @@ The complete list of supported hardware is available in the [Release Notes](http

> **NOTE**: You might need to install GnuPG: `sudo apt-get install gnupg`

2. Add the repository via the following command:

@sphinxdirective

.. tab:: Ubuntu 18

   .. code-block:: sh

      echo "deb https://apt.repos.intel.com/openvino/2022 bionic main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2022.list

.. tab:: Ubuntu 20

   .. code-block:: sh

      echo "deb https://apt.repos.intel.com/openvino/2022 focal main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2022.list

@endsphinxdirective

3. Update the list of packages via the update command:
