-\endhtmlonly
-Results may vary. For workloads and configurations visit: [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex) and [Legal Information](../Legal_Information.md).
-\htmlonly
-
+\endhtmlonly
+Results may vary. For workloads and configurations visit: [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex) and [Legal Information](../Legal_Information.md).
+\htmlonly
+
+
+\endhtmlonly
diff --git a/docs/benchmarks/performance_benchmarks_ovms.md b/docs/benchmarks/performance_benchmarks_ovms.md
new file mode 100644
index 00000000000..604a68438ed
--- /dev/null
+++ b/docs/benchmarks/performance_benchmarks_ovms.md
@@ -0,0 +1,376 @@
+# OpenVINO™ Model Server Benchmark Results {#openvino_docs_performance_benchmarks_ovms}
+
+OpenVINO™ Model Server is an open-source, production-grade inference platform that exposes a set of models via a convenient inference API over gRPC or HTTP/REST. It employs the Inference Engine libraries from the Intel® Distribution of OpenVINO™ toolkit to extend workloads across Intel® hardware including CPU, GPU and others.
+
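Below is a minimal sketch of how a client could send a single REST inference request to a running OpenVINO™ Model Server instance. It assumes a server is already serving a model named `resnet` on port 8000; the host, port, model name, and input shape are illustrative placeholders rather than values taken from the benchmark setup.

```python
import json
import urllib.request

import numpy as np

# Dummy NCHW input batch; a real client would send preprocessed image data.
image = np.zeros((1, 3, 224, 224), dtype=np.float32)
payload = json.dumps({"instances": image.tolist()}).encode("utf-8")

# TensorFlow Serving-compatible REST endpoint exposed by OpenVINO Model Server.
req = urllib.request.Request(
    "http://localhost:8000/v1/models/resnet:predict",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    predictions = json.loads(resp.read())["predictions"]

print("Received", len(predictions), "prediction(s)")
```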
+
+
+## Measurement Methodology
+
+OpenVINO™ Model Server is measured in a multiple-client, single-server configuration using two hardware platforms connected by an Ethernet network. The network bandwidth depends on the platforms as well as the models under investigation, and it is set so that it is not a bottleneck for workload intensity. This connection is dedicated only to the performance measurements. The benchmark setup consists of four main parts:
+
+
+
+* **OpenVINO™ Model Server** is launched as a Docker container on the server platform, where it listens for (and answers) requests from clients. In the corresponding benchmarks, OpenVINO™ Model Server runs on the same machine as the OpenVINO™ toolkit benchmark application. The models served by OpenVINO™ Model Server are located in a local file system mounted into the Docker container. The OpenVINO™ Model Server instance communicates with the other components via ports over a dedicated Docker network.
+
+* **Clients** run on a separate physical machine referred to as the client platform. The clients are implemented in Python 3 on top of the TensorFlow* API and work as parallel processes. Each client waits for a response from OpenVINO™ Model Server before sending a new request. The clients are also responsible for verifying the responses (a minimal client sketch is shown after this list).
+
+* **Load balancer** works on the client platform in a Docker container. HAProxy is used for this purpose. Its main role is to count the requests forwarded from the clients to OpenVINO™ Model Server, estimate their latency, and share this information through a Prometheus service. The load balancer is placed on the client side to simulate a real-life scenario in which the physical network affects the reported metrics.
+
+* **Execution Controller** is launched on the client platform. It is responsible for synchronizing the whole measurement process, downloading metrics from the load balancer, and presenting the final report of the execution.
+
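The following sketch outlines one such client process using the gRPC API. It assumes the `tensorflow-serving-api` and `grpcio` Python packages are installed and that the server exposes a model named `resnet` with an input tensor called `0` on port 9000; these names and the input shape are assumptions for illustration, not part of the measured configuration.

```python
import grpc
import numpy as np
from tensorflow import make_tensor_proto
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:9000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Dummy input batch standing in for a preprocessed image.
image = np.zeros((1, 3, 224, 224), dtype=np.float32)

request = predict_pb2.PredictRequest()
request.model_spec.name = "resnet"
request.inputs["0"].CopyFrom(make_tensor_proto(image, shape=image.shape))

# Each client sends its next request only after the previous response arrives,
# then verifies the response (here, simply by checking that outputs are present).
response = stub.Predict(request, timeout=10.0)
assert len(response.outputs) > 0
```
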
+## 3D U-Net (FP32)
+
+## resnet-50-TF (INT8)
+
+## resnet-50-TF (FP32)
+
+## bert-large-uncased-whole-word-masking-squad-int8-0001 (INT8)
+
+
+## bert-large-uncased-whole-word-masking-squad-0001 (FP32)
+
+## Platform Configurations
+
+OpenVINO™ Model Server performance benchmark numbers are based on release 2021.3. Performance results are based on testing as of March 15, 2021 and may not reflect all publicly available updates.
+
+**Platform with Intel® Xeon® Gold 6252**
+
+| | Server Platform | Client Platform |
+| --- | --- | --- |
+| Motherboard | Intel® Server Board S2600WF H48104-872 | Inspur YZMB-00882-104 NF5280M5 |
+| Memory | Hynix 16 x 16GB @ 2666 MT/s DDR4 | Samsung 16 x 16GB @ 2666 MT/s DDR4 |
+| CPU | Intel® Xeon® Gold 6252 CPU @ 2.10GHz | Intel® Xeon® Platinum 8260M CPU @ 2.40GHz |
+| Selected CPU Flags | Hyper Threading, Turbo Boost, DL Boost | Hyper Threading, Turbo Boost, DL Boost |
+| CPU Thermal Design Power | 150 W | 162 W |
+| Operating System | Ubuntu 20.04.2 LTS | Ubuntu 20.04.2 LTS |
+| Kernel Version | 5.4.0-65-generic | 5.4.0-54-generic |
+| BIOS Vendor | Intel® Corporation | American Megatrends Inc. |
+| BIOS Version and Release Date | SE5C620.86B.02.01, date: 03/26/2020 | 4.1.16, date: 06/23/2020 |
+| Docker Version | 20.10.3 | 20.10.3 |
+| Network Speed | 40 Gb/s | |
+
+**Platform with Intel® Core™ i9-10920X**
+
+| | Server Platform | Client Platform |
+| --- | --- | --- |
+| Motherboard | ASUSTeK COMPUTER INC. PRIME X299-A II | ASUSTeK COMPUTER INC. PRIME Z370-P |
+| Memory | Corsair 4 x 16GB @ 2666 MT/s DDR4 | Corsair 4 x 16GB @ 2133 MT/s DDR4 |
+| CPU | Intel® Core™ i9-10920X CPU @ 3.50GHz | Intel® Core™ i7-8700T CPU @ 2.40GHz |
+| Selected CPU Flags | Hyper Threading, Turbo Boost, DL Boost | Hyper Threading, Turbo Boost |
+| CPU Thermal Design Power | 165 W | 35 W |
+| Operating System | Ubuntu 20.04.1 LTS | Ubuntu 20.04.1 LTS |
+| Kernel Version | 5.4.0-52-generic | 5.4.0-56-generic |
+| BIOS Vendor | American Megatrends Inc. | American Megatrends Inc. |
+| BIOS Version and Release Date | 0603, date: 03/05/2020 | 2401, date: 07/15/2019 |
+| Docker Version | 19.03.13 | 19.03.14 |
+| Network Speed | 10 Gb/s | |
+
+**Platform with Intel® Core™ i7-8700T**
+
+| | Server Platform | Client Platform |
+| --- | --- | --- |
+| Motherboard | ASUSTeK COMPUTER INC. PRIME Z370-P | ASUSTeK COMPUTER INC. PRIME X299-A II |
+| Memory | Corsair 4 x 16GB @ 2133 MT/s DDR4 | Corsair 4 x 16GB @ 2666 MT/s DDR4 |
+| CPU | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz |
+| Selected CPU Flags | Hyper Threading, Turbo Boost | Hyper Threading, Turbo Boost, DL Boost |
+| CPU Thermal Design Power | 35 W | 165 W |
+| Operating System | Ubuntu 20.04.1 LTS | Ubuntu 20.04.1 LTS |
+| Kernel Version | 5.4.0-56-generic | 5.4.0-52-generic |
+| BIOS Vendor | American Megatrends Inc. | American Megatrends Inc. |
+| BIOS Version and Release Date | 2401, date: 07/15/2019 | 0603, date: 03/05/2020 |
+| Docker Version | 19.03.14 | 19.03.13 |
+| Network Speed | 10 Gb/s | |
+
+**Platform with Intel® Core™ i5-8500**
+
+| | Server Platform | Client Platform |
+| --- | --- | --- |
+| Motherboard | ASUSTeK COMPUTER INC. PRIME Z370-A | Gigabyte Technology Co., Ltd. Z390 UD |
+| Memory | Corsair 2 x 16GB @ 2133 MT/s DDR4 | 029E 4 x 8GB @ 2400 MT/s DDR4 |
+| CPU | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i3-8100 CPU @ 3.60GHz |
+| Selected CPU Flags | Turbo Boost | - |
+| CPU Thermal Design Power | 65 W | 65 W |
+| Operating System | Ubuntu 20.04.1 LTS | Ubuntu 20.04.1 LTS |
+| Kernel Version | 5.4.0-52-generic | 5.4.0-52-generic |
+| BIOS Vendor | American Megatrends Inc. | American Megatrends Inc. |
+| BIOS Version and Release Date | 2401, date: 07/12/2019 | F10j, date: 09/16/2020 |
+| Docker Version | 19.03.13 | 20.10.0 |
+| Network Speed | 40 Gb/s | |
+
+**Platform with Intel® Core™ i3-8100**
+
+| | Server Platform | Client Platform |
+| --- | --- | --- |
+| Motherboard | Gigabyte Technology Co., Ltd. Z390 UD | ASUSTeK COMPUTER INC. PRIME Z370-A |
+| Memory | 029E 4 x 8GB @ 2400 MT/s DDR4 | Corsair 2 x 16GB @ 2133 MT/s DDR4 |
+| CPU | Intel® Core™ i3-8100 CPU @ 3.60GHz | Intel® Core™ i5-8500 CPU @ 3.00GHz |
+| Selected CPU Flags | - | Turbo Boost |
+| CPU Thermal Design Power | 65 W | 65 W |
+| Operating System | Ubuntu 20.04.1 LTS | Ubuntu 20.04.1 LTS |
+| Kernel Version | 5.4.0-52-generic | 5.4.0-52-generic |
+| BIOS Vendor | American Megatrends Inc. | American Megatrends Inc. |
+| BIOS Version and Release Date | F10j, date: 09/16/2020 | 2401, date: 07/12/2019 |
+| Docker Version | 20.10.0 | 19.03.13 |
+| Network Speed | 40 Gb/s | |
+
+
+
+\htmlonly
+
+
+
+\endhtmlonly
+Results may vary. For workloads and configurations visit: [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex) and [Legal Information](../Legal_Information.md).
+\htmlonly
+
+
+\endhtmlonly
+
diff --git a/docs/benchmarks/performance_int8_vs_fp32.md b/docs/benchmarks/performance_int8_vs_fp32.md
index 42dd26b9cce..35be3673e1a 100644
--- a/docs/benchmarks/performance_int8_vs_fp32.md
+++ b/docs/benchmarks/performance_int8_vs_fp32.md
@@ -7,9 +7,9 @@ The table below illustrates the speed-up factor for the performance gain by swit
Intel® Core™ i7-8700T
-Intel® Xeon® Gold 5218T
-Intel® Xeon® Platinum 8270
Intel® Core™ i7-1185G7
+Intel® Xeon® W-1290P
+Intel® Xeon® Platinum 8270
OpenVINO benchmark model name
@@ -20,161 +20,177 @@ The table below illustrates the speed-up factor for the performance gain by swit
@@ -187,7 +203,7 @@ The following table shows the absolute accuracy drop that is calculated as the d
Intel® Core™ i9-10920X CPU @ 3.50GHZ (VNNI)
Intel® Core™ i9-9820X CPU @ 3.30GHz (AVX512)
-Intel® Core™ i7-6700 CPU @ 4.0GHz (AVX2)
+Intel® Core™ i7-6700K CPU @ 4.0GHz (AVX2)
Intel® Core™ i7-1185G7 CPU @ 4.0GHz (TGL VNNI)
@@ -196,176 +212,203 @@ The following table shows the absolute accuracy drop that is calculated as the d
Metric Name
Absolute Accuracy Drop, %
+bert-large-uncased-whole-word-masking-squad-0001 | SQuAD | F1 | 0.62 | 0.88 | 0.52 | 0.62
brain-tumor-segmentation-0001-MXNET | BraTS | Dice-index@Mean@Overall Tumor
-0.08 | 0.08 | 0.08 | 0.08
+0.09 | 0.10 | 0.11 | 0.09
deeplabv3-TF | VOC 2012 Segmentation | mean_iou
-0.73 | 1.10 | 1.10 | 0.73
+0.09 | 0.41 | 0.41 | 0.09
densenet-121-TF | ImageNet | acc@top-1
-0.73 | 0.72 | 0.72 | 0.73
+0.54 | 0.57 | 0.57 | 0.54
facenet-20180408-102900-TF | LFW | pairwise_accuracy_subsets
-0.02 | 0.02 | 0.02 | 0.47
+0.05 | 0.12 | 0.12 | 0.05
faster_rcnn_resnet50_coco-TF | MS COCO | coco_precision
-0.21 | 0.20 | 0.20 | 0.21
+0.04 | 0.04 | 0.04 | 0.04
googlenet-v1-TF | ImageNet | acc@top-1
-0.03 | 0.01 | 0.01 | 0.03
+0.01 | 0.00 | 0.00 | 0.01
inception-v3-TF | ImageNet | acc@top-1
-0.03 | 0.01 | 0.01 | 0.03
+0.04 | 0.00 | 0.00 | 0.04
mobilenet-ssd-CF | VOC2012 | mAP
-0.35 | 0.34 | 0.34 | 0.35
+0.77 | 0.77 | 0.77 | 0.77
mobilenet-v1-1.0-224-TF | ImageNet | acc@top-1
-0.27 | 0.20 | 0.20 | 0.27
+0.26 | 0.28 | 0.28 | 0.26
mobilenet-v2-1.0-224-TF | ImageNet | acc@top-1
-0.44 | 0.92 | 0.92 | 0.44
+0.40 | 0.76 | 0.76 | 0.40
mobilenet-v2-PYTORCH | ImageNet | acc@top-1
-0.25 | 7.42 | 7.42 | 0.25
+0.36 | 0.52 | 0.52 | 0.36
resnet-18-pytorch | ImageNet | acc@top-1
-0.26 | 0.25 | 0.25 | 0.26
+0.25 | 0.25 | 0.25 | 0.25
resnet-50-PYTORCH | ImageNet | acc@top-1
-0.18 | 0.19 | 0.19 | 0.18
+0.19 | 0.21 | 0.21 | 0.19
resnet-50-TF | ImageNet | acc@top-1
-0.15 | 0.11 | 0.11 | 0.15
+0.10 | 0.08 | 0.08 | 0.10
squeezenet1.1-CF | ImageNet | acc@top-1
-0.66 | 0.64 | 0.64 | 0.66
+0.63 | 0.66 | 0.66 | 0.63
ssd_mobilenet_v1_coco-tf | VOC2012 | COCO mAp
-0.24 | 3.07 | 3.07 | 0.24
+0.18 | 3.06 | 3.06 | 0.18
ssd300-CF | MS COCO | COCO mAp
-0.06 | 0.05 | 0.05 | 0.06
+0.05 | 0.05 | 0.05 | 0.05
ssdlite_mobilenet_v2-TF | MS COCO | COCO mAp
-0.14 | 0.43 | 0.43 | 0.14
+0.11 | 0.43 | 0.43 | 0.11
yolo_v3-TF | MS COCO | COCO mAp
-0.12 | 0.35 | 0.35 | 0.12
+0.11 | 0.24 | 0.24 | 0.11
+yolo_v4-TF | MS COCO | COCO mAp | 0.01 | 0.09 | 0.09 | 0.01
+unet-camvid-onnx-0001 | MS COCO | COCO mAp | 0.31 | 0.31 | 0.31 | 0.31
diff --git a/docs/doxygen/ie_docs.xml b/docs/doxygen/ie_docs.xml
index 010d7cac724..8ca6ff2588e 100644
--- a/docs/doxygen/ie_docs.xml
+++ b/docs/doxygen/ie_docs.xml
@@ -270,7 +270,6 @@ limitations under the License.
-
diff --git a/docs/doxygen/openvino_docs.xml b/docs/doxygen/openvino_docs.xml
index 0ca1c093271..92238645a05 100644
--- a/docs/doxygen/openvino_docs.xml
+++ b/docs/doxygen/openvino_docs.xml
@@ -100,11 +100,14 @@ limitations under the License.
-
-
-
-
-
+
+
+
+
+
+
+
+
@@ -166,6 +169,7 @@ limitations under the License.
+
@@ -181,7 +185,15 @@ limitations under the License.
-
+
+
+
+
+
+
+
+
+
diff --git a/docs/gapi/face_beautification.md b/docs/gapi/face_beautification.md
index 539b0ca9b7e..25619ae8e0b 100644
--- a/docs/gapi/face_beautification.md
+++ b/docs/gapi/face_beautification.md
@@ -12,11 +12,11 @@ This sample requires:
* PC with GNU/Linux* or Microsoft Windows* (Apple macOS* is supported but was not tested)
* OpenCV 4.2 or higher built with [Intel® Distribution of OpenVINO™ Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) (building with [Intel® TBB](https://www.threadingbuildingblocks.org/intel-tbb-tutorial) is a plus)
-* The following pre-trained models from the [Open Model Zoo](@ref omz_models_intel_index)
- * [face-detection-adas-0001](@ref omz_models_intel_face_detection_adas_0001_description_face_detection_adas_0001)
- * [facial-landmarks-35-adas-0002](@ref omz_models_intel_facial_landmarks_35_adas_0002_description_facial_landmarks_35_adas_0002)
+* The following pre-trained models from the [Open Model Zoo](@ref omz_models_group_intel)
+ * [face-detection-adas-0001](@ref omz_models_model_face_detection_adas_0001)
+ * [facial-landmarks-35-adas-0002](@ref omz_models_model_facial_landmarks_35_adas_0002)
-To download the models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader_README) tool.
+To download the models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader) tool.
## Face Beautification Algorithm
We will implement a simple face beautification algorithm using a combination of modern Deep Learning techniques and traditional Computer Vision. The general idea behind the algorithm is to make face skin smoother while preserving face features like eyes or a mouth contrast. The algorithm identifies parts of the face using a DNN inference, applies different filters to the parts found, and then combines it into the final result using basic image arithmetics:
diff --git a/docs/gapi/gapi_face_analytics_pipeline.md b/docs/gapi/gapi_face_analytics_pipeline.md
index 83dcf4594ca..6b544485668 100644
--- a/docs/gapi/gapi_face_analytics_pipeline.md
+++ b/docs/gapi/gapi_face_analytics_pipeline.md
@@ -11,12 +11,12 @@ This sample requires:
* PC with GNU/Linux* or Microsoft Windows* (Apple macOS* is supported but was not tested)
* OpenCV 4.2 or higher built with [Intel® Distribution of OpenVINO™ Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) (building with [Intel® TBB](https://www.threadingbuildingblocks.org/intel-tbb-tutorial)
-* The following pre-trained models from the [Open Model Zoo](@ref omz_models_intel_index):
- * [face-detection-adas-0001](@ref omz_models_intel_face_detection_adas_0001_description_face_detection_adas_0001)
- * [age-gender-recognition-retail-0013](@ref omz_models_intel_age_gender_recognition_retail_0013_description_age_gender_recognition_retail_0013)
- * [emotions-recognition-retail-0003](@ref omz_models_intel_emotions_recognition_retail_0003_description_emotions_recognition_retail_0003)
+* The following pre-trained models from the [Open Model Zoo](@ref omz_models_group_intel):
+ * [face-detection-adas-0001](@ref omz_models_model_face_detection_adas_0001)
+ * [age-gender-recognition-retail-0013](@ref omz_models_model_age_gender_recognition_retail_0013)
+ * [emotions-recognition-retail-0003](@ref omz_models_model_emotions_recognition_retail_0003)
-To download the models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader_README) tool.
+To download the models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader) tool.
## Introduction: Why G-API
Many computer vision algorithms run on a video stream rather than on individual images. Stream processing usually consists of multiple steps – like decode, preprocessing, detection, tracking, classification (on detected objects), and visualization – forming a *video processing pipeline*. Moreover, many these steps of such pipeline can run in parallel – modern platforms have different hardware blocks on the same chip like decoders and GPUs, and extra accelerators can be plugged in as extensions, like Intel® Movidius™ Neural Compute Stick for deep learning offload.
@@ -26,7 +26,7 @@ Given all this manifold of options and a variety in video analytics algorithms,
Starting with version 4.2, OpenCV offers a solution to this problem. OpenCV G-API now can manage Deep Learning inference (a cornerstone of any modern analytics pipeline) with a traditional Computer Vision as well as video capturing/decoding, all in a single pipeline. G-API takes care of pipelining itself – so if the algorithm or platform changes, the execution model adapts to it automatically.
## Pipeline Overview
-Our sample application is based on [Interactive Face Detection](omz_demos_interactive_face_detection_demo_README) demo from Open Model Zoo. A simplified pipeline consists of the following steps:
+Our sample application is based on [Interactive Face Detection](@ref omz_demos_interactive_face_detection_demo_cpp) demo from Open Model Zoo. A simplified pipeline consists of the following steps:
1. Image acquisition and decode
2. Detection with preprocessing
diff --git a/docs/get_started/get_started_dl_workbench.md b/docs/get_started/get_started_dl_workbench.md
index 52f36c5b80a..795767f3c73 100644
--- a/docs/get_started/get_started_dl_workbench.md
+++ b/docs/get_started/get_started_dl_workbench.md
@@ -9,13 +9,13 @@ In this guide, you will:
[DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a web-based graphical environment that enables you to easily use various sophisticated
OpenVINO™ toolkit components:
-* [Model Downloader](@ref omz_tools_downloader_README) to download models from the [Intel® Open Model Zoo](@ref omz_models_intel_index)
+* [Model Downloader](@ref omz_tools_downloader) to download models from the [Intel® Open Model Zoo](@ref omz_models_group_intel)
with pretrained models for a range of different tasks
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to transform models into
the Intermediate Representation (IR) format
* [Post-Training Optimization toolkit](@ref pot_README) to calibrate a model and then execute it in the
INT8 precision
-* [Accuracy Checker](@ref omz_tools_accuracy_checker_README) to determine the accuracy of a model
+* [Accuracy Checker](@ref omz_tools_accuracy_checker) to determine the accuracy of a model
* [Benchmark Tool](@ref openvino_inference_engine_samples_benchmark_app_README) to estimate inference performance on supported devices

@@ -70,10 +70,10 @@ The simplified OpenVINO™ DL Workbench workflow is:
## Run Baseline Inference
-This section illustrates a sample use case of how to infer a pretrained model from the [Intel® Open Model Zoo](@ref omz_models_intel_index) with an autogenerated noise dataset on a CPU device.
-
+This section illustrates a sample use case of how to infer a pretrained model from the [Intel® Open Model Zoo](@ref omz_models_group_intel) with an autogenerated noise dataset on a CPU device.
+\htmlonly
-
+\endhtmlonly
Once you log in to the DL Workbench, create a project, which is a combination of a model, a dataset, and a target device. Follow the steps below:
diff --git a/docs/get_started/get_started_linux.md b/docs/get_started/get_started_linux.md
index a01d5a11c67..3aa945a05a1 100644
--- a/docs/get_started/get_started_linux.md
+++ b/docs/get_started/get_started_linux.md
@@ -18,7 +18,7 @@ In addition, demo scripts, code samples and demo applications are provided to he
* **[Code Samples](../IE_DG/Samples_Overview.md)** - Small console applications that show you how to:
* Utilize specific OpenVINO capabilities in an application
* Perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more.
-* **[Demo Applications](@ref omz_demos_README)** - Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.
+* **[Demo Applications](@ref omz_demos)** - Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.
## Intel® Distribution of OpenVINO™ toolkit Installation and Deployment Tools Directory Structure
This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see [Install Intel® Distribution of OpenVINO™ toolkit for Linux*](../install_guides/installing-openvino-linux.md).
@@ -46,9 +46,9 @@ The primary tools for deploying your models and applications are installed to th
| `samples/` | Inference Engine samples. Contains source code for C++ and Python* samples and build scripts. See the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md). |
| `src/` | Source files for CPU extensions.|
| `model_optimizer/` | Model Optimizer directory. Contains configuration scripts, scripts to run the Model Optimizer and other files. See the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
-| `open_model_zoo/` | Open Model Zoo directory. Includes the Model Downloader tool to download [pre-trained OpenVINO](@ref omz_models_intel_index) and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy.|
+| `open_model_zoo/` | Open Model Zoo directory. Includes the Model Downloader tool to download [pre-trained OpenVINO](@ref omz_models_group_intel) and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy.|
| `demos/` | Demo applications for inference scenarios. Also includes documentation and build scripts.|
-| `intel_models/` | Pre-trained OpenVINO models and associated documentation. See the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index).|
+| `intel_models/` | Pre-trained OpenVINO models and associated documentation. See the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel).|
| `tools/` | Model Downloader and Accuracy Checker tools. |
| `tools/` | Contains a symbolic link to the Model Downloader folder and auxiliary tools to work with your models: Calibration tool, Benchmark and Collect Statistics tools.|
@@ -197,7 +197,7 @@ Each demo and code sample is a separate application, but they use the same behav
* [Code Samples](../IE_DG/Samples_Overview.md) - Small console applications that show how to utilize specific OpenVINO capabilities within an application and execute specific tasks such as loading a model, running inference, querying specific device capabilities, and more.
-* [Demo Applications](@ref omz_demos_README) - Console applications that provide robust application templates to support developers in implementing specific deep learning scenarios. They may also involve more complex processing pipelines that gather analysis from several models that run inference simultaneously. For example concurrently detecting a person in a video stream and detecting attributes such as age, gender and/or emotions.
+* [Demo Applications](@ref omz_demos) - Console applications that provide robust application templates to support developers in implementing specific deep learning scenarios. They may also involve more complex processing pipelines that gather analysis from several models that run inference simultaneously. For example concurrently detecting a person in a video stream and detecting attributes such as age, gender and/or emotions.
Inputs you'll need to specify:
- **A compiled OpenVINO™ code sample or demo application** that runs inferencing against a model that has been run through the Model Optimizer, resulting in an IR, using the other inputs you provide.
@@ -209,7 +209,7 @@ Inputs you'll need to specify:
To perform sample inference, run the Image Classification code sample and Security Barrier Camera demo application that were automatically compiled when you ran the Image Classification and Inference Pipeline demo scripts. The binary files are in the `~/inference_engine_cpp_samples_build/intel64/Release` and `~/inference_engine_demos_build/intel64/Release` directories, respectively.
-To run other sample code or demo applications, build them from the source files delivered as part of the OpenVINO toolkit. To learn how to build these, see the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.md) and [Demo Applications Overview](@ref omz_demos_README) sections.
+To run other sample code or demo applications, build them from the source files delivered as part of the OpenVINO toolkit. To learn how to build these, see the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.md) and [Demo Applications Overview](@ref omz_demos) sections.
### Step 1: Download the Models
@@ -219,7 +219,7 @@ You must have a model that is specific for you inference task. Example model typ
- Custom (Often based on SSD)
Options to find a model suitable for the OpenVINO™ toolkit are:
-- Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader_README).
+- Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader).
- Download from GitHub*, Caffe* Zoo, TensorFlow* Zoo, etc.
- Train your own model.
@@ -449,7 +449,7 @@ Throughput: 375.3339402 FPS
### Step 5: Run the Security Barrier Camera Demo Application
-> **NOTE**: The Security Barrier Camera Demo Application is automatically compiled when you ran the Inference Pipeline demo scripts. If you want to build it manually, see the [Demo Applications Overview](@ref omz_demos_README) section.
+> **NOTE**: The Security Barrier Camera Demo Application is automatically compiled when you ran the Inference Pipeline demo scripts. If you want to build it manually, see the [Demo Applications Overview](@ref omz_demos) section.
To run the **Security Barrier Camera Demo Application** using an input image on the prepared IRs:
diff --git a/docs/get_started/get_started_macos.md b/docs/get_started/get_started_macos.md
index 14456171d60..980b02d0be2 100644
--- a/docs/get_started/get_started_macos.md
+++ b/docs/get_started/get_started_macos.md
@@ -18,7 +18,7 @@ In addition, demo scripts, code samples and demo applications are provided to he
* **[Code Samples](../IE_DG/Samples_Overview.md)** - Small console applications that show you how to:
* Utilize specific OpenVINO capabilities in an application.
* Perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more.
-* **[Demo Applications](@ref omz_demos_README)** - Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.
+* **[Demo Applications](@ref omz_demos)** - Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.
## Intel® Distribution of OpenVINO™ toolkit Installation and Deployment Tools Directory Structure
This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see [Install Intel® Distribution of OpenVINO™ toolkit for macOS*](../install_guides/installing-openvino-macos.md).
@@ -48,9 +48,9 @@ The primary tools for deploying your models and applications are installed to th
| `~intel_models/` | Symbolic link to the `intel_models` subfolder of the `open_model_zoo` folder.|
| `model_optimizer/` | Model Optimizer directory. Contains configuration scripts, scripts to run the Model Optimizer and other files. See the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).|
| `ngraph/` | nGraph directory. Includes the nGraph header and library files. |
-| `open_model_zoo/` | Open Model Zoo directory. Includes the Model Downloader tool to download [pre-trained OpenVINO](@ref omz_models_intel_index) and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy.|
+| `open_model_zoo/` | Open Model Zoo directory. Includes the Model Downloader tool to download [pre-trained OpenVINO](@ref omz_models_group_intel) and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy.|
| `demos/` | Demo applications for inference scenarios. Also includes documentation and build scripts.|
-| `intel_models/` | Pre-trained OpenVINO models and associated documentation. See the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index).|
+| `intel_models/` | Pre-trained OpenVINO models and associated documentation. See the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel).|
| `models` | Intel's trained and public models that can be obtained with Model Downloader.|
| `tools/` | Model Downloader and Accuracy Checker tools. |
| `tools/` | Contains a symbolic link to the Model Downloader folder and auxiliary tools to work with your models: Calibration tool, Benchmark and Collect Statistics tools.|
@@ -200,7 +200,7 @@ Inputs you need to specify when using a code sample or demo application:
To perform sample inference, run the Image Classification code sample and Security Barrier Camera demo application that are automatically compiled when you run the Image Classification and Inference Pipeline demo scripts. The binary files are in the `~/inference_engine_samples_build/intel64/Release` and `~/inference_engine_demos_build/intel64/Release` directories, respectively.
-You can also build all available sample code and demo applications from the source files delivered with the OpenVINO toolkit. To learn how to do this, see the instructions in the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.md) and [Demo Applications Overview](@ref omz_demos_README) sections.
+You can also build all available sample code and demo applications from the source files delivered with the OpenVINO toolkit. To learn how to do this, see the instructions in the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.md) and [Demo Applications Overview](@ref omz_demos) sections.
### Step 1: Download the Models
@@ -210,7 +210,7 @@ You must have a model that is specific for you inference task. Example model typ
- Custom (Often based on SSD)
Options to find a model suitable for the OpenVINO™ toolkit are:
-- Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader_README).
+- Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader).
- Download from GitHub*, Caffe* Zoo, TensorFlow* Zoo, and other resources.
- Train your own model.
@@ -422,7 +422,7 @@ classid probability label
### Step 5: Run the Security Barrier Camera Demo Application
-> **NOTE**: The Security Barrier Camera Demo Application is automatically compiled when you run the Inference Pipeline demo scripts. If you want to build it manually, see the instructions in the [Demo Applications Overview](@ref omz_demos_README) section.
+> **NOTE**: The Security Barrier Camera Demo Application is automatically compiled when you run the Inference Pipeline demo scripts. If you want to build it manually, see the instructions in the [Demo Applications Overview](@ref omz_demos) section.
To run the **Security Barrier Camera Demo Application** using an input image on the prepared IRs:
diff --git a/docs/get_started/get_started_raspbian.md b/docs/get_started/get_started_raspbian.md
index afb821debec..5f3baf87d2f 100644
--- a/docs/get_started/get_started_raspbian.md
+++ b/docs/get_started/get_started_raspbian.md
@@ -43,8 +43,8 @@ The primary tools for deploying your models and applications are installed to th
The OpenVINO™ workflow on Raspbian* OS is as follows:
1. **Get a pre-trained model** for your inference task. If you want to use your model for inference, the model must be converted to the `.bin` and `.xml` Intermediate Representation (IR) files, which are used as input by Inference Engine. On Raspberry PI, OpenVINO™ toolkit includes only the Inference Engine module. The Model Optimizer is not supported on this platform. To get the optimized models you can use one of the following options:
- * Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader_README).
- For more information on pre-trained models, see [Pre-Trained Models Documentation](@ref omz_models_intel_index)
+ * Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader).
+ For more information on pre-trained models, see [Pre-Trained Models Documentation](@ref omz_models_group_intel)
* Convert a model using the Model Optimizer from a full installation of Intel® Distribution of OpenVINO™ toolkit on one of the supported platforms. Installation instructions are available:
* [Installation Guide for macOS*](../install_guides/installing-openvino-macos.md)
@@ -62,10 +62,10 @@ Follow the steps below to run pre-trained Face Detection network using Inference
```
2. Build the Object Detection Sample with the following command:
```sh
- cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
+ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp
make -j2 object_detection_sample_ssd
```
-3. Download the pre-trained Face Detection model with the [Model Downloader tool](@ref omz_tools_downloader_README):
+3. Download the pre-trained Face Detection model with the [Model Downloader tool](@ref omz_tools_downloader):
```sh
git clone --depth 1 https://github.com/openvinotoolkit/open_model_zoo
cd open_model_zoo/tools/downloader
diff --git a/docs/get_started/get_started_windows.md b/docs/get_started/get_started_windows.md
index 0255a1bb396..c8c7ee23d1f 100644
--- a/docs/get_started/get_started_windows.md
+++ b/docs/get_started/get_started_windows.md
@@ -19,7 +19,7 @@ In addition, demo scripts, code samples and demo applications are provided to he
* **[Code Samples](../IE_DG/Samples_Overview.md)** - Small console applications that show you how to:
* Utilize specific OpenVINO capabilities in an application.
* Perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more.
-* **[Demo Applications](@ref omz_demos_README)** - Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.
+* **[Demo Applications](@ref omz_demos)** - Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from several models that run inference simultaneously, such as detecting a person in a video stream along with detecting the person's physical attributes, such as age, gender, and emotional state.
## Intel® Distribution of OpenVINO™ toolkit Installation and Deployment Tools Directory Structure
This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see [Install Intel® Distribution of OpenVINO™ toolkit for Windows*](../install_guides/installing-openvino-windows.md).
@@ -45,9 +45,9 @@ The primary tools for deploying your models and applications are installed to th
| `~intel_models\` | Symbolic link to the `intel_models` subfolder of the `open_model_zoo` folder. |
| `model_optimizer\` | Model Optimizer directory. Contains configuration scripts, scripts to run the Model Optimizer and other files. See the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). |
| `ngraph\` | nGraph directory. Includes the nGraph header and library files. |
-| `open_model_zoo\` | Open Model Zoo directory. Includes the Model Downloader tool to download [pre-trained OpenVINO](@ref omz_models_intel_index) and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy.|
+| `open_model_zoo\` | Open Model Zoo directory. Includes the Model Downloader tool to download [pre-trained OpenVINO](@ref omz_models_group_intel) and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy.|
| `demos\` | Demo applications for inference scenarios. Also includes documentation and build scripts.|
-| `intel_models\` | Pre-trained OpenVINO models and associated documentation. See the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index).|
+| `intel_models\` | Pre-trained OpenVINO models and associated documentation. See the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel).|
| `models` | Intel's trained and public models that can be obtained with Model Downloader.|
| `tools\` | Model Downloader and Accuracy Checker tools. |
| `tools\` | Contains a symbolic link to the Model Downloader folder and auxiliary tools to work with your models: Calibration tool, Benchmark and Collect Statistics tools.|
@@ -199,7 +199,7 @@ Inputs you need to specify when using a code sample or demo application:
To perform sample inference, run the Image Classification code sample and Security Barrier Camera demo application that are automatically compiled when you run the Image Classification and Inference Pipeline demo scripts. The binary files are in the `C:\Users\\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release` and `C:\Users\\Intel\OpenVINO\inference_engine_demos_build\intel64\Release` directories, respectively.
-You can also build all available sample code and demo applications from the source files delivered with the OpenVINO™ toolkit. To learn how to do this, see the instruction in the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.md) and [Demo Applications Overview](@ref omz_demos_README) sections.
+You can also build all available sample code and demo applications from the source files delivered with the OpenVINO™ toolkit. To learn how to do this, see the instruction in the [Inference Engine Code Samples Overview](../IE_DG/Samples_Overview.md) and [Demo Applications Overview](@ref omz_demos) sections.
### Step 1: Download the Models
@@ -209,7 +209,7 @@ You must have a model that is specific for you inference task. Example model typ
- Custom (Often based on SSD)
Options to find a model suitable for the OpenVINO™ toolkit are:
-- Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using the [Model Downloader tool](@ref omz_tools_downloader_README).
+- Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using the [Model Downloader tool](@ref omz_tools_downloader).
- Download from GitHub*, Caffe* Zoo, TensorFlow* Zoo, and other resources.
- Train your own model.
@@ -425,7 +425,7 @@ classid probability label
### Step 5: Run the Security Barrier Camera Demo Application
-> **NOTE**: The Security Barrier Camera Demo Application is automatically compiled when you run the Inference Pipeline demo scripts. If you want to build it manually, see the instructions in the [Demo Applications Overview](@ref omz_demos_README) section.
+> **NOTE**: The Security Barrier Camera Demo Application is automatically compiled when you run the Inference Pipeline demo scripts. If you want to build it manually, see the instructions in the [Demo Applications Overview](@ref omz_demos) section.
To run the **Security Barrier Camera Demo Application** using an input image on the prepared IRs:
diff --git a/docs/how_tos/how-to-links.md b/docs/how_tos/how-to-links.md
index 2f1840690ba..f263f22b5d2 100644
--- a/docs/how_tos/how-to-links.md
+++ b/docs/how_tos/how-to-links.md
@@ -44,7 +44,6 @@ To learn about what is *custom operation* and how to work with them in the Deep
[](https://www.youtube.com/watch?v=Kl1ptVb7aI8)
-
## Computer Vision with Intel
[](https://www.youtube.com/watch?v=FZZD4FCvO9c)
diff --git a/docs/img/int8vsfp32.png b/docs/img/int8vsfp32.png
index b4889ea2252..9ecbdc8be7b 100644
--- a/docs/img/int8vsfp32.png
+++ b/docs/img/int8vsfp32.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:0109b9cbc2908f786f6593de335c725f8ce5c800f37a7d79369408cc47eb8471
-size 25725
+oid sha256:e14f77f61f12c96ccf302667d51348a1e03579679155199910e3ebdf7d6adf06
+size 37915
diff --git a/docs/img/performance_benchmarks_ovms_01.png b/docs/img/performance_benchmarks_ovms_01.png
new file mode 100644
index 00000000000..54473efc5b1
--- /dev/null
+++ b/docs/img/performance_benchmarks_ovms_01.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d86125db1e295334c04e92d0645c773f679d21bf52e25dce7c887fdf972b7a28
+size 19154
diff --git a/docs/img/performance_benchmarks_ovms_02.png b/docs/img/performance_benchmarks_ovms_02.png
new file mode 100644
index 00000000000..1a39e7fbff6
--- /dev/null
+++ b/docs/img/performance_benchmarks_ovms_02.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf8b156026d35b023e57c5cb3ea9136c93a819c1e2aa77be57d1619db4151065
+size 373890
diff --git a/docs/img/throughput_ovms_3dunet.png b/docs/img/throughput_ovms_3dunet.png
new file mode 100644
index 00000000000..261310190a5
--- /dev/null
+++ b/docs/img/throughput_ovms_3dunet.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5a472a62de53998194bc1471539139807e00cbb75fd9edc605e7ed99b5630af
+size 18336
diff --git a/docs/img/throughput_ovms_bertlarge_fp32.png b/docs/img/throughput_ovms_bertlarge_fp32.png
new file mode 100644
index 00000000000..8fb4e484e17
--- /dev/null
+++ b/docs/img/throughput_ovms_bertlarge_fp32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f7c58da93fc7966e154bdade48d408401b097f4b0306b7c85aa4256ad72b59d
+size 18118
diff --git a/docs/img/throughput_ovms_bertlarge_int8.png b/docs/img/throughput_ovms_bertlarge_int8.png
new file mode 100644
index 00000000000..90e6e3a9426
--- /dev/null
+++ b/docs/img/throughput_ovms_bertlarge_int8.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:104d8cd5eac2d1714db85df9cba5c2cfcc113ec54d428cd6e979e75e10473be6
+size 17924
diff --git a/docs/img/throughput_ovms_resnet50_fp32.png b/docs/img/throughput_ovms_resnet50_fp32.png
new file mode 100644
index 00000000000..324acaf22ec
--- /dev/null
+++ b/docs/img/throughput_ovms_resnet50_fp32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ad19ace847da73176f20f21052f9dd23fd65779f4e1027b2debdaf8fc772c00
+size 18735
diff --git a/docs/img/throughput_ovms_resnet50_int8.png b/docs/img/throughput_ovms_resnet50_int8.png
new file mode 100644
index 00000000000..fdd92852fa9
--- /dev/null
+++ b/docs/img/throughput_ovms_resnet50_int8.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32116d6d1acc20d8cb2fa10e290e052e3146ba1290f1c5e4aaf16a85388b6ec6
+size 19387
diff --git a/docs/index.md b/docs/index.md
index 17fe2451d3f..ee0739a1e1e 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -19,7 +19,7 @@ The following diagram illustrates the typical OpenVINO™ workflow (click to see
### Model Preparation, Conversion and Optimization
You can use your framework of choice to prepare and train a Deep Learning model or just download a pretrained model from the Open Model Zoo. The Open Model Zoo includes Deep Learning solutions to a variety of vision problems, including object recognition, face recognition, pose estimation, text detection, and action recognition, at a range of measured complexities.
-Several of these pretrained models are used also in the [code samples](IE_DG/Samples_Overview.md) and [application demos](@ref omz_demos_README). To download models from the Open Model Zoo, the [Model Downloader](@ref omz_tools_downloader_README) tool is used.
+Several of these pretrained models are used also in the [code samples](IE_DG/Samples_Overview.md) and [application demos](@ref omz_demos). To download models from the Open Model Zoo, the [Model Downloader](@ref omz_tools_downloader) tool is used.
One of the core component of the OpenVINO™ toolkit is the [Model Optimizer](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) a cross-platform command-line
tool that converts a trained neural network from its source framework to an open-source, nGraph-compatible [Intermediate Representation (IR)](MO_DG/IR_and_opsets.md) for use in inference operations. The Model Optimizer imports models trained in popular frameworks such as Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX* and performs a few optimizations to remove excess layers and group operations when possible into simpler, faster graphs.
@@ -27,16 +27,17 @@ tool that converts a trained neural network from its source framework to an open
If your neural network model contains layers that are not in the list of known layers for supported frameworks, you can adjust the conversion and optimization process through use of [Custom Layers](HOWTO/Custom_Layers_Guide.md).
-Run the [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README) either against source topologies or against the output representation to evaluate the accuracy of inference. The Accuracy Checker is also part of the [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction), an integrated web-based performance analysis studio.
+Run the [Accuracy Checker utility](@ref omz_tools_accuracy_checker) either against source topologies or against the output representation to evaluate the accuracy of inference. The Accuracy Checker is also part of the [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction), an integrated web-based performance analysis studio.
Useful documents for model optimization:
* [Model Optimizer Developer Guide](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Intermediate Representation and Opsets](MO_DG/IR_and_opsets.md)
* [Custom Layers Guide](HOWTO/Custom_Layers_Guide.md)
-* [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README)
+* [Accuracy Checker utility](@ref omz_tools_accuracy_checker)
* [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction)
-* [Model Downloader](@ref omz_tools_downloader_README) utility
-* [Pretrained Models (Open Model Zoo)](@ref omz_models_public_index)
+* [Model Downloader](@ref omz_tools_downloader) utility
+* [Intel's Pretrained Models (Open Model Zoo)](@ref omz_models_group_intel)
+* [Public Pretrained Models (Open Model Zoo)](@ref omz_models_group_public)
### Running and Tuning Inference
The other core component of OpenVINO™ is the [Inference Engine](IE_DG/Deep_Learning_Inference_Engine_DevGuide.md), which manages the loading and compiling of the optimized neural network model, runs inference operations on input data, and outputs the results. Inference Engine can execute synchronously or asynchronously, and its plugin architecture manages the appropriate compilations for execution on multiple Intel® devices, including both workhorse CPUs and specialized graphics and video processing platforms (see below, Packaging and Deployment).
@@ -46,7 +47,7 @@ You can use OpenVINO™ Tuning Utilities with the Inference Engine to trial and
For a full browser-based studio integrating these other key tuning utilities, try the [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction).

-OpenVINO™ toolkit includes a set of [inference code samples](IE_DG/Samples_Overview.md) and [application demos](@ref omz_demos_README) showing how inference is run and output processed for use in retail environments, classrooms, smart camera applications, and other solutions.
+OpenVINO™ toolkit includes a set of [inference code samples](IE_DG/Samples_Overview.md) and [application demos](@ref omz_demos) showing how inference is run and output processed for use in retail environments, classrooms, smart camera applications, and other solutions.
OpenVINO also makes use of open-Source and Intel™ tools for traditional graphics processing and performance management. Intel® Media SDK supports accelerated rich-media processing, including transcoding. OpenVINO™ optimizes calls to the rich OpenCV and OpenVX libraries for processing computer vision workloads. And the new DL Streamer integration further accelerates video pipelining and performance.
@@ -54,7 +55,7 @@ Useful documents for inference tuning:
* [Inference Engine Developer Guide](IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
* [Inference Engine API References](./api_references.html)
* [Inference Code Samples](IE_DG/Samples_Overview.md)
-* [Application Demos](@ref omz_demos_README)
+* [Application Demos](@ref omz_demos)
* [Post-Training Optimization Tool Guide](@ref pot_README)
* [Deep Learning Workbench Guide](@ref workbench_docs_Workbench_DG_Introduction)
* [Intel Media SDK](https://github.com/Intel-Media-SDK/MediaSDK)
@@ -82,15 +83,15 @@ The Inference Engine's plug-in architecture can be extended to meet other specia
Intel® Distribution of OpenVINO™ toolkit includes the following components:
- [Deep Learning Model Optimizer](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) - A cross-platform command-line tool for importing models and preparing them for optimal execution with the Inference Engine. The Model Optimizer imports, converts, and optimizes models, which were trained in popular frameworks, such as Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX*.
-- [Deep Learning Inference Engine](IE_DG/inference_engine_intro.md) - A unified API to allow high performance inference on many hardware types including Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ vision processing unit (VPU).
+- [Deep Learning Inference Engine](IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) - A unified API to allow high performance inference on many hardware types including Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ vision processing unit (VPU).
- [Inference Engine Samples](IE_DG/Samples_Overview.md) - A set of simple console applications demonstrating how to use the Inference Engine in your applications.
- [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) - A web-based graphical environment that allows you to easily use various sophisticated OpenVINO™ toolkit components.
- [Post-Training Optimization tool](@ref pot_README) - A tool to calibrate a model and then execute it in the INT8 precision.
- Additional Tools - A set of tools to work with your models including [Benchmark App](../inference-engine/tools/benchmark_tool/README.md), [Cross Check Tool](../inference-engine/tools/cross_check_tool/README.md), [Compile tool](../inference-engine/tools/compile_tool/README.md).
-- [Open Model Zoo](@ref omz_models_intel_index)
- - [Demos](@ref omz_demos_README) - Console applications that provide robust application templates to help you implement specific deep learning scenarios.
- - Additional Tools - A set of tools to work with your models including [Accuracy Checker Utility](@ref omz_tools_accuracy_checker_README) and [Model Downloader](@ref omz_tools_downloader_README).
- - [Documentation for Pretrained Models](@ref omz_models_intel_index) - Documentation for pretrained models that are available in the [Open Model Zoo repository](https://github.com/opencv/open_model_zoo).
+- [Open Model Zoo](@ref omz_models_group_intel)
+ - [Demos](@ref omz_demos) - Console applications that provide robust application templates to help you implement specific deep learning scenarios.
+ - Additional Tools - A set of tools to work with your models including [Accuracy Checker Utility](@ref omz_tools_accuracy_checker) and [Model Downloader](@ref omz_tools_downloader).
+ - [Documentation for Pretrained Models](@ref omz_models_group_intel) - Documentation for pretrained models that are available in the [Open Model Zoo repository](https://github.com/opencv/open_model_zoo).
- Deep Learning Streamer (DL Streamer) – Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. DL Streamer can be installed by the Intel® Distribution of OpenVINO™ toolkit installer. Its open source version is available on [GitHub](https://github.com/opencv/gst-video-analytics). For the DL Streamer documentation, see:
- [DL Streamer Samples](@ref gst_samples_README)
- [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/)
diff --git a/docs/install_guides/installing-openvino-apt.md b/docs/install_guides/installing-openvino-apt.md
index 812c6195f2c..66518696991 100644
--- a/docs/install_guides/installing-openvino-apt.md
+++ b/docs/install_guides/installing-openvino-apt.md
@@ -6,6 +6,31 @@ This guide provides installation steps for Intel® Distribution of OpenVINO™ t
> **NOTE**: Intel® Graphics Compute Runtime for OpenCL™ is not a part of OpenVINO™ APT distribution. You can install it from the [Intel® Graphics Compute Runtime for OpenCL™ GitHub repo](https://github.com/intel/compute-runtime).
+## Included with Runtime Package
+
+The following components are installed with the OpenVINO runtime package:
+
+| Component | Description|
+|-----------|------------|
+| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+| [OpenCV*](https://docs.opencv.org/master/) | OpenCV* community version compiled for Intel® hardware. |
+| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
+
+## Included with Developer Package
+
+The following components are installed with the OpenVINO developer package:
+
+| Component | Description|
+|-----------|------------|
+| [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
+| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications.|
+| [OpenCV*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware |
+| [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use the Inference Engine in your applications. |
+| [Demo Applications](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use cases. |
+| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader), and others. |
+| [Documentation for Pre-Trained Models ](@ref omz_models_group_intel) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo). |
+| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer\*, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
+
## Set up the Repository
### Install the GPG key for the repository
@@ -76,7 +101,7 @@ apt-cache search openvino
## Install the runtime or developer packages using the APT Package Manager
Intel® OpenVINO will be installed in: `/opt/intel/openvino_..`
-A symlink will be created: `/opt/intel/openvino`
+A symlink will be created: `/opt/intel/openvino_`
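+
+As a quick illustration, you can list the available packages and install a runtime package. The package name below is an example only; the exact names depend on the release version and target OS, so substitute a name returned by the search:
+
+```sh
+# List the OpenVINO packages published in the APT repository
+apt-cache search openvino
+# Example only: install a runtime package returned by the search above
+sudo apt install intel-openvino-runtime-ubuntu18-2021.3.394
+```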
---
### To Install a specific version
diff --git a/docs/install_guides/installing-openvino-docker-linux.md b/docs/install_guides/installing-openvino-docker-linux.md
index 7f301c5f795..12eeb0c2831 100644
--- a/docs/install_guides/installing-openvino-docker-linux.md
+++ b/docs/install_guides/installing-openvino-docker-linux.md
@@ -10,8 +10,8 @@ This guide provides the steps for creating a Docker* image with Intel® Distribu
- Ubuntu\* 18.04 long-term support (LTS), 64-bit
- Ubuntu\* 20.04 long-term support (LTS), 64-bit
-- CentOS\* 7
-- RHEL\* 8
+- CentOS\* 7.6
+- Red Hat* Enterprise Linux* 8.2 (64 bit)
**Host Operating Systems**
@@ -144,7 +144,7 @@ RUN /bin/mkdir -p '/usr/local/lib' && \
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
- cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
+ cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
ldconfig
```
- **CentOS 7**:
@@ -175,11 +175,11 @@ RUN /bin/mkdir -p '/usr/local/lib' && \
/bin/mkdir -p '/usr/local/include/libusb-1.0' && \
/usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0' && \
/bin/mkdir -p '/usr/local/lib/pkgconfig' && \
- printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino/bin/setupvars.sh
+ printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino_2021/bin/setupvars.sh
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
- cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
+ cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
ldconfig
```
2. Run the Docker* image:
diff --git a/docs/install_guides/installing-openvino-linux-ivad-vpu.md b/docs/install_guides/installing-openvino-linux-ivad-vpu.md
index ab2962542d8..cd86804307c 100644
--- a/docs/install_guides/installing-openvino-linux-ivad-vpu.md
+++ b/docs/install_guides/installing-openvino-linux-ivad-vpu.md
@@ -11,9 +11,9 @@ For Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, the followi
1. Set the environment variables:
```sh
-source /opt/intel/openvino/bin/setupvars.sh
+source /opt/intel/openvino_2021/bin/setupvars.sh
```
-> **NOTE**: The `HDDL_INSTALL_DIR` variable is set to `/deployment_tools/inference_engine/external/hddl`. If you installed the Intel® Distribution of OpenVINO™ to the default install directory, the `HDDL_INSTALL_DIR` was set to `/opt/intel/openvino//deployment_tools/inference_engine/external/hddl`.
+> **NOTE**: The `HDDL_INSTALL_DIR` variable is set to `/deployment_tools/inference_engine/external/hddl`. If you installed the Intel® Distribution of OpenVINO™ to the default install directory, the `HDDL_INSTALL_DIR` was set to `/opt/intel/openvino_2021//deployment_tools/inference_engine/external/hddl`.
2. Install dependencies:
```sh
@@ -52,7 +52,7 @@ E: [ncAPI] [ 965618] [MainThread] ncDeviceOpen:677 Failed to find a device,
```sh
kill -9 $(pidof hddldaemon autoboot)
pidof hddldaemon autoboot # Make sure none of them is alive
-source /opt/intel/openvino/bin/setupvars.sh
+source /opt/intel/openvino_2021/bin/setupvars.sh
${HDDL_INSTALL_DIR}/bin/bsl_reset
```
diff --git a/docs/install_guides/installing-openvino-linux.md b/docs/install_guides/installing-openvino-linux.md
index df4c0413152..955a50a0bae 100644
--- a/docs/install_guides/installing-openvino-linux.md
+++ b/docs/install_guides/installing-openvino-linux.md
@@ -22,24 +22,24 @@ The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
| Component | Description |
|-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
-| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| Intel® Media SDK | Offers access to hardware accelerated video codecs and frame processing |
| [OpenCV](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware |
| [Inference Engine Code Samples](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to utilize specific OpenVINO capabilities in an application and how to perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more. |
-| [Demo Applications](@ref omz_demos_README) | A set of simple console applications that provide robust application templates to help you implement specific deep learning scenarios. |
-| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
-| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo). |
+| [Demo Applications](@ref omz_demos) | A set of simple console applications that provide robust application templates to help you implement specific deep learning scenarios. |
+| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader), and others. |
+| [Documentation for Pre-Trained Models ](@ref omz_models_group_intel) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo). |
| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
**Could Be Optionally Installed**
[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare performance of deep learning models on various Intel® architecture
configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components:
-* [Model Downloader](@ref omz_tools_downloader_README)
-* [Intel® Open Model Zoo](@ref omz_models_intel_index)
+* [Model Downloader](@ref omz_tools_downloader)
+* [Intel® Open Model Zoo](@ref omz_models_group_intel)
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Post-training Optimization Tool](@ref pot_README)
-* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
+* [Accuracy Checker](@ref omz_tools_accuracy_checker)
* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
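+
+As an illustration, the DL Workbench is typically started from its prebuilt Docker image. The image name and port below are assumptions based on the Docker Hub distribution, not an excerpt from this guide:
+
+```sh
+docker run -p 127.0.0.1:5665:5665 --name workbench -it openvino/workbench:latest
+```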
@@ -49,7 +49,6 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I
**Hardware**
* 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors
-* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
* 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
* Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
* Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
@@ -67,6 +66,7 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I
**Operating Systems**
- Ubuntu 18.04.x long-term support (LTS), 64-bit
+- Ubuntu 20.04.0 long-term support (LTS), 64-bit
- CentOS 7.6, 64-bit (for target only)
- Yocto Project v3.0, 64-bit (for target only and requires modifications)
@@ -415,7 +415,7 @@ trusted-host = mirrors.aliyun.com
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
- For more information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
-- For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
+- For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
To learn more about converting models, go to:
diff --git a/docs/install_guides/installing-openvino-macos.md b/docs/install_guides/installing-openvino-macos.md
index 9489d3a3732..0797d625ca8 100644
--- a/docs/install_guides/installing-openvino-macos.md
+++ b/docs/install_guides/installing-openvino-macos.md
@@ -24,22 +24,22 @@ The following components are installed by default:
| Component | Description |
| :-------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models, which were trained in popular frameworks, to a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe*, TensorFlow*, MXNet\*, and ONNX\*. |
-| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [OpenCV\*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware |
| [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use the Inference Engine in your applications. |
-| [Demos](@ref omz_demos_README) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use-cases |
-| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
-| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
+| [Demos](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use-cases |
+| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader), and others. |
+| [Documentation for Pre-Trained Models ](@ref omz_models_group_intel) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
**Could Be Optionally Installed**
[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare performance of deep learning models on various Intel® architecture
configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components:
-* [Model Downloader](@ref omz_tools_downloader_README)
-* [Intel® Open Model Zoo](@ref omz_models_intel_index)
+* [Model Downloader](@ref omz_tools_downloader)
+* [Intel® Open Model Zoo](@ref omz_models_group_intel)
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Post-training Optimization Tool](@ref pot_README)
-* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
+* [Accuracy Checker](@ref omz_tools_accuracy_checker)
* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
@@ -53,7 +53,6 @@ The development and target platforms have the same requirements, but you can sel
> **NOTE**: The current version of the Intel® Distribution of OpenVINO™ toolkit for macOS* supports inference on Intel CPUs and Intel® Neural Compute Sticks 2 only.
* 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors
-* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
* 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
* Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
* Intel® Neural Compute Stick 2
@@ -280,7 +279,7 @@ Follow the steps below to uninstall the Intel® Distribution of OpenVINO™ Tool
- To learn more about the verification applications, see `README.txt` in `/opt/intel/openvino_2021/deployment_tools/demo/`.
-- For detailed description of the pre-trained models, go to the [Overview of OpenVINO toolkit Pre-Trained Models](@ref omz_models_intel_index) page.
+- For detailed description of the pre-trained models, go to the [Overview of OpenVINO toolkit Pre-Trained Models](@ref omz_models_group_intel) page.
- More information on [sample applications](../IE_DG/Samples_Overview.md).
diff --git a/docs/install_guides/installing-openvino-raspbian.md b/docs/install_guides/installing-openvino-raspbian.md
index eade02a472d..0695ef9e772 100644
--- a/docs/install_guides/installing-openvino-raspbian.md
+++ b/docs/install_guides/installing-openvino-raspbian.md
@@ -18,7 +18,7 @@ The OpenVINO toolkit for Raspbian OS is an archive with pre-installed header fil
| Component | Description |
| :-------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| [OpenCV\*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware. |
| [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use Intel's Deep Learning Inference Engine in your applications. |
@@ -94,12 +94,12 @@ CMake is installed. Continue to the next section to set the environment variable
You must update several environment variables before you can compile and run OpenVINO toolkit applications. Run the following script to temporarily set the environment variables:
```sh
-source /opt/intel/openvino/bin/setupvars.sh
+source /opt/intel/openvino_2021/bin/setupvars.sh
```
**(Optional)** The OpenVINO environment variables are removed when you close the shell. As an option, you can permanently set the environment variables as follows:
```sh
-echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
+echo "source /opt/intel/openvino_2021/bin/setupvars.sh" >> ~/.bashrc
```
To test your change, open a new terminal. You will see the following:
@@ -118,11 +118,11 @@ Continue to the next section to add USB rules for Intel® Neural Compute Stick 2
Log out and log in for it to take effect.
2. If you didn't modify `.bashrc` to permanently set the environment variables, run `setupvars.sh` again after logging in:
```sh
- source /opt/intel/openvino/bin/setupvars.sh
+ source /opt/intel/openvino_2021/bin/setupvars.sh
```
3. To perform inference on the Intel® Neural Compute Stick 2, install the USB rules running the `install_NCS_udev_rules.sh` script:
```sh
- sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
+ sh /opt/intel/openvino_2021/install_dependencies/install_NCS_udev_rules.sh
```
4. Plug in your Intel® Neural Compute Stick 2.
@@ -138,14 +138,13 @@ Follow the next steps to run pre-trained Face Detection network using Inference
```
2. Build the Object Detection Sample:
```sh
- cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
+ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp
```
-
```sh
make -j2 object_detection_sample_ssd
```
3. Download the pre-trained Face Detection model with the Model Downloader or copy it from the host machine:
- ```sh
+ ```sh
git clone --depth 1 https://github.com/openvinotoolkit/open_model_zoo
cd open_model_zoo/tools/downloader
python3 -m pip install -r requirements.in
@@ -165,9 +164,9 @@ Read the next topic if you want to learn more about OpenVINO workflow for Raspbe
If you want to use your model for inference, the model must be converted to the .bin and .xml Intermediate Representation (IR) files that are used as input by Inference Engine. OpenVINO™ toolkit support on Raspberry Pi only includes the Inference Engine module of the Intel® Distribution of OpenVINO™ toolkit. The Model Optimizer is not supported on this platform. To get the optimized models you can use one of the following options:
-* Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader_README).
+* Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader).
- For more information on pre-trained models, see [Pre-Trained Models Documentation](@ref omz_models_intel_index)
+ For more information on pre-trained models, see [Pre-Trained Models Documentation](@ref omz_models_group_intel)
* Convert the model using the Model Optimizer from a full installation of Intel® Distribution of OpenVINO™ toolkit on one of the supported platforms. Installation instructions are available:
diff --git a/docs/install_guides/installing-openvino-windows.md b/docs/install_guides/installing-openvino-windows.md
index 8de98761d15..56e963d1ea4 100644
--- a/docs/install_guides/installing-openvino-windows.md
+++ b/docs/install_guides/installing-openvino-windows.md
@@ -16,11 +16,10 @@ Your installation is complete when these are all completed:
2. Install the dependencies:
- - [Microsoft Visual Studio* with C++ **2019 or 2017** with MSBuild](http://visualstudio.microsoft.com/downloads/)
- - [CMake **3.10 or higher** 64-bit](https://cmake.org/download/)
- > **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
+ - [Microsoft Visual Studio* 2019 with MSBuild](http://visualstudio.microsoft.com/downloads/)
+ - [CMake 3.14 or higher 64-bit](https://cmake.org/download/)
- [Python **3.6** - **3.8** 64-bit](https://www.python.org/downloads/windows/)
- > **IMPORTANT**: As part of this installation, make sure you click the option to add the application to your `PATH` environment variable.
+ > **IMPORTANT**: As part of this installation, make sure you click the option **[Add Python 3.x to PATH](https://docs.python.org/3/using/windows.html#installation-steps)** to add Python to your `PATH` environment variable.
3. Set Environment Variables
@@ -58,22 +57,22 @@ The following components are installed by default:
| Component | Description |
|:---------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) |This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. NOTE: Popular frameworks include such frameworks as Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
-|[Inference Engine](../IE_DG/inference_engine_intro.md) |This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+|[Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) |This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
|[OpenCV\*](https://docs.opencv.org/master/) |OpenCV* community version compiled for Intel® hardware |
|[Inference Engine Samples](../IE_DG/Samples_Overview.md) |A set of simple console applications demonstrating how to use Intel's Deep Learning Inference Engine in your applications. |
-| [Demos](@ref omz_demos_README) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use-cases |
-| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
-| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
+| [Demos](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use-cases |
+| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader), and others. |
+| [Documentation for Pre-Trained Models ](@ref omz_models_group_intel) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
**Could Be Optionally Installed**
[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare performance of deep learning models on various Intel® architecture
configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components:
-* [Model Downloader](@ref omz_tools_downloader_README)
-* [Intel® Open Model Zoo](@ref omz_models_intel_index)
+* [Model Downloader](@ref omz_tools_downloader)
+* [Intel® Open Model Zoo](@ref omz_models_group_intel)
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Post-training Optimization Tool](@ref pot_README)
-* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
+* [Accuracy Checker](@ref omz_tools_accuracy_checker)
* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
@@ -83,7 +82,6 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I
**Hardware**
* 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors
-* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
* 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
* Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
* Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
@@ -134,12 +132,9 @@ The screen example below indicates you are missing two dependencies:
You must update several environment variables before you can compile and run OpenVINO™ applications. Open the Command Prompt, and run the `setupvars.bat` batch file to temporarily set your environment variables:
```sh
-cd C:\Program Files (x86)\Intel\openvino_2021\bin\
-```
-
-```sh
-setupvars.bat
+"C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"
```
+> **IMPORTANT**: Windows PowerShell* is not recommended for running the configuration commands; use the Command Prompt instead.
(Optional): OpenVINO toolkit environment variables are removed when you close the Command Prompt window. As an option, you can permanently set the environment variables manually.
@@ -314,7 +309,7 @@ Use these steps to update your Windows `PATH` if a command you execute returns a
5. If you need to add CMake to the `PATH`, browse to the directory in which you installed CMake. The default directory is `C:\Program Files\CMake`.
-6. If you need to add Python to the `PATH`, browse to the directory in which you installed Python. The default directory is `C:\Users\\AppData\Local\Programs\Python\Python36\Python`.
+6. If you need to add Python to the `PATH`, browse to the directory in which you installed Python. The default directory is `C:\Users\\AppData\Local\Programs\Python\Python36\Python`. Note that the `AppData` folder is hidden by default. To view hidden files and folders, see the [Windows 10 instructions](https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-10-97fbc472-c603-9d90-91d0-1166d1d9f4b5).
7. Click **OK** repeatedly to close each screen.
@@ -350,11 +345,11 @@ To learn more about converting deep learning models, go to:
- [Intel Distribution of OpenVINO Toolkit home page](https://software.intel.com/en-us/openvino-toolkit)
- [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
-- [Introduction to Inference Engine](../IE_DG/inference_engine_intro.md)
+- [Introduction to Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
-- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
+- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel)
- [Intel® Neural Compute Stick 2 Get Started](https://software.intel.com/en-us/neural-compute-stick/get-started)
diff --git a/docs/install_guides/installing-openvino-yum.md b/docs/install_guides/installing-openvino-yum.md
index 5fc6143ae51..27e464d1b84 100644
--- a/docs/install_guides/installing-openvino-yum.md
+++ b/docs/install_guides/installing-openvino-yum.md
@@ -6,6 +6,18 @@ This guide provides installation steps for the Intel® Distribution of OpenVINO
> **NOTE**: Intel® Graphics Compute Runtime for OpenCL™ is not a part of OpenVINO™ YUM distribution. You can install it from the [Intel® Graphics Compute Runtime for OpenCL™ GitHub repo](https://github.com/intel/compute-runtime).
+> **NOTE**: Only runtime packages are available via the YUM repository.
+
+## Included with Runtime Package
+
+The following components are installed with the OpenVINO runtime package:
+
+| Component | Description|
+|-----------|------------|
+| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+| [OpenCV*](https://docs.opencv.org/master/) | OpenCV* community version compiled for Intel® hardware. |
+| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
+
## Set up the Repository
> **NOTE:** You must be logged in as root to set up and install the repository.
@@ -61,7 +73,7 @@ Results:
intel-openvino-2021 Intel(R) Distribution of OpenVINO 2021
```
-### To list the available OpenVINO packages
+### To list available OpenVINO packages
Use the following command:
```sh
yum list intel-openvino*
@@ -69,11 +81,11 @@ yum list intel-openvino*
---
-## Install the runtime packages Using the YUM Package Manager
+## Install Runtime Packages Using the YUM Package Manager
Intel® OpenVINO will be installed in: `/opt/intel/openvino_..`
-A symlink will be created: `/opt/intel/openvino`
+A symlink will be created: `/opt/intel/openvino_`
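+
+As a quick illustration, you can list the available runtime packages and install one. The package name below is an example only; substitute a name returned by the list command:
+
+```sh
+# List the OpenVINO runtime packages published in the YUM repository
+yum list intel-openvino*
+# Example only: install a runtime package returned by the list above
+sudo yum install intel-openvino-runtime-centos7-2021.3.394
+```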
---
diff --git a/docs/install_guides/movidius-setup-guide.md b/docs/install_guides/movidius-setup-guide.md
index 421dfbab402..c26ebbda38d 100644
--- a/docs/install_guides/movidius-setup-guide.md
+++ b/docs/install_guides/movidius-setup-guide.md
@@ -46,7 +46,7 @@ The `hddldaemon` is a system service, a binary executable that is run to manage
`` refers to the following default OpenVINO™ Inference Engine directories:
- **Linux:**
```
- /opt/intel/openvino/inference_engine
+ /opt/intel/openvino_2021/inference_engine
```
- **Windows:**
```
diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md
index 3da7e3c1088..b8c3dcc3e52 100644
--- a/docs/install_guides/pypi-openvino-dev.md
+++ b/docs/install_guides/pypi-openvino-dev.md
@@ -13,7 +13,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio
| Component | Description |
|-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
-| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_README.html), [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html) |
+| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html), [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html) |
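+
+As a minimal sketch, the developer package is installed from PyPI with `pip`; a dedicated virtual environment is assumed here:
+
+```sh
+python3 -m venv openvino_env && source openvino_env/bin/activate
+python3 -m pip install --upgrade pip
+pip install openvino-dev
+```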
**The Runtime Package Includes the Following Components Installed by Dependency:**
diff --git a/docs/ops/detection/ExperimentalDetectronDetectionOutput_6.md b/docs/ops/detection/ExperimentalDetectronDetectionOutput_6.md
index 69411e3f31f..48450817c5b 100644
--- a/docs/ops/detection/ExperimentalDetectronDetectionOutput_6.md
+++ b/docs/ops/detection/ExperimentalDetectronDetectionOutput_6.md
@@ -97,7 +97,7 @@ tensor elements.
* *class_agnostic_box_regression*
- * **Description**: *class_agnostic_box_regression* attribute ia a flag specifies whether to delete background
+ * **Description**: *class_agnostic_box_regression* attribute is a flag that specifies whether to delete background
classes or not.
* **Range of values**:
* `true` means background classes should be deleted
diff --git a/docs/ops/detection/ExperimentalDetectronROIFeatureExtractor_6.md b/docs/ops/detection/ExperimentalDetectronROIFeatureExtractor_6.md
index 407c4301dc4..2eb40fd6978 100644
--- a/docs/ops/detection/ExperimentalDetectronROIFeatureExtractor_6.md
+++ b/docs/ops/detection/ExperimentalDetectronROIFeatureExtractor_6.md
@@ -136,4 +136,4 @@ must be the same as for 1 input: `[number_of_ROIs, 4]`.
-```
\ No newline at end of file
+```
diff --git a/docs/optimization_guide/dldt_optimization_guide.md b/docs/optimization_guide/dldt_optimization_guide.md
index 2c13d91d206..87fb3d26b4d 100644
--- a/docs/optimization_guide/dldt_optimization_guide.md
+++ b/docs/optimization_guide/dldt_optimization_guide.md
@@ -445,7 +445,7 @@ There are important performance caveats though: for example, the tasks that run
Also, if the inference is performed on the graphics processing unit (GPU), there may be little gain in doing the encoding of the resulting video, for instance, on the same GPU in parallel, because the device is already busy.
-Refer to the [Object Detection SSD Demo](@ref omz_demos_object_detection_demo_ssd_async_README) (latency-oriented Async API showcase) and [Benchmark App Sample](../../inference-engine/samples/benchmark_app/README.md) (which has both latency and throughput-oriented modes) for complete examples of the Async API in action.
+Refer to the [Object Detection SSD Demo](@ref omz_demos_object_detection_demo_cpp) (latency-oriented Async API showcase) and [Benchmark App Sample](../../inference-engine/samples/benchmark_app/README.md) (which has both latency and throughput-oriented modes) for complete examples of the Async API in action.
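+
+For a quick way to contrast the two modes on your own model, the Benchmark App can be run with either API; a minimal sketch (the model path is a placeholder):
+
+```sh
+# Latency-oriented run using the synchronous API
+./benchmark_app -m <path_to_model>/model.xml -d CPU -api sync
+# Throughput-oriented run using the asynchronous API
+./benchmark_app -m <path_to_model>/model.xml -d CPU -api async
+```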
## Using Tools
diff --git a/docs/ovsa/ovsa_get_started.md b/docs/ovsa/ovsa_get_started.md
index e9062dc7670..19678297eb7 100644
--- a/docs/ovsa/ovsa_get_started.md
+++ b/docs/ovsa/ovsa_get_started.md
@@ -20,7 +20,7 @@ The OpenVINO™ Security Add-on consists of three components that run in Kernel-
- The Model Developer generates an access controlled model from the OpenVINO™ toolkit output. The access controlled model uses the model's Intermediate Representation (IR) files to create an access controlled output file archive that is distributed to Model Users. The Developer can also put the archive file in long-term storage or back it up without additional security.
-- The Model Developer uses the OpenVINO™ Security Add-on Tool(`ovsatool`) to generate and manage cryptographic keys and related collateral for the access controlled models. Cryptographic material is only available in a virtual machine (VM) environment. The OpenVINO™ Security Add-on key management system lets the Model Developer to get external Certificate Authorities to generate certificates to add to a key-store.
+- The Model Developer uses the OpenVINO™ Security Add-on Tool (ovsatool) to generate and manage cryptographic keys and related collateral for the access controlled models. Cryptographic material is only available in a virtual machine (VM) environment. The OpenVINO™ Security Add-on key management system lets the Model Developer have external Certificate Authorities generate certificates to add to a key-store.
- The Model Developer generates user-specific licenses in a JSON format file for the access controlled model. The Model Developer can define global or user-specific licenses and attach licensing policies to the licenses. For example, the Model Developer can add a time limit for a model or limit the number of times a user can run a model.
@@ -31,7 +31,7 @@ The OpenVINO™ Security Add-on consists of three components that run in Kernel-
- The Independent Software Vendor hosts the OpenVINO™ Security Add-on License Service, which responds to license validation requests when a user attempts to load an access controlled model in a model server. The licenses are registered with the OpenVINO™ Security Add-on License Service.
-- When a user loads the model, the OpenVINO™ Security Add-on Runtime contacts the License Service to make sure the license is valid and within the parameters that the Model Developer defined with the OpenVINO™ Security Add-on Tool(`ovsatool`). The user must be able to reach the Independent Software Vendor's License Service over the Internet.
+- When a user loads the model, the OpenVINO™ Security Add-on Runtime contacts the License Service to make sure the license is valid and within the parameters that the Model Developer defined with the OpenVINO™ Security Add-on Tool (ovsatool). The user must be able to reach the Independent Software Vendor's License Service over the Internet.
@@ -51,6 +51,8 @@ After the license is successfully validated, the OpenVINO™ Model Server loads

+The binding between the SWTPM (the vTPM used in the guest VM) and the HW TPM (the TPM on the host) is explained in [this document](https://github.com/openvinotoolkit/security_addon/blob/release_2021_3/docs/fingerprint-changes.md).
+
## About the Installation
The Model Developer, Independent Software Vendor, and User each must prepare one physical hardware machine and one Kernel-based Virtual Machine (KVM). In addition, each person must prepare a Guest Virtual Machine (Guest VM) for each role that person plays.
@@ -248,8 +250,12 @@ See the QEMU documentation for more information about the QEMU network configura
Networking is set up on the Host Machine. Continue to the Step 3 to prepare a Guest VM for the combined role of Model Developer and Independent Software Vendor.
-
-### Step 3: Set Up one Guest VM for the combined roles of Model Developer and Independent Software Vendor
+### Step 3: Clone the OpenVINO™ Security Add-on
+
+Download the [OpenVINO™ Security Add-on](https://github.com/openvinotoolkit/security_addon).
+
+
+### Step 4: Set Up one Guest VM for the combined roles of Model Developer and Independent Software Vendor
For each separate role you play, you must prepare a virtual machine, called a Guest VM. Because in this release, the Model Developer and Independent Software Vendor roles are combined, these instructions guide you to set up one Guest VM, named `ovsa_isv`.
@@ -299,15 +305,28 @@ As an option, you can use `virsh` and the virtual machine manager to create and
Installation information is at https://github.com/tpm2-software/tpm2-tools/blob/master/INSTALL.md
4. Install the [Docker packages](https://docs.docker.com/engine/install/ubuntu/)
5. Shut down the Guest VM.
-9. On the host, create a directory to support the virtual TPM device. Only `root` should have read/write permission to this directory:
+9. On the host, create a directory to support the virtual TPM device and provision its certificates. Only `root` should have read/write permission to this directory:
```sh
sudo mkdir -p /var/OVSA/
sudo mkdir /var/OVSA/vtpm
sudo mkdir /var/OVSA/vtpm/vtpm_isv_dev
+
+ export XDG_CONFIG_HOME=~/.config
+ /usr/share/swtpm/swtpm-create-user-config-files
+ swtpm_setup --tpmstate /var/OVSA/vtpm/vtpm_isv_dev --create-ek-cert --create-platform-cert --overwrite --tpm2 --pcr-banks -
```
**NOTE**: For steps 10 and 11, you can copy and edit the script named `start_ovsa_isv_dev_vm.sh` in the `Scripts/reference` directory in the OpenVINO™ Security Add-on repository instead of manually running the commands. If using the script, select the script with `isv` in the file name regardless of whether you are playing the role of the Model Developer or the role of the Independent Software Vendor. Edit the script to point to the correct directory locations and increment `vnc` for each Guest VM.
-10. Start the vTPM on Host:
+10. Start the vTPM on the Host, write the HW TPM data into its NVRAM, and restart the vTPM for QEMU:
```sh
+ sudo swtpm socket --tpm2 --server port=8280 \
+ --ctrl type=tcp,port=8281 \
+ --flags not-need-init --tpmstate dir=/var/OVSA/vtpm/vtpm_isv_dev &
+
+ sudo tpm2_startup --clear -T swtpm:port=8280
+ sudo tpm2_startup -T swtpm:port=8280
+ python3 /Scripts/host/OVSA_write_hwquote_swtpm_nvram.py 8280
+ sudo pkill -f vtpm_isv_dev
+
swtpm socket --tpmstate dir=/var/OVSA/vtpm/vtpm_isv_dev \
--tpm2 \
--ctrl type=unixio,path=/var/OVSA/vtpm/vtpm_isv_dev/swtpm-sock \
@@ -335,9 +354,9 @@ As an option, you can use `virsh` and the virtual machine manager to create and
12. Use a VNC client to log on to the Guest VM at `:1`
-### Step 4: Set Up one Guest VM for the User role
+### Step 5: Set Up one Guest VM for the User role
-1. Choose ONE of these options to create a Guest VM for the User role:
+1. Choose **ONE** of these options to create a Guest VM for the User role:
**Option 1: Copy and Rename the `ovsa_isv_dev_vm_disk.qcow2` disk image**
1. Copy the `ovsa_isv_dev_vm_disk.qcow2` disk image to a new image named `ovsa_runtime_vm_disk.qcow2`. You created the `ovsa_isv_dev_vm_disk.qcow2` disk image in Step 3.
2. Boot the new image.
@@ -383,7 +402,7 @@ As an option, you can use `virsh` and the virtual machine manager to create and
-netdev tap,id=hostnet1,script=/virbr0-qemu-ifup, downscript=/virbr0-qemu-ifdown \
-vnc :2
```
- 7. Choose ONE of these options to install additional required software:
+ 7. Choose **ONE** of these options to install additional required software:
**Option 1: Use a script to install additional software**
1. Copy the script `install_guest_deps.sh` from the `Scripts/reference` directory of the OVSA repository to the Guest VM
@@ -400,19 +419,32 @@ As an option, you can use `virsh` and the virtual machine manager to create and
4. Install the [Docker packages](https://docs.docker.com/engine/install/ubuntu/)
5. Shut down the Guest VM.
-2. Create a directory to support the virtual TPM device. Only `root` should have read/write permission to this directory:
+2. Create a directory to support the virtual TPM device and provision its certificates. Only `root` should have read/write permission to this directory:
```sh
sudo mkdir /var/OVSA/vtpm/vtpm_runtime
+
+ export XDG_CONFIG_HOME=~/.config
+ /usr/share/swtpm/swtpm-create-user-config-files
+ swtpm_setup --tpmstate /var/OVSA/vtpm/vtpm_runtime --create-ek-cert --create-platform-cert --overwrite --tpm2 --pcr-banks -
```
- **NOTE**: For steps 3 and 4, you can copy and edit the script named `start_ovsa_runtime_vm.sh` in the scripts directory in the OpenVINO™ Security Add-on repository instead of manually running the commands. Edit the script to point to the correct directory locations and increment `vnc` for each Guest VM. This means that if you are creating a third Guest VM on the same Host Machine, change `-vnc :2` to `-vnc :3`
-3. Start the vTPM:
+ **NOTE**: For steps 3 and 4, you can copy and edit the script named `start_ovsa_runtime_vm.sh` in the `Scripts/reference` directory in the OpenVINO™ Security Add-on repository instead of manually running the commands. Edit the script to point to the correct directory locations and increment `vnc` for each Guest VM. This means that if you are creating a third Guest VM on the same Host Machine, change `-vnc :2` to `-vnc :3`
+3. Start the vTPM, write the HW TPM data into its NVRAM and restart the vTPM for QEMU:
```sh
+ sudo swtpm socket --tpm2 --server port=8380 \
+ --ctrl type=tcp,port=8381 \
+ --flags not-need-init --tpmstate dir=/var/OVSA/vtpm/vtpm_runtime &
+
+ sudo tpm2_startup --clear -T swtpm:port=8380
+ sudo tpm2_startup -T swtpm:port=8380
+ python3 /Scripts/host/OVSA_write_hwquote_swtpm_nvram.py 8380
+ sudo pkill -f vtpm_runtime
+
swtpm socket --tpmstate dir=/var/OVSA/vtpm/vtpm_runtime \
--tpm2 \
--ctrl type=unixio,path=/var/OVSA/vtpm/vtpm_runtime/swtpm-sock \
--log level=20
```
-4. Start the Guest VM in a new terminal. To do so, either copy and edit the script named `start_ovsa_runtime_vm.sh` in the scripts directory in the OpenVINO™ Security Add-on repository or manually run the command:
+4. Start the Guest VM in a new terminal:
```sh
sudo qemu-system-x86_64 \
-cpu host \
@@ -450,13 +482,11 @@ Building OpenVINO™ Security Add-on depends on OpenVINO™ Model Server docker
This step is for the combined role of Model Developer and Independent Software Vendor, and the User
-1. Download the [OpenVINO™ Security Add-on](https://github.com/openvinotoolkit/security_addon)
-
-2. Go to the top-level OpenVINO™ Security Add-on source directory.
+1. Go to the top-level OpenVINO™ Security Add-on source directory cloned earlier.
```sh
cd security_addon
```
-3. Build the OpenVINO™ Security Add-on:
+2. Build the OpenVINO™ Security Add-on:
```sh
make clean all
sudo make package
@@ -559,7 +589,7 @@ The Model Hosting components install the OpenVINO™ Security Add-on Runtime Doc
This section requires interactions between the Model Developer/Independent Software vendor and the User. All roles must complete all applicable set up steps and installation steps before beginning this section.
-This document uses the [face-detection-retail-0004](@ref omz_models_intel_face_detection_retail_0004_description_face_detection_retail_0004) model as an example.
+This document uses the [face-detection-retail-0004](@ref omz_models_model_face_detection_retail_0004) model as an example.
The following figure describes the interactions between the Model Developer, Independent Software Vendor, and User.
@@ -577,7 +607,7 @@ The Model Developer creates model, defines access control and creates the user l
```sh
sudo -s
cd //OVSA/artefacts
- export OVSA_RUNTIME_ARTEFACTS=$PWD
+ export OVSA_DEV_ARTEFACTS=$PWD
source /opt/ovsa/scripts/setupvars.sh
```
2. Create files to request a certificate:
@@ -622,7 +652,7 @@ This example uses `curl` to download the `face-detection-retail-004` model from
```
3. Define and enable the model access control and master license:
```sh
- /opt/ovsa/bin/ovsatool protect -i model/face-detection-retail-0004.xml model/face-detection-retail-0004.bin -n "face detection" -d "face detection retail" -v 0004 -p face_detection_model.dat -m face_detection_model.masterlic -k isv_keystore -g
+ /opt/ovsa/bin/ovsatool controlAccess -i model/face-detection-retail-0004.xml model/face-detection-retail-0004.bin -n "face detection" -d "face detection retail" -v 0004 -p face_detection_model.dat -m face_detection_model.masterlic -k isv_keystore -g
```
The Intermediate Representation files for the `face-detection-retail-0004` model are encrypted as `face_detection_model.dat` and a master license is generated as `face_detection_model.masterlic`.
@@ -703,6 +733,7 @@ This example uses scp to share data between the ovsa_runtime and ovsa_dev Guest
cd $OVSA_RUNTIME_ARTEFACTS
scp custkeystore.csr.crt username@://OVSA/artefacts
```
+
#### Step 3: Receive and load the access controlled model into the OpenVINO™ Model Server
1. Receive the model as files named
* `face_detection_model.dat`
@@ -736,14 +767,15 @@ This example uses scp to share data between the ovsa_runtime and ovsa_dev Guest
"model_config_list":[
{
"config":{
- "name":"protected-model",
+ "name":"controlled-access-model",
"base_path":"/sampleloader/model/fd",
- "custom_loader_options": {"loader_name": "ovsa", "keystore": "custkeystore", "protected_file": "face_detection_model"}
+ "custom_loader_options": {"loader_name": "ovsa", "keystore": "custkeystore", "controlled_access_file": "face_detection_model"}
}
}
]
}
```
+
#### Step 4: Start the NGINX Model Server
The NGINX Model Server publishes the access controlled model.
```sh
@@ -773,11 +805,12 @@ For information about the NGINX interface, see https://github.com/openvinotoolki
```sh
curl --create-dirs https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/images/people/people1.jpeg -o images/people1.jpeg
```
+
#### Step 6: Run Inference
Run the `face_detection.py` script:
```sh
-python3 face_detection.py --grpc_port 3335 --batch_size 1 --width 300 --height 300 --input_images_dir images --output_dir results --tls --server_cert server.pem --client_cert client.pem --client_key client.key --model_name protected-model
+python3 face_detection.py --grpc_port 3335 --batch_size 1 --width 300 --height 300 --input_images_dir images --output_dir results --tls --server_cert server.pem --client_cert client.pem --client_key client.key --model_name controlled-access-model
```
## Summary
diff --git a/docs/resources/introduction.md b/docs/resources/introduction.md
index 6a3c4ccfaa4..4a62ebef562 100644
--- a/docs/resources/introduction.md
+++ b/docs/resources/introduction.md
@@ -8,14 +8,14 @@
## Demos
-- [Demos](@ref omz_demos_README)
+- [Demos](@ref omz_demos)
## Additional Tools
-- A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other
+- A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader), and others
## Pre-Trained Models
-- [Intel's Pre-trained Models from Open Model Zoo](@ref omz_models_intel_index)
-- [Public Pre-trained Models Available with OpenVINO™ from Open Model Zoo](@ref omz_models_public_index)
\ No newline at end of file
+- [Intel's Pre-trained Models from Open Model Zoo](@ref omz_models_group_intel)
+- [Public Pre-trained Models Available with OpenVINO™ from Open Model Zoo](@ref omz_models_group_public)
\ No newline at end of file
diff --git a/inference-engine/ie_bridges/python/sample/hello_classification/README.md b/inference-engine/ie_bridges/python/sample/hello_classification/README.md
index 730d6a2f0e7..c02725ebd7d 100644
--- a/inference-engine/ie_bridges/python/sample/hello_classification/README.md
+++ b/inference-engine/ie_bridges/python/sample/hello_classification/README.md
@@ -118,4 +118,4 @@ The sample application logs each step in a standard output stream and outputs to
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
-[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
\ No newline at end of file
+[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
diff --git a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
index 4845f031079..2c5dac57b23 100644
--- a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
+++ b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
@@ -117,4 +117,4 @@ The sample application logs each step in a standard output stream and creates an
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[IENetwork.reshape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a6683f0291db25f908f8d6720ab2f221a
-[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
\ No newline at end of file
+[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
diff --git a/inference-engine/samples/benchmark_app/README.md b/inference-engine/samples/benchmark_app/README.md
index d3aa8b5e489..49154897462 100644
--- a/inference-engine/samples/benchmark_app/README.md
+++ b/inference-engine/samples/benchmark_app/README.md
@@ -128,7 +128,7 @@ If a model has only image input(s), please provide a folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s) that is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
+To run the tool, you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
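+
+A minimal, illustrative invocation, assuming the model has already been converted to IR and a folder of images is available (paths are placeholders):
+
+```sh
+./benchmark_app -m <path_to_model>/model.xml -i <path_to_images> -d CPU -niter 100
+```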
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
@@ -200,4 +200,4 @@ Below are fragments of sample output for CPU and FPGA devices:
## See Also
* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
* [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
-* [Model Downloader](@ref omz_tools_downloader_README)
+* [Model Downloader](@ref omz_tools_downloader)
diff --git a/inference-engine/tools/benchmark_tool/README.md b/inference-engine/tools/benchmark_tool/README.md
index 1c213f67f1f..1eacb8f56ad 100644
--- a/inference-engine/tools/benchmark_tool/README.md
+++ b/inference-engine/tools/benchmark_tool/README.md
@@ -145,7 +145,7 @@ If a model has only image input(s), please a provide folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s), which is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
+To run the tool, you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
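+
+A minimal, illustrative invocation of the Python version of the tool, assuming an IR model and a folder of images (paths are placeholders):
+
+```sh
+python3 benchmark_app.py -m <path_to_model>/model.xml -i <path_to_images> -d CPU -api async
+```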
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
@@ -213,4 +213,4 @@ Below are fragments of sample output for CPU and FPGA devices:
## See Also
* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
* [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
-* [Model Downloader](@ref omz_tools_downloader_README)
+* [Model Downloader](@ref omz_tools_downloader)
diff --git a/tools/benchmark/README.md b/tools/benchmark/README.md
index 215d16bb47a..280b7a0ef53 100644
--- a/tools/benchmark/README.md
+++ b/tools/benchmark/README.md
@@ -151,7 +151,7 @@ If a model has only image input(s), please a provide folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s), which is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
+To run the tool, you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
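+
+For example, a model can be fetched with the Model Downloader and then benchmarked. The model name is only an example, and the output path depends on where the downloader places the files:
+
+```sh
+# Download an Open Model Zoo model (Intel models are already provided as IR files)
+python3 downloader.py --name face-detection-retail-0004
+# Benchmark the downloaded IR on CPU; adjust the path to the downloader output location
+python3 benchmark_app.py -m intel/face-detection-retail-0004/FP32/face-detection-retail-0004.xml -d CPU
+```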
> **NOTE**: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).