Integrate UAT fixes (#5517)

* Added info on DockerHub CI Framework

* Feature/azaytsev/change layout (#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>

* Updated openvino_docs.xml

* Edits to MO

Per findings spreadsheet

* macOS changes

per issue spreadsheet

* Fixes from review spreadsheet

Mostly IE_DG fixes

* Consistency changes

* Make doc fixes from last round of review

* integrate changes from baychub/master

* Update Intro.md

* Update Cutting_Model.md

* Update Cutting_Model.md

* Fixed link to Customize_Model_Optimizer.md

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
Co-authored-by: baychub <cbay@yahoo.com>
Author: Andrey Zaytsev
Date: 2021-05-06 15:37:13 +03:00
Committed by: GitHub
Parent: 4790c79eb4
Commit: 5e4cd1127b
91 changed files with 513 additions and 494 deletions


@@ -10,7 +10,7 @@ In this guide, you will:
[DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a web-based graphical environment that enables you to easily use various sophisticated
OpenVINO™ toolkit components:
* [Model Downloader](@ref omz_tools_downloader) to download models from the [Intel® Open Model Zoo](@ref omz_models_group_intel)
-with pretrained models for a range of different tasks
+with pre-trained models for a range of different tasks
* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to transform models into
the Intermediate Representation (IR) format
* [Post-Training Optimization toolkit](@ref pot_README) to calibrate a model and then execute it in the
@@ -70,7 +70,7 @@ The simplified OpenVINO™ DL Workbench workflow is:
## Run Baseline Inference
-This section illustrates a sample use case of how to infer a pretrained model from the [Intel® Open Model Zoo](@ref omz_models_group_intel) with an autogenerated noise dataset on a CPU device.
+This section illustrates a sample use case of how to infer a pre-trained model from the [Intel® Open Model Zoo](@ref omz_models_group_intel) with an autogenerated noise dataset on a CPU device.
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/9TRJwEmY0K4" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
@@ -82,7 +82,7 @@ Once you log in to the DL Workbench, create a project, which is a combination of
On the **Active Projects** page, click **Create** to open the **Create Project** page:
![](./dl_workbench_img/create_configuration.png)
-### Step 2. Choose a Pretrained Model
+### Step 2. Choose a Pre-trained Model
Click **Import** next to the **Model** table on the **Create Project** page. The **Import Model** page opens. Select the squeezenet1.1 model from the Open Model Zoo and click **Import**.
![](./dl_workbench_img/import_model_02.png)


@@ -94,6 +94,13 @@ The script:
<details>
<summary><strong>Click for an example of running the Image Classification demo script</strong></summary>
To preview the image that the script will classify:
```sh
cd ${INTEL_OPENVINO_DIR}/deployment_tools/demo
eog car.png
```
To run the script to perform inference on a CPU:
```sh
@@ -173,11 +180,12 @@ The script:
<details>
<summary><strong>Click for an example of running the Benchmark demo script</strong></summary>
-To run the script that performs inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
+To run the script that performs inference (runs on CPU by default):
```sh
-./demo_squeezenet_download_convert_run.sh -d HDDL
+./demo_benchmark_app.sh
```
When the verification script completes, you see the performance counters, resulting latency, and throughput values displayed on the screen.
</details>
@@ -514,6 +522,24 @@ source /opt/intel/openvino_2021/bin/setupvars.sh
## <a name="syntax-examples"></a> Typical Code Sample and Demo Application Syntax Examples
This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.10 or later installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos_README) pages.
To build all the demos and samples:
```sh
cd $INTEL_OPENVINO_DIR/inference_engine_samples/cpp
# to compile C samples, go here also: cd <INSTALL_DIR>/inference_engine/samples/c
build_samples.sh
cd $INTEL_OPENVINO_DIR/deployment_tools/open_model_zoo/demos
build_demos.sh
```
Depending on what you compiled, executables are in the directories below:
* `~/inference_engine_samples_build/intel64/Release`
* `~/inference_engine_cpp_samples_build/intel64/Release`
* `~/inference_engine_demos_build/intel64/Release`
Template to call sample code or a demo application:
```sh
```

@@ -95,9 +95,10 @@ The script:
<details>
<summary><strong>Click for an example of running the Image Classification demo script</strong></summary>
-To run the script to perform inference on a CPU:
+To run the script to view the sample image and perform inference on the CPU:
```sh
open car.png
./demo_squeezenet_download_convert_run.sh
```
@@ -171,7 +172,7 @@ The script:
To run the script that performs inference on a CPU:
```sh
-./demo_squeezenet_download_convert_run.sh
+./demo_benchmark_app.sh
```
When the verification script completes, you see the performance counters, resulting latency, and throughput values displayed on the screen.
</details>
@@ -210,7 +211,7 @@ You must have a model that is specific for your inference task. Example model typ
- Custom (Often based on SSD)
Options to find a model suitable for the OpenVINO™ toolkit are:
-- Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader).
+- Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using the [Model Downloader tool](@ref omz_tools_downloader).
- Download from GitHub*, Caffe* Zoo, TensorFlow* Zoo, and other resources.
- Train your own model.
@@ -312,6 +313,8 @@ Models in the Intermediate Representation format always include a pair of `.xml`
- **REQUIRED:** `model_name.xml`
- **REQUIRED:** `model_name.bin`
The conversion may also create a `model_name.mapping` file, but it is not needed for running inference.
This guide uses the public SqueezeNet 1.1 Caffe\* model to run the Image Classification Sample. See the <a href="#download-models">Download Models</a> section for an example of how to download this model.
The `squeezenet1.1` model is downloaded in the Caffe* format. You must use the Model Optimizer to convert the model to the IR.
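The required IR file pair described above can also be verified programmatically before running inference; a minimal sketch (the `find_ir` helper is hypothetical, not part of the toolkit):

```python
from pathlib import Path

def find_ir(model_dir: str, model_name: str):
    """Return the (.xml, .bin) IR pair for a model, raising if either file is missing."""
    d = Path(model_dir)
    xml, weights = d / f"{model_name}.xml", d / f"{model_name}.bin"
    missing = [p.name for p in (xml, weights) if not p.is_file()]
    if missing:
        raise FileNotFoundError(f"IR files missing in {d}: {missing}")
    return xml, weights
```

The optional `model_name.mapping` file is deliberately ignored here, since it is not needed for running inference.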
@@ -376,7 +379,7 @@ To run the **Image Classification** code sample with an input image on the IR:
```
3. Run the code sample executable, specifying the input media file, the IR of your model, and a target device on which you want to perform inference:
```sh
-classification_sample_async -i <path_to_media> -m <path_to_model> -d <target_device>
+./classification_sample_async -i <path_to_media> -m <path_to_model> -d <target_device>
```
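The invocation template above can be mirrored in a small helper that assembles the command line, which is handy when scripting many runs; a sketch assuming the executable lives in the current directory (the `sample_cmd` name is hypothetical):

```python
import shlex

def sample_cmd(app: str, media: str, model: str, device: str = "CPU") -> str:
    """Build the sample invocation: ./<app> -i <media> -m <model> -d <device>."""
    parts = [f"./{app}", "-i", media, "-m", model, "-d", device]
    # shlex.quote protects paths that contain spaces or shell metacharacters
    return " ".join(shlex.quote(p) for p in parts)

# e.g. sample_cmd("classification_sample_async", "car.png", "squeezenet1.1.xml")
#   -> "./classification_sample_async -i car.png -m squeezenet1.1.xml -d CPU"
```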
<details>
<summary><strong>Click for examples of running the Image Classification code sample on different devices</strong></summary>
@@ -473,6 +476,24 @@ source /opt/intel/openvino_2021/bin/setupvars.sh
## <a name="syntax-examples"></a> Typical Code Sample and Demo Application Syntax Examples
This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.13 or later installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos_README) pages.
To build all the demos and samples:
```sh
cd $INTEL_OPENVINO_DIR/inference_engine_samples/cpp
# to compile C samples, go here also: cd <INSTALL_DIR>/inference_engine/samples/c
build_samples.sh
cd $INTEL_OPENVINO_DIR/deployment_tools/open_model_zoo/demos
build_demos.sh
```
Depending on what you compiled, executables are in the directories below:
* `~/inference_engine_samples_build/intel64/Release`
* `~/inference_engine_cpp_samples_build/intel64/Release`
* `~/inference_engine_demos_build/intel64/Release`
Template to call sample code or a demo application:
```sh
```
@@ -482,8 +503,8 @@ Template to call sample code or a demo application:
With the sample information specified, the command might look like this:
```sh
-./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-  -m ~/ir/fp32/mobilenet-ssd.xml -d CPU
+cd $INTEL_OPENVINO_DIR/deployment_tools/open_model_zoo/demos/object_detection_demo
+./object_detection_demo -i ~/Videos/catshow.mp4 -m ~/ir/fp32/mobilenet-ssd.xml -d CPU
```
## <a name="advanced-samples"></a> Advanced Demo Use


@@ -96,6 +96,8 @@ The script:
To run the script to perform inference on a CPU:
1. Open the `car.png` file in any image viewer to see what the demo will be classifying.
2. Run the following script:
```bat
.\demo_squeezenet_download_convert_run.bat
```
@@ -167,10 +169,10 @@ The script:
<details>
<summary><strong>Click for an example of running the Benchmark demo script</strong></summary>
-To run the script that performs inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
+To run the script that performs inference (runs on CPU by default):
```bat
-.\demo_squeezenet_download_convert_run.bat -d HDDL
+.\demo_benchmark_app.bat
```
When the verification script completes, you see the performance counters, resulting latency, and throughput values displayed on the screen.
</details>
@@ -482,6 +484,24 @@ Below you can find basic guidelines for executing the OpenVINO™ workflow using
## <a name="syntax-examples"></a> Typical Code Sample and Demo Application Syntax Examples
This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.10 or later and Microsoft Visual Studio 2017 or 2019 installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos_README) pages.
To build all the demos and samples:
```bat
cd %INTEL_OPENVINO_DIR%\inference_engine_samples\cpp
rem to compile C samples, go here also: cd <INSTALL_DIR>\inference_engine\samples\c
build_samples_msvc.bat
cd %INTEL_OPENVINO_DIR%\deployment_tools\open_model_zoo\demos
build_demos_msvc.bat
```
Depending on what you compiled, executables are in the directories below:
* `C:\Users\<username>\Documents\Intel\OpenVINO\inference_engine_c_samples_build\intel64\Release`
* `C:\Users\<username>\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release`
* `C:\Users\<username>\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release`
Template to call sample code or a demo application:
```bat
```