[DOC] Update Docker install guide (#3055)

* [DOC] Update Docker install guide

* [DOC] Add proxy for Windows Docker install guide

* [DOC] move up prebuilt images section

* Update installing-openvino-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

Formatting fixes

* Update installing-openvino-docker-linux.md

Fixed formatting issues

* Update installing-openvino-docker-windows.md

Minor fixes

* Update installing-openvino-docker-linux.md

Fixed formatting issues

* [DOC] update text with CPU image, remove proxy for win

* Update installing-openvino-docker-windows.md

Minor fixes

* Update installing-openvino-docker-windows.md

Minor fix

* Update installing-openvino-docker-windows.md

Minor fix

* Update installing-openvino-docker-windows.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Kate Generalova 2020-11-17 16:43:56 +03:00 committed by GitHub
parent 5bc74aac75
commit 4a09888ef4
3 changed files with 215 additions and 262 deletions

`installing-openvino-docker-linux.md`

@@ -9,11 +9,17 @@ This guide provides the steps for creating a Docker* image with Intel® Distribution of OpenVINO™ toolkit
**Target Operating Systems**
- Ubuntu\* 18.04 long-term support (LTS), 64-bit
- Ubuntu\* 20.04 long-term support (LTS), 64-bit
- CentOS\* 7.6
**Host Operating Systems**
- Linux with an installed GPU driver and with a Linux kernel supported by the GPU driver
## Prebuilt images
Prebuilt images are available on [Docker Hub](https://hub.docker.com/u/openvino).
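For example, you can pull a development image for Ubuntu 18.04 and start it right away (the tag below is illustrative; check Docker Hub for the currently published repositories and tags):
```sh
docker pull openvino/ubuntu18_dev:latest
docker run -it --rm openvino/ubuntu18_dev:latest
```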
## Use Docker* Image for CPU
- Kernel reports the same information for all containers as for a native application, for example, CPU and memory information.
@@ -22,127 +28,14 @@ This guide provides the steps for creating a Docker* image with Intel® Distribution of OpenVINO™ toolkit
### <a name="building-for-cpu"></a>Build a Docker* Image for CPU
To build a Docker image, create a `Dockerfile` that contains defined variables and commands required to create an OpenVINO toolkit installation image. You can use [available Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles) or generate a Dockerfile with your settings via [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit.
The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit.
### Run the Docker* Image for CPU
Run the image with the following command:
```sh
docker run -it --rm <image_name>
```
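To sanity-check the image, you can source the OpenVINO environment inside the container and list the devices visible to the Inference Engine (a minimal sketch, assuming the default `/opt/intel/openvino` install location used in this guide):
```sh
docker run -it --rm <image_name>
# inside the container:
source /opt/intel/openvino/bin/setupvars.sh
python3 -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"
# expected to print at least ['CPU']
```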
## Use a Docker* Image for GPU
### Build a Docker* Image for GPU
@@ -153,8 +46,9 @@ docker run -it <image_name>
- The Intel® OpenCL™ runtime package must be included in the container.
- In the container, the user must be in the `video` group.
Before building a Docker* image on GPU, add the following commands to a Dockerfile:
**Ubuntu 18.04/20.04**:
```sh
WORKDIR /tmp/opencl
RUN usermod -aG video openvino
@@ -170,28 +64,36 @@ RUN apt-get update && \
ldconfig && \
rm /tmp/opencl
```
**CentOS 7.6**:
```sh
WORKDIR /tmp/opencl
RUN groupmod -g 44 video
RUN yum update -y && yum install -y epel-release && \
yum update -y && yum install -y ocl-icd ocl-icd-devel && \
yum clean all && rm -rf /var/cache/yum && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-gmmlib-19.3.2-1.el7.x86_64.rpm/download -o intel-gmmlib-19.3.2-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-gmmlib-devel-19.3.2-1.el7.x86_64.rpm/download -o intel-gmmlib-devel-19.3.2-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-core-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-core-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-opencl-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-opencl-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-opencl-devel-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-opencl-devel-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-opencl-19.41.14441-1.el7.x86_64.rpm/download -o intel-opencl-19.41.14441-1.el7.x86_64.rpm && \
rpm -ivh /tmp/opencl/*.rpm && \
ldconfig && \
rm -rf /tmp/opencl && \
yum remove -y epel-release
```
### Run the Docker* Image for GPU
To make GPU available in the container, attach the GPU to the container using the `--device /dev/dri` option and run the container:
```sh
docker run -it --rm --device /dev/dri <image_name>
```
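To confirm that the device was passed through, you can list the DRI nodes from inside the container (a quick check, assuming standard coreutils are present in the image):
```sh
docker run -it --rm --device /dev/dri <image_name> ls -l /dev/dri
# expected to show card0 and/or renderD128
```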
## Use a Docker* Image for Intel® Neural Compute Stick 2
### Build and Run the Docker* Image for Intel® Neural Compute Stick 2
Build a Docker image using the same steps as for CPU.
**Known limitations:**
@@ -199,12 +101,24 @@ Build a Docker image using the same steps as for CPU.
- UDEV events are not forwarded to the container by default, so it does not know about device reconnection.
- Only one device per host is supported.

Use one of the following options as **possible solutions for Intel® Neural Compute Stick 2**:
#### Option #1
1. Get rid of UDEV by rebuilding `libusb` without UDEV support in the Docker* image (add the following commands to a `Dockerfile`):
- **Ubuntu 18.04/20.04**:
```sh
ARG BUILD_DEPENDENCIES="autoconf \
automake \
build-essential \
libtool \
unzip \
udev"
RUN apt-get update && \
apt-get install -y --no-install-recommends ${BUILD_DEPENDENCIES} && \
rm -rf /var/lib/apt/lists/*
RUN usermod -aG users openvino
WORKDIR /opt
RUN curl -L https://github.com/libusb/libusb/archive/v1.0.22.zip --output v1.0.22.zip && \
unzip v1.0.22.zip
@@ -213,9 +127,6 @@ WORKDIR /opt/libusb-1.0.22
RUN ./bootstrap.sh && \
./configure --disable-udev --enable-shared && \
make -j4
WORKDIR /opt/libusb-1.0.22/libusb
RUN /bin/mkdir -p '/usr/local/lib' && \
@@ -226,39 +137,103 @@ RUN /bin/mkdir -p '/usr/local/lib' && \
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
ldconfig
```
- **CentOS 7.6**:
```sh
ARG BUILD_DEPENDENCIES="autoconf \
automake \
libtool \
unzip \
udev"
# hadolint ignore=DL3031, DL3033
RUN yum update -y && yum install -y ${BUILD_DEPENDENCIES} && \
yum group install -y "Development Tools" && \
yum clean all && rm -rf /var/cache/yum
WORKDIR /opt
RUN curl -L https://github.com/libusb/libusb/archive/v1.0.22.zip --output v1.0.22.zip && \
unzip v1.0.22.zip && rm -rf v1.0.22.zip
WORKDIR /opt/libusb-1.0.22
RUN ./bootstrap.sh && \
./configure --disable-udev --enable-shared && \
make -j4
WORKDIR /opt/libusb-1.0.22/libusb
RUN /bin/mkdir -p '/usr/local/lib' && \
/bin/bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la '/usr/local/lib' && \
/bin/mkdir -p '/usr/local/include/libusb-1.0' && \
/usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0' && \
/bin/mkdir -p '/usr/local/lib/pkgconfig' && \
printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino/bin/setupvars.sh
WORKDIR /opt/libusb-1.0.22/
RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \
cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \
ldconfig
```
2. Run the Docker* image:
```sh
docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name>
```
#### Option #2
Run the container in privileged mode, enable the Docker network configuration as host, and mount all devices to the container:
```sh
docker run -it --rm --privileged -v /dev:/dev --network=host <image_name>
```
> **NOTES**:
> - It is not secure.
> - Conflicts with Kubernetes* and other tools that use orchestration and private networks may occur.
## Use a Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
### Build Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To use the Docker container for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
1. Set up the environment on the host machine that is going to be used for running Docker*.
It is required to execute `hddldaemon`, which is responsible for communication between the HDDL plugin and the board.
To learn how to set up the environment (the OpenVINO package or HDDL package must be pre-installed), see [Configuration guide for HDDL device](https://github.com/openvinotoolkit/docker_ci/blob/master/install_guide_vpu_hddl.md) or [Configuration Guide for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](installing-openvino-linux-ivad-vpu.md).
2. Prepare the Docker* image (add the following commands to a Dockerfile).
- **Ubuntu 18.04**:
```sh
WORKDIR /tmp
RUN apt-get update && \
apt-get install -y --no-install-recommends \
libboost-filesystem1.65-dev \
libboost-thread1.65-dev \
libjson-c3 libxxf86vm-dev && \
rm -rf /var/lib/apt/lists/* && rm -rf /tmp/*
```
- **Ubuntu 20.04**:
```sh
WORKDIR /tmp
RUN apt-get update && \
apt-get install -y --no-install-recommends \
libboost-filesystem-dev \
libboost-thread-dev \
libjson-c4 \
libxxf86vm-dev && \
rm -rf /var/lib/apt/lists/* && rm -rf /tmp/*
```
- **CentOS 7.6**:
```sh
WORKDIR /tmp
RUN yum update -y && yum install -y \
boost-filesystem \
boost-thread \
boost-program-options \
boost-system \
boost-chrono \
boost-date-time \
boost-regex \
boost-atomic \
json-c \
libXxf86vm-devel && \
yum clean all && rm -rf /var/cache/yum
```
3. Run `hddldaemon` on the host in a separate terminal session using the following command:
```sh
$HDDL_INSTALL_DIR/hddldaemon
```
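`$HDDL_INSTALL_DIR` is exported by the OpenVINO environment script, so source it first if the variable is not set in your shell (a minimal sketch, assuming the default host installation path):
```sh
source /opt/intel/openvino/bin/setupvars.sh
$HDDL_INSTALL_DIR/hddldaemon
```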
### Run the Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To run the built Docker* image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, use the following command:
```sh
docker run -it --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp <image_name>
```
> **NOTES**:
> - The device `/dev/ion` needs to be shared to be able to use ion buffers among the plugin, `hddldaemon` and the kernel.
> - Since separate inference tasks share the same HDDL service communication interface (the service creates mutexes and a socket file in `/var/tmp`), `/var/tmp` needs to be mounted and shared among them.

In some cases, the ion driver is not enabled (for example, due to a newer kernel version or iommu incompatibility), and `lsmod | grep myd_ion` returns empty output. To resolve, use the following command:
```sh
docker run -it --rm --net=host -v /var/tmp:/var/tmp --ipc=host <image_name>
```
> **NOTES**:
> - When building Docker images, create a user in the Dockerfile that has the same UID and GID as the user that runs `hddldaemon` on the host (see the sketch after these notes).
> - Run the application in the Docker container as this user.
> - Alternatively, you can start `hddldaemon` as the root user on the host, but this approach is not recommended.
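A minimal sketch of such a user setup (the `hddluser` name and the `1000` IDs are placeholders; substitute the UID and GID reported by `id` for the host user that runs `hddldaemon`):
```sh
ARG HOST_UID=1000
ARG HOST_GID=1000
# Create a container user that matches the host user running hddldaemon
RUN groupadd -g ${HOST_GID} hddlgroup && \
    useradd -m -u ${HOST_UID} -g ${HOST_GID} hddluser
USER hddluser
```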
### Run Demos in the Docker* Image
To run the Security Barrier Camera Demo on a specific inference device, run the following commands with root privileges (additional third-party dependencies will be installed):
**CPU**:
```sh
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --device /dev/dri:/dev/dri --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
/bin/bash -c "apt update && apt install -y sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d CPU -sample-options -no_show"
```
**GPU**:
```sh
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --device /dev/dri:/dev/dri --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
/bin/bash -c "apt update && apt install -y sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d GPU -sample-options -no_show"
```
**MYRIAD**:
```sh
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --device /dev/dri:/dev/dri --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
/bin/bash -c "apt update && apt install -y sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d MYRIAD -sample-options -no_show"
```
**HDDL**:
```sh
docker run -itu root:root --rm --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp --device /dev/dri:/dev/dri --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb <image_name> \
/bin/bash -c "apt update && apt install -y sudo && deployment_tools/demo/demo_security_barrier_camera.sh -d HDDL -sample-options -no_show"
```
## Use a Docker* Image for FPGA
Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep-learning. As part of this transition, future standard releases (i.e., non-LTS releases) of Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
@@ -292,6 +295,10 @@ Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue t
For instructions for previous releases with FPGA Support, see documentation for the [2020.4 version](https://docs.openvinotoolkit.org/2020.4/openvino_docs_install_guides_installing_openvino_docker_linux.html#use_a_docker_image_for_fpga) or lower.
## Troubleshooting
If you encounter proxy issues, set up proxy settings for Docker. See the Proxy section in the [Install the DL Workbench from Docker Hub*](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) topic.
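One common way to do this, assuming Docker 17.07 or newer, is a `proxies` section in `~/.docker/config.json` on the host (the proxy URLs below are placeholders):
```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://your_proxy_server.com:port",
      "httpsProxy": "http://your_proxy_server.com:port",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```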
## Additional Resources
* [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.

`installing-openvino-docker-windows.md`

@@ -15,134 +15,71 @@ This guide provides the steps for creating a Docker* image with Intel® Distribution of OpenVINO™ toolkit
- Windows 10*, 64-bit Pro, Enterprise or Education (1607 Anniversary Update, Build 14393 or later) editions
- Windows Server* 2016 or higher
## Prebuilt Images
Prebuilt images are available on [Docker Hub](https://hub.docker.com/u/openvino).
## Build a Docker* Image for CPU
To build a Docker image, create a `Dockerfile` that contains defined variables and commands required to create an OpenVINO toolkit installation image. You can use [available Dockerfiles](https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles) or generate a Dockerfile with your settings via [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit.
The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit.
## Install Additional Dependencies
### Install CMake
To add CMake to the image, add the following commands to the Dockerfile:
~~~
RUN powershell.exe -Command `
Invoke-WebRequest -URI https://cmake.org/files/v3.14/cmake-3.14.7-win64-x64.msi -OutFile %TMP%\\cmake-3.14.7-win64-x64.msi ; `
Start-Process %TMP%\\cmake-3.14.7-win64-x64.msi -ArgumentList '/quiet /norestart' -Wait ; `
Remove-Item %TMP%\\cmake-3.14.7-win64-x64.msi -Force

RUN SETX /M PATH "C:\Program Files\CMake\Bin;%PATH%"
~~~
In case of proxy issues, add the `ARG HTTPS_PROXY` instruction and the `-Proxy %HTTPS_PROXY%` setting to the `powershell.exe` command in the Dockerfile. Then build the Docker image:
~~~
docker build . -t <image_name> `
--build-arg HTTPS_PROXY=<https://your_proxy_server:port>
~~~
### Install Microsoft Visual Studio* Build Tools
You can add Microsoft Visual Studio Build Tools* to a Windows* OS Docker image. Available options are to use the offline installer for Build Tools
(follow the [Instruction for the offline installer](https://docs.microsoft.com/en-us/visualstudio/install/create-an-offline-installation-of-visual-studio?view=vs-2019)) or
to use the online installer for Build Tools (follow the [Instruction for the online installer](https://docs.microsoft.com/en-us/visualstudio/install/build-tools-container?view=vs-2019)).
Microsoft Visual Studio Build Tools* are licensed as a supplement to your existing Microsoft Visual Studio* license.
Any images built with these tools should be for your personal use or for use in your organization in accordance with your existing Visual Studio* and Windows* licenses.
To add MSBuild 2019 to the image, add the following commands to the Dockerfile:
~~~
RUN powershell.exe -Command Invoke-WebRequest -URI https://aka.ms/vs/16/release/vs_buildtools.exe -OutFile %TMP%\\vs_buildtools.exe
RUN %TMP%\\vs_buildtools.exe --quiet --norestart --wait --nocache `
--installPath "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools" `
--add Microsoft.VisualStudio.Workload.MSBuildTools `
--add Microsoft.VisualStudio.Workload.UniversalBuildTools `
--add Microsoft.VisualStudio.Workload.VCTools --includeRecommended `
--remove Microsoft.VisualStudio.Component.Windows10SDK.10240 `
--remove Microsoft.VisualStudio.Component.Windows10SDK.10586 `
--remove Microsoft.VisualStudio.Component.Windows10SDK.14393 `
--remove Microsoft.VisualStudio.Component.Windows81SDK || IF "%ERRORLEVEL%"=="3010" EXIT 0 && powershell set-executionpolicy remotesigned
~~~
In case of proxy issues, please use the offline installer for Build Tools (follow the [Instruction for the offline installer](https://docs.microsoft.com/en-us/visualstudio/install/create-an-offline-installation-of-visual-studio?view=vs-2019)).
## Run the Docker* Image for CPU
To install the OpenVINO toolkit from the prepared Docker image, run the image with the following command (currently only the CPU target is supported):
~~~
docker run -it --rm <image_name>
~~~
If you want to try some demos, run the image with root privileges (some additional third-party dependencies will be installed):
~~~
docker run -itu ContainerAdministrator --rm <image_name> cmd /S /C "cd deployment_tools\demo && demo_security_barrier_camera.bat -d CPU -sample-options -no_show"
~~~
## Troubleshooting
If you encounter proxy issues, set up proxy settings for Docker. See the Proxy section in the [Install the DL Workbench from Docker Hub*](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) topic.
## Additional Resources
* [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.

`installing-openvino-linux.md`

@@ -116,6 +116,15 @@ sudo ./install_GUI.sh
```sh
sudo ./install.sh
```
- **Option 3:** Command-Line Silent Instructions:
```sh
sudo sed -i 's/decline/accept/g' silent.cfg
sudo ./install.sh -s silent.cfg
```
You can select which OpenVINO components will be installed by modifying the `COMPONENTS` parameter in the `silent.cfg` file. For example, to install only the CPU runtime for the Inference Engine, set
`COMPONENTS=intel-openvino-ie-rt-cpu__x86_64` in `silent.cfg` (see the sketch below).
To get a full list of available components for installation, run the `./install.sh --list_components` command from the unpacked OpenVINO™ toolkit package.
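For illustration, a `silent.cfg` fragment for a CPU-only runtime installation might look like this (a sketch only; start from the `silent.cfg` shipped in your package rather than writing one from scratch):
```sh
ACCEPT_EULA=accept
COMPONENTS=intel-openvino-ie-rt-cpu__x86_64
```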
6. Follow the instructions on your screen. Watch for informational messages such as the following in case you must complete additional steps: