The Intel® Distribution of OpenVINO™ toolkit enables quick deployment of applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware to maximize performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit.
This guide provides the steps for creating a Docker* image with the Intel® Distribution of OpenVINO™ toolkit for Linux* and for running the resulting containers.
System Requirements
Target Operating Systems
- Ubuntu* 16.04 long-term support (LTS), 64-bit
- CentOS* 7.4, 64-bit
Host Operating Systems
- Linux with installed GPU driver and with Linux kernel supported by GPU driver
Use a Docker* Image for CPU
- The kernel reports the same information (for example, CPU and memory) to all containers as to a native application.
- All instructions available to the host process are also available to a process in a container, including, for example, AVX2 and AVX512. There are no restrictions.
- Docker* does not use virtualization or emulation. A process in Docker* is a regular Linux process that is isolated from the external world at the kernel level, so the performance penalty is small.
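For example, you can check that a container reports the same CPU flags as the host; both commands below should print an identical flags line:
grep -m1 flags /proc/cpuinfo                              # on the host
docker run --rm ubuntu:16.04 grep -m1 flags /proc/cpuinfo # inside a container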
Build a Docker* Image for CPU
To build a Docker image, create a Dockerfile that defines the variables and commands required to create an OpenVINO toolkit installation image.
Create your Dockerfile using the following example as a template:
FROM ubuntu:16.04
ENV http_proxy $HTTP_PROXY
ENV https_proxy $HTTPS_PROXY
ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/13231/l_openvino_toolkit_p_2019.0.000.tgz
ARG INSTALL_DIR=/opt/intel/openvino
ARG TEMP_DIR=/tmp/openvino_installer
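# install tools required by the OpenVINO installer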
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
cpio \
sudo \
lsb-release && \
rm -rf /var/lib/apt/lists/*
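# download the OpenVINO toolkit package, accept the EULA in silent.cfg, and install silently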
RUN mkdir -p $TEMP_DIR && cd $TEMP_DIR && \
wget -c $DOWNLOAD_LINK && \
tar xf l_openvino_toolkit*.tgz && \
cd l_openvino_toolkit* && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh -s silent.cfg && \
rm -rf $TEMP_DIR
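# install external dependencies of the OpenVINO toolkit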
RUN $INSTALL_DIR/install_dependencies/install_openvino_dependencies.sh
# build Inference Engine samples
RUN mkdir $INSTALL_DIR/deployment_tools/inference_engine/samples/build && cd $INSTALL_DIR/deployment_tools/inference_engine/samples/build && \
/bin/bash -c "source $INSTALL_DIR/bin/setupvars.sh && cmake .. && make -j1"
NOTE: Replace the direct link in the DOWNLOAD_LINK variable with a link to the latest version of the Intel® Distribution of OpenVINO™ toolkit package. You can copy the link from the Intel® Distribution of OpenVINO™ toolkit download page https://software.seek.intel.com/openvino-toolkit after registration: right-click the Offline Installer button for Linux on the download page and select Copy link address.
You can select which OpenVINO components are installed by modifying the COMPONENTS parameter in the silent.cfg file. For example, to install only the CPU runtime for the Inference Engine, set COMPONENTS=intel-openvino-ie-rt-cpu__x86_64 in silent.cfg.
To get a full list of available components for installation, run the ./install.sh --list_components command from the unpacked OpenVINO™ toolkit package.
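For example, assuming the silent.cfg in the unpacked package contains a COMPONENTS line (as in the 2019 packages), you can switch to the CPU-only runtime with a command like the following, either manually or as an extra step next to the existing sed command in the Dockerfile:
sed -i 's/^COMPONENTS=.*/COMPONENTS=intel-openvino-ie-rt-cpu__x86_64/' silent.cfg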
To build a Docker* image for CPU, run the following command:
docker build . -t <image_name> \
--build-arg HTTP_PROXY=<http://your_proxy_server.com:port> \
--build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
Run the Docker* Image for CPU
To run the prepared Docker image with the OpenVINO toolkit installed, use the following command:
docker run -it <image_name>
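To verify the installation inside the running container, you can source the environment script and list the sample binaries built by the Dockerfile. The output directory below is the default cmake build location for these releases and may differ in other versions:
source /opt/intel/openvino/bin/setupvars.sh
ls /opt/intel/openvino/deployment_tools/inference_engine/samples/build/intel64/Release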
Use a Docker* Image for GPU
Build a Docker* Image for GPU
Prerequisites:
- The GPU is not available in the container by default; you must attach it to the container.
- The kernel driver must be installed on the host.
- The Intel® OpenCL™ runtime package must be included in the container.
- In the container, the user must be in the video group.
Before building a Docker* image for GPU, add the following commands to the Dockerfile example for CPU above:
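# install the Intel® OpenCL™ driver copied into the build context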
COPY intel-opencl*.deb /opt/gfx/
RUN cd /opt/gfx && \
dpkg -i intel-opencl*.deb && \
ldconfig && \
rm -rf /opt/gfx
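# add a non-root user that belongs to the video group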
RUN useradd -G video -ms /bin/bash user
USER user
To build a Docker* image for GPU:
- Copy the Intel® OpenCL™ driver for Ubuntu* (intel-opencl*.deb) from <OPENVINO_INSTALL_DIR>/install_dependencies to the folder with the Dockerfile.
- Run the following command to build a Docker* image:
docker build . -t <image_name> \
--build-arg HTTP_PROXY=<http://your_proxy_server.com:port> \
--build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
Run the Docker* Image for GPU
To make the GPU available in the container, attach the GPU to the container using the --device /dev/dri option and run the container:
docker run -it --device /dev/dri <image_name>
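To confirm that the GPU is visible inside the container, you can list the DRI device nodes; the card* and renderD* entries should match those on the host:
docker run -it --device /dev/dri <image_name> ls -l /dev/dri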
Use a Docker* Image for Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2
Build a Docker* Image for Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2
Build a Docker image using the same steps as for CPU.
Run the Docker* Image for Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2
Known limitations:
- The Intel® Movidius™ Neural Compute Stick changes its VendorID and DeviceID during execution, so each time it appears to the host system as a brand-new device. This means it cannot be mounted in the usual way.
- UDEV events are not forwarded to the container by default, so the container does not know about device reconnections.
- Only one device per host is supported.
Use one of the following solutions to run inference on an Intel® Movidius™ Neural Compute Stick:
- Solution #1: Run the container in privileged mode, enable the Docker network configuration as host, and mount all devices to the container:
docker run --privileged -v /dev:/dev --network=host <image_name>
Notes:
- It is not secure
- Conflicts with Kubernetes* and other tools that use orchestration and private networks
- Solution #2:
- Get rid of UDEV by rebuilding libusb without UDEV support in the Docker* image (the rebuild requires extra build tools; see the sketch after this list):
RUN cd /tmp/ && \
wget https://github.com/libusb/libusb/archive/v1.0.22.zip && \
unzip v1.0.22.zip && cd libusb-1.0.22 && \
./bootstrap.sh && \
./configure --disable-udev --enable-shared && \
make -j4 && make install && \
rm -rf /tmp/*
- Run the Docker* image in privileged mode:
docker run --privileged -v /dev:/dev <image_name>
NOTES:
- It is not secure
- No conflicts with Kubernetes*
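Note that the libusb rebuild in Solution #2 assumes that unzip and a build toolchain are already present in the image, which the CPU Dockerfile in this guide does not install. A minimal sketch of the extra Dockerfile step (package names are for Ubuntu* 16.04):
RUN apt-get update && apt-get install -y --no-install-recommends \
    unzip \
    build-essential \
    autoconf \
    automake \
    libtool && \
    rm -rf /var/lib/apt/lists/*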
Use a Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
Build a Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To use the Docker container for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
- Set up the environment on the host machine that will be used for running Docker*. You must run hddldaemon, which is responsible for communication between the HDDL plugin and the board. To learn how to set up the environment (the OpenVINO package must be pre-installed), see the Configuration Guide for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
- Prepare the Docker* image. As a base image, you can use the image from the section Build a Docker* Image for CPU. To use it for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, rebuild the image with the following dependencies added:
RUN apt-get update && \
apt-get install -y libboost-filesystem1.58.0 libboost-thread1.58.0
- Run hddldaemon on the host in a separate terminal session using the following command:
$HDDL_INSTALL_DIR/hddldaemon
Run the Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To run the built Docker* image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, use the following command:
docker run --device=/dev/ion:/dev/ion -v /var/tmp:/var/tmp -ti <image_name>
NOTE:
- The device /dev/ion needs to be shared to be able to use ion buffers among the plugin, hddldaemon, and the kernel.
- Since separate inference tasks share the same HDDL service communication interface (the service creates mutexes and a socket file in /var/tmp), /var/tmp needs to be mounted and shared among them.
Use a Docker* Image for FPGA
Build a Docker* Image for FPGA
The FPGA card is not available in the container by default, but it can be mounted there with the following prerequisites:
- The FPGA device is up and ready to run inference.
- FPGA bitstreams have been pushed to the device over PCIe.
To build a Docker* image for FPGA:
- Set additional environment variables in the Dockerfile:
ENV CL_CONTEXT_COMPILER_MODE_INTELFPGA=3
ENV DLA_AOCX=/opt/intel/openvino/a10_devkit_bitstreams/2-0-1_RC_FP11_Generic.aocx
ENV PATH=/opt/altera/aocl-pro-rte/aclrte-linux64/bin:$PATH
- Install the following UDEV rule:
cat <<EOF > fpga.rules
KERNEL=="acla10_ref*",GROUP="users",MODE="0660"
EOF
sudo cp fpga.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo ldconfig
Make sure that the container user is added to the "users" group with the same GID as on the host; see the sketch below.
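A hypothetical Dockerfile fragment that aligns the group ID (100 is a placeholder; substitute the GID reported by getent group users on the host):
ARG USERS_GID=100
RUN groupmod -g $USERS_GID users && \
    useradd -G users -ms /bin/bash user
USER user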
Run the Docker* Image for FPGA
To run the built Docker* image for FPGA, use the following command:
docker run --rm -it \
--mount type=bind,source=/opt/intel/intelFPGA_pro,destination=/opt/intel/intelFPGA_pro \
--mount type=bind,source=/opt/altera,destination=/opt/altera \
--mount type=bind,source=/etc/OpenCL/vendors,destination=/etc/OpenCL/vendors \
--mount type=bind,source=/opt/Intel/OpenCL/Boards,destination=/opt/Intel/OpenCL/Boards \
--device /dev/acla10_ref0:/dev/acla10_ref0 \
<image_name>
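Inside the container, you can then check that the OpenCL runtime detects the board, for example with the aocl utility from the bind-mounted Intel® FPGA RTE (its bin directory is on PATH via the environment variable set in the Dockerfile above):
aocl diagnose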
Additional Resources
OpenVINO™ toolkit home page: https://software.intel.com/en-us/openvino-toolkit
OpenVINO™ toolkit documentation: https://software.intel.com/en-us/openvino-toolkit/documentation/featured
Intel® Neural Compute Stick 2 Get Started: https://software.intel.com/en-us/neural-compute-stick/get-started