TVM From Beginner to Mastery | Install TVM (Part 2)

Contents at a glance: There are three ways to install TVM: installing from source code, installing from a Docker image, and installing with NNPACK Contrib. This article explains how to install TVM using a Docker image and NNPACK Contrib.
Keywords: TVM, Docker, Basic Tutorial
This article was first published on WeChat official account: HyperAI
Welcome back to TVM Documentation 101, our ongoing series of daily TVM tutorials.
In Part 1 we introduced how to install TVM from source code. This issue continues the preparations for learning TVM and explains how to install TVM via a Docker image and NNPACK Contrib.
TVM Installation: Docker Image Installation
Developers can use the Docker tool scripts to set up a development environment, which is also helpful for running TVM demos and tutorials.
Requires Docker
https://docs.docker.com/engine/installation
If you use CUDA, you will need nvidia-docker:
https://github.com/NVIDIA/nvidia-docker
Get the TVM source distribution or clone the GitHub repository to get the helper scripts:
git clone --recursive https://github.com/apache/tvm tvm
Use the following command to start the Docker image:
/path/to/tvm/docker/bash.sh <image-name>
After the local build is completed, the image-name here can be a local Docker image name, for example: tvm.ci_cpu.
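For example, a minimal invocation (assuming you built the tvm.ci_cpu image locally and are in the repository root):

# start an interactive shell inside the locally built image
./docker/bash.sh tvm.ci_cpu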
This helper script enables the following (a rough docker run equivalent is sketched after this list):
- Mount the current directory to /workspace
- Switch user to the user that invokes bash.sh (so you can read/write to the host system)
- Use the host's network on Linux. On macOS, since the host network driver is not supported, use bridged networking and expose port 8888 to use Jupyter Notebook.
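For orientation, the script's behavior on Linux corresponds roughly to a docker run invocation like the following (a simplified sketch; the real bash.sh performs additional setup):

# mount the current directory to /workspace, run as the invoking user,
# and share the host's network
docker run --rm -it \
    -v "$(pwd)":/workspace -w /workspace \
    -u "$(id -u)":"$(id -g)" \
    --net=host \
    <image-name> bash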
Start Jupyter Notebook by typing:
jupyter notebook
If you see the error OSError: [Errno 99] Cannot assign requested address when launching Jupyter Notebook on macOS, you can change the bound IP address by running:
jupyter notebook --ip=0.0.0.0
Note that on macOS, since we are using bridged networking, Jupyter Notebook will report a URL similar to http://{container_hostname}:8888/?token=… When pasting it into the browser, you need to replace container_hostname with localhost.
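For example, if the container reports:
http://{container_hostname}:8888/?token=...
then open this URL on the macOS host instead:
http://localhost:8888/?token=...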
Docker source code
See the Docker source code to build your own Docker image:
https://github.com/apache/tvm/tree/main/docker
Run the following command to build the Docker image:
/path/to/tvm/docker/build.sh <image-name>
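As a hypothetical example (assuming, as the repository layout suggests, that image names correspond to the Dockerfile.<name> files under docker/):

# build the image defined by docker/Dockerfile.ci_cpu
/path/to/tvm/docker/build.sh ci_cpu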
You can also use unofficial third-party pre-built images. Note that these images are for testing purposes and are not ASF releases:
https://hub.docker.com/r/tlcpack
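For example, to pull one of them (the exact image names and tags are an assumption here; check the hub page above first):

# pull a third-party pre-built CPU image; the name is illustrative
docker pull tlcpack/ci-cpu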
TVM Installation: NNPACK Contrib Installation
NNPACK is an acceleration package for neural network computations that can run on CPUs with x86-64, ARMv7, or ARM64 architectures. Using NNPACK, high-level libraries like MXNet can speed up execution on multi-core CPU computers, including laptops and mobile devices.
Since TVM already has natively tuned scheduling, NNPACK is mainly for reference and comparison. For general use, the natively tuned TVM implementation is better.
TVM supports NNPACK for forward propagation in convolutional, max pooling, and fully connected layers (inference only). In this document, we provide a high-level overview of how to use NNPACK with TVM.
Prerequisites
The underlying implementation of NNPACK uses a variety of acceleration methods, including FFT and Winograd. These algorithms work better on some batch sizes, kernel sizes, and stride settings than on others, so depending on the context, not all convolutional, max pooling, or fully connected layers may be supported by NNPACK.
NNPACK is only supported on Linux and OS X systems, and currently not on Windows.
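A quick way to check your system against these constraints (uname is available on both Linux and OS X):

# print the OS and CPU architecture to compare with NNPACK's requirements
uname -s   # expect Linux or Darwin
uname -m   # expect x86_64, armv7l, aarch64, or arm64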
Build/Install NNPACK
If the trained model meets some conditions for using NNPACK, you can build TVM with NNPACK support.
Follow these simple steps: build the NNPACK shared library with the commands below; TVM will link against NNPACK dynamically.
NOTE: The following NNPACK installation instructions have been tested on Ubuntu 16.04.
Building Ninja
NNPACK requires the latest version of Ninja, so we need to build ninja from source:
git clone https://github.com/ninja-build/ninja.git
cd ninja
./configure.py --bootstrap
Set the environment variable PATH to tell bash where to find the ninja executable. For example, suppose we cloned ninja into our home directory ~; then we can add the following line to ~/.bashrc:
export PATH="${PATH}:${HOME}/ninja"
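Then reload the shell configuration and verify that ninja is found:

source ~/.bashrc
ninja --version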
Building NNPACK
The new CMake version of NNPACK downloads PeachPy and the other dependencies on its own:
https://github.com/Maratyszcza/PeachPy
NOTE: At least on OS X, running ninja install below will overwrite the googletest libraries installed in /usr/local/lib. If you build googletest again to replace the nnpack copy, be sure to pass -DBUILD_SHARED_LIBS=ON to cmake.
git clone --recursive https://github.com/Maratyszcza/NNPACK.git
cd NNPACK
# Add the PIC option in CFLAG and CXXFLAG to build the NNPACK shared library
sed -i "s|gnu99|gnu99 -fPIC|g" CMakeLists.txt
sed -i "s|gnu++11|gnu++11 -fPIC|g" CMakeLists.txt
mkdir build
cd build
# Generate ninja build rules and add the shared library to the configuration
cmake -G Ninja -D BUILD_SHARED_LIBS=ON ..
ninja
sudo ninja install
# Add NNPACK's lib folder to your ldconfig (writing under /etc requires root)
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/nnpack.conf
sudo ldconfig
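To confirm that the dynamic linker can now find NNPACK:

# the NNPACK shared library should appear in the linker cache
ldconfig -p | grep -i nnpack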
Building TVM with NNPACK support
git clone --recursive https://github.com/apache/tvm tvm
- In config.cmake, set set(USE_NNPACK ON).
- Set NNPACK_PATH to $(YOUR_NNPACK_INSTALL_PATH). After configuration, use make to build TVM:
make
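Putting the configuration together, here is a minimal sketch (it assumes the out-of-source cmake build layout from Part 1, that config.cmake defaults to set(USE_NNPACK OFF), and that NNPACK was installed to /usr/local; adjust paths to your setup):

cd tvm
mkdir -p build && cp cmake/config.cmake build/
cd build
# enable NNPACK and point TVM at the install prefix from the previous step
sed -i "s|set(USE_NNPACK OFF)|set(USE_NNPACK ON)|" config.cmake
echo "set(NNPACK_PATH /usr/local)" >> config.cmake
cmake ..
make -j"$(nproc)"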
Log in to tvm.hyper.ai to view the original documentation. HyperAI will continue to update the Chinese TVM tutorials in the future, so stay tuned~
-- End --