HyperAI

TVM From Beginner to Master | Install TVM (Part 1)

Jiaxin Sun

Content at a glance: There are three ways to install TVM: from source, from a Docker image, and via NNPACK Contrib. This article focuses on installing TVM from source.

Keywords: TVM, quick start, source code installation

In a previous article, "The TVM Chinese website is officially launched! The most comprehensive machine learning model deployment 'reference book' is here", we introduced the important role of TVM and how to use the TVM Chinese documentation to start exploring machine learning compilers.

This article kicks off a tutorial series that walks through TVM from beginner to master, in the hope that every developer can become an excellent machine learning compiler engineer!

In this article, we will cover the key foundational step: installing TVM.

TVM can be installed in three ways:

  1. Install from source
  2. Docker images
  3. NNPACK Contrib Installation

As Part 1 of the installation tutorial, this article explains in detail the best practices for installing from source, which offers maximum flexibility in configuration and compilation.

Step-by-step instructions on how to install TVM from source

Building and installing the TVM package from scratch on various systems consists of two steps:

  1. Build the shared library from the C++ code
  • Linux: libtvm.so
  • macOS: libtvm.dylib
  • Windows: libtvm.dll
  2. Set up the language packages (such as the Python package)

To download the TVM source code, visit: https://tvm.apache.org/download

Developers: Get the source code from GitHub

When cloning the source repository from GitHub, use the --recursive option to clone submodules.

git clone --recursive https://github.com/apache/tvm tvm

Windows users can open a Git shell and enter the following command:

git submodule init
git submodule update

Building a shared library

Our goal is to build the shared library:

  • On Linux
    The target libraries are libtvm.so and libtvm_runtime.so
  • On macOS
    The target libraries are libtvm.dylib and libtvm_runtime.dylib
  • On Windows
    The target libraries are libtvm.dll and libtvm_runtime.dll

It is also possible to build just the runtime library:
https://tvm.hyper.ai/docs/how_to/deploy/

The minimum build requirements for the TVM library are:

  • A recent C++ compiler supporting C++17, at minimum:
    GCC 7.1
    Clang 5.0
    Apple Clang 9.3
    Visual Studio 2019 (v16.7)
  • CMake 3.10 or higher
  • It is recommended to build TVM library with LLVM to enable all features.
  • If you want to use CUDA, make sure the CUDA toolkit version is at least 8.0.
    Note: after upgrading from an older CUDA version, remove the old version and reboot.
  • On macOS, you can install Homebrew to make installing and managing dependencies easier.
  • Python: versions 3.7.X+ and 3.8.X+ are recommended; 3.9.X+ is not yet supported.
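Before starting a build, it can save time to confirm your toolchain meets these minimums. The sketch below is only an illustration (not part of TVM): it parses the output of `cmake --version` and compares it against the 3.10 requirement.

```python
import re
import shutil
import subprocess

def parse_version(text):
    """Extract an (X, Y, Z) version tuple from tool output like 'cmake version 3.22.1'."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    return tuple(map(int, m.groups())) if m else None

def cmake_meets_minimum(minimum=(3, 10, 0)):
    """Return True if the installed CMake is at least `minimum`, False otherwise."""
    exe = shutil.which("cmake")
    if exe is None:
        return False  # cmake is not on PATH
    out = subprocess.run([exe, "--version"], capture_output=True, text=True).stdout
    version = parse_version(out)
    return version is not None and version >= minimum

if __name__ == "__main__":
    print("CMake OK" if cmake_meets_minimum() else "CMake 3.10+ required")
```

The same pattern works for checking `gcc --version` or `clang --version` against the compiler minimums listed above.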

To install these dependencies on Linux operating systems such as Ubuntu/Debian, execute the following command in the terminal:

sudo apt-get update
sudo apt-get install -y python3 python3-dev python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake libedit-dev libxml2-dev

Use Homebrew to install the required dependencies on macOS, whether on Intel or M1 chips. Follow Homebrew's installation steps to ensure these dependencies are correctly installed and configured:

brew install gcc git cmake
brew install llvm
brew install python@3.8

Use cmake to build the library

TVM's configuration can be modified by editing config.cmake and/or passing cmake flags on the command line:

  • If cmake is not installed, download the latest version from the official website: https://cmake.org/download/
  • Create a build directory and copy cmake/config.cmake to it
mkdir build
cp cmake/config.cmake build
  • Edit build/config.cmake to customize the compilation options
  • For some versions of Xcode on macOS, you need to add -lc++abi to LDFLAGS to avoid link errors
  • Change set(USE_CUDA OFF) to set(USE_CUDA ON) to enable the CUDA backend. Do the same for any other backends and libraries you want to build (OpenCL, ROCm, Metal, Vulkan...).
  • For easier debugging, enable the embedded graph executor and debugging features with set(USE_GRAPH_EXECUTOR ON) and set(USE_PROFILER ON).
  • If you need IR debugging, set set(USE_RELAY_DEBUG ON) and set the environment variable TVM_LOG_DEBUG.
  • TVM requires LLVM for CPU code generation (codegen). Building with LLVM is recommended.
  • Building with LLVM requires LLVM 4.0 or higher. Note that the default LLVM version in apt may be lower than 4.0.
  • Since building LLVM from source takes a long time, it is recommended to download a pre-built version from the LLVM download page.
    1. Unzip it to a specific location and add set(USE_LLVM /path/to/your/llvm/bin/llvm-config) to build/config.cmake
    2. Or simply set set(USE_LLVM ON) and let CMake search for an available LLVM version.
  • You can also use the LLVM Ubuntu daily builds
    Note that the apt package appends the version number to llvm-config. For example, if you installed LLVM 10, set set(USE_LLVM llvm-config-10)
  • PyTorch users are advised to set set(USE_LLVM "/path/to/llvm-config --link-static") and set(HIDE_PRIVATE_SYMBOLS ON) to avoid potential symbol conflicts between the different LLVM versions used by TVM and PyTorch.
  • On some supported platforms, the Ccache compiler wrapper can help reduce TVM build times. Ways to enable Ccache in the TVM build include:
    1. Ccache's masquerade mode, usually enabled during Ccache installation. To have TVM use Ccache in masquerade mode, simply specify the appropriate C/C++ compiler path when configuring TVM's build system. For example: cmake -DCMAKE_CXX_COMPILER=/usr/lib/ccache/c++ ...
    2. Ccache as CMake's C++ compiler launcher. When configuring TVM's build system, set the CMake variable CMAKE_CXX_COMPILER_LAUNCHER to an appropriate value. For example: cmake -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ...
  • Build TVM and related libraries:
cd build
cmake ..
make -j4

You can use Ninja to speed up the build:

cd build
cmake .. -G Ninja
ninja

There is also a Makefile in the TVM root directory that automates several of these steps: it creates the build directory, copies the default config.cmake into it, runs cmake, and runs make.

The build directory can be specified with the environment variable TVM_BUILD_PATH. If TVM_BUILD_PATH is unset, the Makefile assumes the build directory inside the TVM repository should be used. The path specified by TVM_BUILD_PATH can be absolute or relative to the TVM root directory. If TVM_BUILD_PATH is set to a space-delimited list of paths, all listed paths will be built.

If you use a different build directory, set the environment variable TVM_LIBRARY_PATH at runtime to point to the compiled libtvm.so and libtvm_runtime.so. If it is not set, TVM will look in a location relative to the TVM Python module. Unlike TVM_BUILD_PATH, this must be an absolute path.

# Build in the "build" directory
make

# Alternative location, "build_debug"
TVM_BUILD_PATH=build_debug make

# Build both "build_release" and "build_debug"
TVM_BUILD_PATH="build_debug build_release" make

# Use the debug build
TVM_LIBRARY_PATH=~/tvm/build_debug python3
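The TVM_BUILD_PATH rules above (default to build/, resolve relative entries against the TVM root, accept a space-delimited list) can be sketched in Python. This is only an illustration of the documented behavior, not the actual Makefile logic:

```python
import os

def resolve_build_paths(tvm_root, env_value=None):
    """Illustrate how TVM_BUILD_PATH is interpreted (a sketch, not the real Makefile).

    Unset -> the default 'build' directory inside the TVM root; otherwise every
    space-delimited entry, with relative paths resolved against the TVM root.
    """
    if not env_value:
        return [os.path.join(tvm_root, "build")]
    return [
        p if os.path.isabs(p) else os.path.join(tvm_root, p)
        for p in env_value.split()
    ]

print(resolve_build_paths("/path/to/tvm"))                               # default build dir
print(resolve_build_paths("/path/to/tvm", "build_debug build_release"))  # two build dirs
```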

If everything goes well, we can check the installation of Python packages.

Building with Conda Environment

Conda can be used to obtain the necessary dependencies for running TVM. If Conda is not installed, refer to the Conda installation guide to install Miniconda or Anaconda, then run the following commands in the Conda environment:

# Create a Conda environment with the dependencies specified in the yaml file
conda env create --file conda/build-environment.yaml
# Activate the created environment
conda activate tvm-build

The above command will install all necessary build dependencies such as CMake and LLVM. Next you can run the standard build process from the previous section.

To use the compiled binaries outside the Conda environment, you can link LLVM statically with set(USE_LLVM "llvm-config --link-static"). This way, the generated library will not depend on the dynamic LLVM libraries inside the Conda environment.

The above shows how to use Conda to provide the necessary dependencies to build libtvm. If you already use Conda as a package manager and want to build and install TVM directly as a Conda package, you can follow the instructions below:

conda build --output-folder=conda/pkg  conda/recipe
# To build with CUDA enabled, run conda/build_cuda.sh
conda install tvm -c ./conda/pkg

Building on Windows

TVM supports building with CMake via MSVC, which requires a Visual Studio compiler. The minimum supported version is Visual Studio Enterprise 2019.

Note: For full testing details for GitHub Actions, visit the Windows 2019 Runner:

https://github.com/actions/virtual-environments/blob/main/images/win/Windows2019-Readme.md

It is officially recommended to build in a Conda environment to obtain the necessary dependencies, then activate the tvm-build environment.

Run the following command line:

mkdir build
cd build
cmake -A x64 -Thost=x64 ..
cd ..

The above command generates the solution file in the build directory. Then run:

cmake --build build --config Release -- /m

Building ROCm support

Currently, ROCm is only supported on Linux, so all tutorials are written based on Linux.

  • Set set(USE_ROCM ON) and point ROCM_PATH to the correct path.
  • You need to install the HIP runtime from ROCm first; make sure ROCm is installed on your system.
  • Install the latest stable version of LLVM (v6.0.1) as well as LLD, and make sure ld.lld is available on the command line.

Python package installation

TVM Package

This section introduces how to use virtual environments and package managers such as virtualenv or conda to manage Python packages and dependencies.
The Python package is located in tvm/python. There are two ways to install it:

  • Method 1

This method is suitable for developers who may modify the code.

Set the PYTHONPATH environment variable to tell Python where to find the library. For example, assuming tvm is cloned into the directory /path/to/tvm, we can add the following to ~/.bashrc. This lets you pull code and rebuild the project without running setup again, and the changes take effect immediately.

export TVM_HOME=/path/to/tvm
export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
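The mechanism behind Method 1 is simply that every directory on PYTHONPATH is added to Python's import search path. The demonstration below shows this in isolation with a hypothetical `fake_tvm` module created just for the example; your real setup points PYTHONPATH at tvm/python instead:

```python
import os
import subprocess
import sys
import tempfile

# Show that a directory on PYTHONPATH becomes importable, which is exactly
# what Method 1 relies on for tvm/python. `fake_tvm` is a stand-in module
# created only for this demonstration.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "fake_tvm.py"), "w") as f:
        f.write("VERSION = 'dev'\n")
    env = dict(os.environ, PYTHONPATH=d)
    out = subprocess.run(
        [sys.executable, "-c", "import fake_tvm; print(fake_tvm.VERSION)"],
        env=env, capture_output=True, text=True,
    )
    print(out.stdout.strip())  # -> dev
```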
  • Method 2

Install TVM's Python bindings via setup.py:

# Install the TVM package for the current user
# Note: if you installed Python via Homebrew, --user is not needed during
#       installation, as the package is installed into your user directory
#       automatically. In that case, passing --user may raise an error.
export MACOSX_DEPLOYMENT_TARGET=10.9  # Required on macOS to avoid symbol conflicts with libstdc++
cd python; python setup.py install --user; cd ..
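After either method, you can check whether Python is able to locate the tvm package before importing it. This quick sanity check uses a generic helper (not part of TVM) and prints False if the package is not yet on the import path:

```python
import importlib.util

def can_locate(package_name):
    """Return True if `package_name` can be found on the current import path."""
    return importlib.util.find_spec(package_name) is not None

if __name__ == "__main__":
    # Should print True once the tvm package is installed or on PYTHONPATH.
    print("tvm importable:", can_locate("tvm"))
```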

Python Dependencies

Note that if you install into a managed local environment such as virtualenv, the --user flag is not needed.

  • Necessary dependencies:
pip3 install --user numpy decorator attrs
  • Using RPC Tracker
pip3 install --user tornado
  • Using the auto-tuning module
pip3 install --user tornado psutil xgboost cloudpickle

Note: on a Mac with an M1 chip, you may encounter problems installing xgboost / scipy. scipy and xgboost require additional dependencies such as openblas. Run the following commands to install scipy and xgboost along with the required dependencies and configuration:

brew install openblas gfortran

pip install pybind11 cython pythran

export OPENBLAS=/opt/homebrew/opt/openblas/lib/

pip install scipy --no-use-pep517

pip install xgboost

Install Contrib Library

For NNPACK Contrib installation, see:
https://tvm.hyper.ai/docs/install/nnpack

Enable C++ tests

TVM's C++ tests are driven with Google Test. The easiest way to install GTest is to build it from source:

git clone https://github.com/google/googletest
cd googletest
mkdir build
cd build
cmake -DBUILD_SHARED_LIBS=ON ..
make
sudo make install

After a successful installation, you can build and launch the C++ tests with ./tests/scripts/task_cpp_unittest.sh, or build them directly with make cpptest.

That's all for this tutorial, Part 1 of installing TVM. In Part 2, we will cover the other two installation methods: Docker image installation and NNPACK Contrib installation.

Stay tuned to tvm.hyper.ai for the latest news about TVM in Chinese!

-- End --