HyperAI

Shanghai Offline Event | AI Compiler Practice and Innovation in the LLM Era


In March this year, the 2023 Meet TVM series kicked off with its first offline event in Shanghai and has since traveled to multiple cities. We are committed to providing a platform for learning and exchange for engineers everywhere who are interested in AI compilers.

On December 16, the 2023 Meet TVM · Year-End Party returns to Shanghai. This time we have not only invited 4 senior AI compiler experts to share their work, but also added a roundtable discussion. Feng Siyuan, Apache TVM PMC member and Ph.D. at Shanghai Jiao Tong University, will host the roundtable, exploring the innovations and challenges of machine learning systems in the era of large models from a broader range of perspectives.

With Christmas approaching, everyone is welcome to add a festive touch to their outfits. We will also prepare exquisite tea breaks and small gifts on site. We look forward to a Christmas-flavored AI compiler party with you all!

⏰ Time: December 16 (Saturday), 13:30-17:40

Venue: 2F Lecture Hall, Shanghai Wujiaochang Innovation and Entrepreneurship College (No. 322 Daxue Road, Yangpu District)

Capacity: 200 (on-site seats are limited, please register as early as possible)

Registration: Scan the QR code below to register

Scan the QR code and note "TVM Year-End Party" to join the event group:

Schedule:

@Feng Siyuan

Topic: Deep Dive into TVM Unity

Overview: After more than a year of iteration and upgrading, TVM Unity is expected to be merged into the Apache TVM main branch in the near future, after which it will become Apache TVM's primary compilation flow. This talk introduces TVM Unity's default compilation pipeline, how it differs from the existing TVM, and how to migrate existing workflows to Unity.

From this talk you will learn:

1. The structural design of TVM Unity

2. TVM Unity's default compilation pipeline (a minimal sketch follows this list)

3. How to migrate existing workflows to TVM Unity
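
To give a concrete feel for the Unity-style flow described above, here is a minimal, hedged sketch: a model defined in TVMScript/Relax, lowered with the default pipeline, and run on the Relax VM. It is illustrative only, not the speaker's material, and exact API names may vary across TVM versions.

```python
# Minimal TVM Unity-style flow (illustrative; assumes a recent build with relax).
import numpy as np
import tvm
from tvm import relax
from tvm.script import ir as I, relax as R

@I.ir_module
class MLPModule:
    @R.function
    def main(x: R.Tensor((1, 784), "float32"),
             w: R.Tensor((784, 10), "float32")) -> R.Tensor((1, 10), "float32"):
        with R.dataflow():
            y = R.matmul(x, w)
            out = R.nn.relu(y)
            R.output(out)
        return out

mod = relax.get_pipeline("zero")(MLPModule)   # default lowering pipeline
ex = relax.build(mod, target="llvm")          # compile for CPU
vm = relax.VirtualMachine(ex, tvm.cpu())      # run on the Relax virtual machine
x = tvm.nd.array(np.random.rand(1, 784).astype("float32"))
w = tvm.nd.array(np.random.rand(784, 10).astype("float32"))
print(vm["main"](x, w))
```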

Topic: Slim-LM: A Unified Compilation Framework for Agile Development

Overview: As MLC LLM and TVM Unity mature, more and more users want to deploy their own models with TVM. However, building and compiling models with TVM flexibly and efficiently still poses many challenges.

To meet this demand, the TVM community recently launched a new framework, Slim-LM, to simplify model building and compilation in TVM. It offers three major features:

1. Defining TVM models with PyTorch-style code

2. Simple and efficient compilation tooling

3. A unified quantization algorithm framework

From this talk you will learn:

1. The infrastructure Slim-LM provides and the convenience it brings

2. Defining and compiling a new model with Slim-LM (see the sketch after this list)

3. The basic steps for implementing a new quantization algorithm in Slim-LM
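
As a flavor of the PyTorch-style model definition mentioned above, here is a small, hedged sketch using the `tvm.relax.frontend.nn` module available in TVM Unity; the exact Slim-LM interfaces presented in the talk may differ, so treat the names below as assumptions.

```python
# Illustrative only: a PyTorch-style module built with TVM Unity's relax
# nn frontend, exported to a Relax IRModule for compilation.
from tvm.relax.frontend import nn

class TinyMLP(nn.Module):
    def __init__(self, in_dim: int, hidden: int, out_dim: int):
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x: nn.Tensor) -> nn.Tensor:
        return self.fc2(nn.op.relu(self.fc1(x)))

model = TinyMLP(784, 256, 10)
# Export to a Relax IRModule plus a list of named parameters.
mod, params = model.export_tvm(
    spec={"forward": {"x": nn.spec.Tensor((1, 784), "float32")}}
)
mod.show()   # inspect the generated Relax functions
```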

Topic: Compilation Optimization Practice Based on TVM

Overview: As AI compilers iterate and develop, their usability and ease of use have improved greatly. For TVM, one of the representative AI compilers, the important questions are how to maximize its strengths, address its shortcomings, and deploy it at scale in business scenarios such as search, advertising, and recommendation. This talk approaches the subject from these perspectives.

From this talk you will learn:

1. The scope of TVM's capabilities in typical business scenarios

2. Enhancing TVM's support for compute-intensive operators

3. TVM operator fusion and device placement optimization (an illustrative fusion pipeline follows this list)
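
For flavor, the sketch below shows what a Relax-level operator-fusion pipeline looks like in TVM; it is a generic illustration assuming a recent Unity build, not the production setup described in the talk.

```python
# Illustrative fusion pipeline in TVM Relax: lower high-level ops to TensorIR,
# tag op patterns, group fusable operators, and merge each group into a single
# TensorIR PrimFunc. Pass names assume a recent Unity build.
import tvm
from tvm import relax
from tvm.relax.frontend import nn

class Block(nn.Module):
    def __init__(self):
        self.linear = nn.Linear(64, 64)

    def forward(self, x: nn.Tensor) -> nn.Tensor:
        # matmul + bias add + relu: a typical fusion candidate
        return nn.op.relu(self.linear(x))

mod, _ = Block().export_tvm(
    spec={"forward": {"x": nn.spec.Tensor((1, 64), "float32")}}
)

seq = tvm.transform.Sequential([
    relax.transform.LegalizeOps(),
    relax.transform.AnnotateTIROpPattern(),
    relax.transform.FuseOps(),
    relax.transform.FuseTIR(),
])
fused = seq(mod)
fused.show()   # fused groups now appear as single TensorIR functions
```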

Topic: Towards Seamless Model Compilation Integration

Overview: Model compilation is becoming increasingly important in AI acceleration. However, adopting model compilation for production models at IT companies is not simple: the main burdens include models coming from different domains, frameworks, or formats, transitioning away from existing libraries, and adopting new ASICs. To address these issues, ByteIR was developed to improve the productivity of model compilation. ByteIR is built on top of OpenXLA and the LLVM/MLIR compiler infrastructure. It includes frontend, compiler, and runtime components, each solving a different set of problems; the three components can work together or be used independently to meet different business needs.

From this talk you will learn:

1. The design of the MLIR-based ByteIR compiler

2. Performance optimization practice on an MLIR compilation stack (an MLIR-level input example follows this list)
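
For context, an OpenXLA-based stack such as ByteIR typically consumes models lowered to MLIR dialects like StableHLO. The snippet below (illustrative only, produced with JAX rather than ByteIR's own frontend tooling) shows what such an MLIR-level artifact looks like from Python.

```python
# Illustrative only: emit StableHLO text from a JAX function. This is the kind
# of MLIR-level input an OpenXLA-based compiler stack can consume; it does not
# use ByteIR's actual frontend.
import jax
import jax.numpy as jnp

def mlp(x, w):
    return jax.nn.relu(x @ w)

lowered = jax.jit(mlp).lower(jnp.ones((1, 4)), jnp.ones((4, 8)))
print(lowered.as_text())   # StableHLO module in MLIR textual form
```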

Roundtable topic: Machine Learning Systems in the Era of Large Models

Organizers and partners

The MLC.AI community, organizer of this event, was established in June 2022. Led by Chen Tianqi, the principal inventor of Apache TVM and a well-known young scholar in machine learning, the team launched the MLC online course, which systematically introduces the key elements and core concepts of machine learning compilation.

In November 2022, through the joint efforts of MLC.AI community volunteers, the first complete TVM Chinese documentation went online, hosted on the HyperAI official website, giving domestic developers interested in machine learning compilation the essential infrastructure for accessing and learning a new technology: documentation.

MLC Online Course: https://mlc.ai/

TVM Chinese Documentation: https://tvm.hyper.ai/

HyperAI is China's leading artificial intelligence and high-performance computing community, committed to providing high-quality public resources in data science to domestic developers. So far, it has provided domestic download nodes for more than 1,200 public datasets, supports more than 300 AI and high-performance computing related term queries, hosts the complete TVM Chinese documentation, and will soon launch multiple beginner-friendly and popular tutorials.

Visit the official website: https://hyper.ai/

OpenBayes Bayesian Computing is a leading high-performance computing service provider in China. By grafting classic software ecosystems and machine learning models onto new-generation heterogeneous chips, it provides industrial enterprises and university research groups with faster, easier-to-use data science computing products. Its products have been adopted in dozens of large-scale industrial scenarios and by leading research institutes.

Visit the official website: https://openbayes.com/

CM Space (Xiamen) is a professional innovation park management company under China Merchants Group that operates the "CM Space" professional incubator in Xiamen. Rooted in the southeast coast and drawing on the strengths of China Merchants Group's three core businesses of transportation, integrated urban and park development, and finance, it focuses on providing AI startups with the resources they need most in their early stages, such as application scenarios, model validation, and seed-stage customers, helping artificial intelligence companies incubate efficiently.

Shanghai Wujiaochang Innovation and Entrepreneurship College is a non-profit education and service organization jointly established by the Yangpu District Government, well-known universities including Fudan University, Tongji University, Shanghai University of Finance and Economics, and Shanghai University of Technology, and leading enterprises and investment institutions. It brings together many industry leaders, experts and scholars, well-known investors and institutions, and entrepreneurship-enablement organizations, aiming to build an innovation-and-entrepreneurship ecosystem and learning community, help founders grow into entrepreneurs, and become an important hub in the entrepreneurial ecosystem.

In November 2022, the Shanghai Wujiaochang Innovation and Entrepreneurship College officially opened its public space, making 800 square meters available to the innovation and entrepreneurship community, and launched the public-welfare entrepreneurship brand "College Coffee" together with partners such as the CUHK Shanghai Center. KOLs, enterprises, and social organizations that share a commitment to public-welfare services for entrepreneurship and innovation are invited to become "public welfare partners" of College Coffee, jointly designing different ways for entrepreneurs to gather, turning it into a brand project with rich content and making the entrepreneurship space an open, inclusive, shared public living room that never closes.

Shanghai Cloud Base (Shanghai Cloud Computing Innovation Base, Shanghai Big Data Innovation Base) is one of China's earliest national-level professional incubators and helped advance the cloud computing industry from 0 to 1. With a "fund + base + platform" model centered on the digital economy, it focuses on sub-sectors such as cloud computing, cloud native, big data and artificial intelligence, and digital healthcare, and has gathered and incubated nearly a thousand outstanding companies at home and abroad. By connecting the four ecosystems of technology, users, capital, and services, it continues to run the "Scenario Innovation Laboratory" and the "Digital Economy IPO Preparation Camp" to build a digital economy industry accelerator.

Huodongxing: Scan the QR code to go to the Huodongxing page to register

Scan the QR code and note "TVM Year-End Party" to join the event group

Given the venue's capacity, we have opened only 200 attendance slots, so we recommend registering as early as possible to secure your seat.

2023 Meet TVM · Year-End Party, December 16, 13:30-17:40. We look forward to seeing you in Shanghai!