HyperAI

Event Preview | 2023 Meet TVM · Shenzhen Station Is Scheduled, Inviting You to Join Us on a Cutting-edge AI Compiler Technology Journey!

2 years ago
Information
Xuran Zhang

Contents at a glance: The 3rd offline meetup of 2023 Meet TVM will be held on September 16 at the Tencent Building in Shenzhen! This meetup features 5 talks on AI compilers. We look forward to meeting you in Shenzhen!

Keywords: compiler, offline event, 2023 Meet TVM

This article was first published on the HyperAI WeChat official account.

In March and June of this year, the 2023 Meet TVM series of events was successfully held in Shanghai and Beijing respectively. More than 300 participants from major companies and research institutes gathered for extensive exchanges and discussions, both online and offline.

Read previous articles:

Beijing Station: Event Review | Gathering industry technology experts and sharing moments of thought collision, 2023 Meet TVM · Beijing Station concluded successfully

Shanghai Station: Event Review | 2023 Meet TVM first gathered in Shanghai, more than 100 engineers discussed the present and future of machine learning compilation

In mid-September, the 3rd offline TVM meetup will be held in Shenzhen. This time we have invited 5 senior AI compiler experts, who will give talks to attendees at the Tencent Building in Shenzhen.

This event is hosted by MLC.AI and HyperAI and sponsored by OpenBayes and Tencent AI Lab. Souvenirs and refreshments will be provided on site. Everyone is welcome to join us!

2023 Meet TVM Shenzhen Event Information

⏰ Time: September 16 (Saturday), 13:30-17:30

📍 Venue: Multi-function Hall, 2F, Tencent Building, 10000 Shennan Road, Nanshan District, Shenzhen

👬 Capacity: 200 (on-site seats are limited, so please register as early as possible)

🙌🏻 Registration: Scan the QR code below to register

Friendly reminder: Visitor information is required to enter the Tencent Building. Please fill in your personal information accurately to avoid delays at entry. Thank you for your cooperation.

Scan the QR code and note "TVM Shenzhen" to join the event group:

📝 Schedule:

Overview of speakers and talks

Talk topic: Dynamic shape compilation optimization based on TVM

Abstract: Traditional deep learning compilers (including TVM) lack dynamic shape support and are relatively weak at handling large language models (dynamic sequence lengths) and detection models (dynamic widths/heights). To address this, we designed and implemented a TVM-based dynamic shape operator optimization solution for the CPU that outperforms existing static shape solutions while requiring almost no search time.

From this talk you will learn:

1. Challenges of dynamic shape optimization

2. The TVM community's related work on dlight

3. Difficulties and solutions for dynamic shape optimization on the CPU side
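To make the idea behind the abstract concrete, here is a minimal, purely illustrative sketch of how dynamic shapes can be served without per-shape tuning: kernel parameters are looked up from a small precomputed table keyed by shape buckets, so an unseen input length costs no search at run time. All names and tile values below are hypothetical; this is plain Python, not the speaker's TVM implementation.

```python
# Illustrative sketch: shape-bucketed dispatch for dynamic shapes.
# Static-shape auto-tuners search a schedule per concrete shape; here
# we instead pre-select one tile size per shape bucket, so a kernel
# invoked with a never-before-seen length needs no run-time search.

def tile_for(n):
    """Pick a tile size from a small precomputed table (hypothetical values)."""
    for bound, tile in [(64, 8), (512, 32), (4096, 64)]:
        if n <= bound:
            return tile
    return 128  # fallback for very large dynamic dimensions

def tiled_sum(xs):
    """Sum a dynamically sized vector in tiles of a bucket-chosen size."""
    n = len(xs)
    t = tile_for(n)
    total = 0.0
    for start in range(0, n, t):
        total += sum(xs[start:start + t])  # one "tile" of work
    return total

print(tiled_sum([1.0] * 100))  # dynamic length handled with zero tuning
```

In a real compiler the table entries would be full schedules chosen offline, but the dispatch principle is the same: cost at run time is a table lookup, not a search.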

Talk topic: Design an AI Processor: Compiler is Dominant

Abstract: With the development and popularization of AIGC, represented by large language models, the demand for computing power has grown exponentially, and the design and programming of AI processor chips have become correspondingly more complicated.

How can both be made simpler and more efficient? Automated compiler-architecture co-design is a potential solution.

From this talk you will learn:

1. Product landscape of AI processors

2. Recent research on automated AI processor design

3. Basic compilation framework for automated design of AI processors

Talk topic: MLIR and AI graph compilation in practice

Abstract: With the booming development of AI chips and AI frameworks, AI compilers such as XLA and TVM have also flourished. MLIR, a general and reusable compiler framework, is now widely used in AI compilation systems because it helps hardware manufacturers quickly build domain-specific AI compilers.

This talk introduces the fundamentals of MLIR, its codegen pipeline, and the practical steps for building an AI compiler with it. We will also discuss how MLIR approaches the key problems of AI compilation.

From this talk you will learn:

1. Building blocks of an AI compiler

2. Basic knowledge and uses of MLIR

3. Basic steps for building an AI compiler using MLIR

Talk topic: Design and implementation of an AI compiler based on MLIR

Abstract: There are many software frameworks in AI and machine learning (TensorFlow, PyTorch, etc.), and hardware devices are increasingly diverse (CPU, GPU, TPU, etc.). As the bridge between the two, AI compilers face many challenges.

As a compiler infrastructure, MLIR provides a set of reusable and easily extensible components for building domain-specific compilers. Tencent has built an end-to-end AI compiler on MLIR that optimizes users' AI models at compile time, simplifying model deployment across a variety of AI chips while maximizing performance.

From this talk you will learn:

1. The overall pipeline of Tencent's AI compiler

2. An introduction to MLIR's infrastructure and the convenience it provides

3. Tiling and fusion based on the Linalg dialect
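For readers new to the terms in point 3, here is a small, purely illustrative sketch of what tiling and fusion do to a computation. It is plain Python mimicking the loop transformations an MLIR Linalg-style pass performs; the function names are hypothetical and this is not Tencent's actual compiler.

```python
# Illustrative sketch of fusion and tiling on two elementwise ops.

def unfused(xs):
    # Two separate passes over the data: materializes a full
    # intermediate list between op 1 and op 2.
    tmp = [x * 2.0 for x in xs]      # op 1: scale
    return [t + 1.0 for t in tmp]    # op 2: shift

def fused_tiled(xs, tile=4):
    # Fusion: both ops are applied inside one loop, so no full-size
    # intermediate buffer is ever created.
    # Tiling: the loop is split into tile-sized chunks, which in a
    # real compiler improves cache locality and enables parallelism.
    out = []
    for start in range(0, len(xs), tile):
        for x in xs[start:start + tile]:
            out.append(x * 2.0 + 1.0)  # fused op1 + op2 on one element
    return out

data = [float(i) for i in range(10)]
assert unfused(data) == fused_tiled(data)  # same result, fewer passes
```

In MLIR, the analogous transformations operate on Linalg ops rather than Python lists, but the payoff is the same: fewer traversals of memory and smaller working sets.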

Talk topic: Opportunities and Challenges of Machine Learning Systems in the Era of Large Models

Abstract: Generative AI and large language models (LLMs) have made remarkable progress, with capabilities that could fundamentally change many fields. They also bring new opportunities and challenges for machine learning systems. On one hand, the enormous demand for computing power raises the demand for system optimization; on the other, converging model architectures and high-performance hardware requirements have caused the once-open machine learning ecosystem to begin consolidating.

From this talk you will learn:

1. The current status of machine learning systems in the era of large models

2. Recent progress and updates of MLC-LLM

3. Outlook of machine learning systems in the post-large model era

About the organizers and partners

As the organizer of this event, the MLC.AI community was established in June 2022. Led by Chen Tianqi, the principal inventor of Apache TVM and a well-known young scholar in machine learning, the team launched the MLC online course, which systematically introduces the key elements and core concepts of machine learning compilation.

In November 2022, through the joint efforts of MLC.AI community volunteers, the first complete Chinese translation of the TVM documentation was launched and hosted on the HyperAI official website, giving domestic developers interested in machine learning compilation the basic infrastructure - documentation - for learning a new technology.

In the fourth quarter of 2023, the 2023 Meet TVM series will continue in Hangzhou; enterprises and community partners are welcome to participate and co-create.

MLC Online Courses:https://mlc.ai/

TVM Chinese Documentation:https://tvm.hyper.ai/

HyperAI is China's leading artificial intelligence and high-performance computing community, committed to providing high-quality public resources in data science to domestic developers. So far, it has provided domestic download nodes for more than 1,200 public datasets, supports queries for more than 300 terms related to artificial intelligence and high-performance computing, hosts the complete TVM Chinese documentation, and will soon launch multiple beginner and popular tutorials.

Visit the official website:https://orion.hyper.ai/

OpenBayes is a leading high-performance computing service provider in China. By grafting classic software ecosystems and machine learning models onto a new generation of heterogeneous chips, it provides industrial enterprises and academic research with faster, easier-to-use data science computing products. Its products have been adopted in dozens of large industrial scenarios and by leading research institutes.

Visit the official website:https://openbayes.com/

Tencent AI Lab is Tencent's enterprise-level AI laboratory. Established in Shenzhen in April 2016, it now has more than 100 top research scientists and more than 300 application engineers. Drawing on Tencent's long-accumulated wealth of application scenarios, big data, computing power, and first-class talent, AI Lab is committed to open collaboration and to continuously improving AI's cognition, decision-making, and creativity, working toward the vision of "Make AI Everywhere".

Tencent AI Lab emphasizes both research and application. Its basic research focuses on four directions: machine learning, computer vision, speech technology, and natural language processing. Its applications focus on four areas: games, digital humans, content, and social interaction, and it has begun exploring AI research and applications in industry, agriculture, healthcare, medicine, life sciences, and other fields.

Huodongxing: Scan the QR code to go to the event registration page

Scan the QR code and note "TVM Shenzhen" to join the event group


Given the venue capacity, we have opened only 200 places for this event. We recommend registering as early as possible to secure your seat.

Registration for the 2023 Meet TVM series is now open. We look forward to meeting you in Shenzhen from 13:30 to 17:30 on September 16!
