
2023 Meet TVM First Gathered in Shanghai, More Than 100 Engineers Discussed the Present and Future of Machine Learning Compilation

Jiaxin Sun

On March 4, the 2023 Meet TVM offline meetup was successfully held in Shanghai, hosted by the MLC.AI community and co-organized by Shanghai Wujiaochang Innovation and Entrepreneurship College, HyperAI Super Neural Network, and OpenBayes Bayesian Computing. More than 100 attendees from Shanghai, Hangzhou, Beijing, and Nanjing gathered for a lively face-to-face discussion.


On the day of the event, Chen Tianqi, the principal inventor of TVM and a renowned young scholar in machine learning, also shared an opening video with his analysis of machine learning compilation trends and the development roadmap for Apache TVM.

The following are some key points from his remarks:

Hello everyone in the Chinese TVM community, I am Chen Tianqi. Thank you very much for participating in this Meet TVM Shanghai event, and I am also very grateful for the support of the local organizers.

Artificial intelligence and its deployment have changed dramatically over the past few years. Machine learning is no longer driven solely by algorithms: data, algorithms, and the system itself all affect whether a machine learning deployment succeeds. Machine learning compilation, too, has gradually moved from a frontier research topic into the public eye.

The TVM community has been working in this direction for five years. We have always known that we must keep innovating and distilling our past experience; only then can we carry the entire field, including machine learning compilation and machine learning systems, into its next stage.

Since last year, the TVM community has made a bold change in promoting the TVM Unity solution, hoping to fundamentally address issues such as dynamic shapes, deployment across diverse hardware, integration with operator libraries, and automatic optimization for machine learning. We also want to make iterative development a primary goal, so that those optimizing algorithms and systems can keep iterating within a Python framework.

Last year, we also launched the MLC online course to teach machine learning compilation. This year, we will gradually connect Unity end to end and apply it to real models. We welcome everyone to join the community's development and help bring machine learning compilation, and the TVM toolchain itself, into the next stage.

Summaries and slides from the on-site talks

On the day of the event, we invited four speakers to give on-site presentations.

Topic: TVM and the Development of Machine Learning Compilation

Summary: Machine learning compilers have steadily improved deployment performance in recent years, and TVM has always aimed to achieve these gains through automation. However, as new hardware arrives, compiler development faces many challenges in both performance and generality.

The Unity design (TensorIR + Relax) now being rolled out by the TVM community attempts to fundamentally improve TVM's generality and customizability and to provide a more complete infrastructure.

Get the slides: Follow the WeChat official account "HyperAI Super Neural Network" and reply with the keyword TVM Shanghai to receive the complete slide deck.

Topic: Compiling Models for Rockchip Devices with TVM

Summary: To simplify model deployment on Rockchip devices and improve performance, we implemented a new TVM backend that supports compiling models for Rockchip devices.

Get the slides: Follow the WeChat official account "HyperAI Super Neural Network" and reply with the keyword TVM Shanghai to receive the complete slide deck.

Topic: A DSA AI Compiler Built on TVM

Summary: TVM was open-sourced in 2017 and natively supports GPUs and CPUs. How do you build an AI compiler for a DSA/NPU on top of TVM? This talk tries to answer that question through our practical experience.

Get the slides: Follow the WeChat official account "HyperAI Super Neural Network" and reply with the keyword TVM Shanghai to receive the complete slide deck.

Topic: Extending a SYCL Backend for TVM

Summary: We extended TVM with the cross-heterogeneous programming model SYCL, adding a path for SYCL code generation and automatic tuning. We also extended SYCL's hardware support to NPUs, adding instruction compilation and runtime adaptation for domestic NPUs. Going forward, we will further open up the full-stack path from TVM to SYCL and on to instruction compilation for other domestic NPUs.

Get the slides: Follow the WeChat official account "HyperAI Super Neural Network" and reply with the keyword TVM Shanghai to receive the complete slide deck.

2023 Meet TVM Is Coming to More Cities

Next, the detailed content shared by the speakers at this event will be published on this official account. Stay tuned!

At the same time, 2023 Meet TVM will be held in cities across the country. We welcome partners from academia and industry to co-host these events with us. We look forward to seeing you in Beijing in June!

Finally, a group photo from the event:

Group photo from the 2023 Meet TVM Shanghai event