
A-ViT: Adaptive Tokens for Efficient Vision Transformer

Hongxu Yin, Arash Vahdat, Jose M. Alvarez, Arun Mallya, Jan Kautz, Pavlo Molchanov

Abstract

We introduce A-ViT, a method that adaptively adjusts the inference cost of vision transformers (ViT) for images of different complexity. A-ViT achieves this by automatically reducing the number of tokens in vision transformers that are processed in the network as inference proceeds. We reformulate Adaptive Computation Time (ACT) for this task, extending halting to discard redundant spatial tokens. The appealing architectural properties of vision transformers enable our adaptive token reduction mechanism to speed up inference without modifying the network architecture or inference hardware. We demonstrate that A-ViT requires no extra parameters or sub-network for halting, as we base the learning of adaptive halting on the original network parameters. We further introduce a distributional prior regularization that stabilizes training compared to prior ACT approaches. On the image classification task (ImageNet-1K), we show that our proposed A-ViT yields high efficacy in filtering informative spatial features and cutting down the overall compute. The proposed method improves the throughput of DeiT-Tiny by 62% and DeiT-Small by 38% with only a 0.3% accuracy drop, outperforming prior art by a large margin. Project page at https://a-vit.github.io/
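The halting mechanism the abstract describes can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the transformer block is replaced by a random residual map, and, following the paper's idea of reusing existing network activations rather than a separate halting sub-network, the per-token halting score is read off one existing embedding channel through a sigmoid. All function and variable names here are hypothetical.

```python
import numpy as np

def adaptive_token_halting(tokens, num_layers=4, eps=0.01, seed=0):
    """Toy ACT-style token halting (illustrative sketch, not the paper's code).

    tokens: (N, D) array of token embeddings.
    Each layer, every still-active token accumulates a halting score; once a
    token's cumulative score reaches 1 - eps it is halted and skipped in all
    subsequent layers, reducing compute for later layers.
    """
    rng = np.random.default_rng(seed)
    n, d = tokens.shape
    cum_score = np.zeros(n)                 # accumulated halting score per token
    active = np.ones(n, dtype=bool)         # tokens still being processed
    halted_at = np.full(n, num_layers)      # layer at which each token halted
    x = tokens.copy()

    for layer in range(num_layers):
        # Stand-in for a transformer block: only active tokens are updated.
        w = rng.standard_normal((d, d)) * 0.1
        x[active] = x[active] + np.tanh(x[active] @ w)

        # Halting score from an existing channel (channel 0) via a sigmoid,
        # mimicking "no extra parameters or sub-network for halting".
        h = 1.0 / (1.0 + np.exp(-x[active, 0]))
        cum_score[active] += h

        # Tokens whose cumulative score crosses the threshold stop here.
        newly_halted = active & (cum_score >= 1.0 - eps)
        halted_at[newly_halted] = layer
        active &= ~newly_halted

    return halted_at, active
```

Tokens whose embeddings quickly produce large halting scores (e.g. uninformative background patches, in the paper's setting) exit after few layers, while the rest run the full depth, which is where the throughput gain comes from.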

