A-ViT: Adaptive Tokens for Efficient Vision Transformer

We introduce A-ViT, a method that adaptively adjusts the inference cost of vision transformers (ViT) for images of different complexity. A-ViT achieves this by automatically reducing the number of tokens in vision transformers that are processed in the network as inference proceeds. We reformulate Adaptive Computation Time (ACT) for this task, extending halting to discard redundant spatial tokens. The appealing architectural properties of vision transformers enable our adaptive token reduction mechanism to speed up inference without modifying the network architecture or inference hardware. We demonstrate that A-ViT requires no extra parameters or sub-network for halting, as we base the learning of adaptive halting on the original network parameters. We further introduce a distributional prior regularization that stabilizes training compared to prior ACT approaches. On the image classification task (ImageNet1K), we show that our proposed A-ViT yields high efficacy in filtering informative spatial features and cutting down on the overall compute. The proposed method improves the throughput of DeiT-Tiny by 62% and DeiT-Small by 38% with only a 0.3% accuracy drop, outperforming prior art by a large margin. Project page at https://a-vit.github.io/
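To make the halting mechanism described above concrete, the following is a minimal PyTorch sketch of ACT-style token halting, written under assumptions not stated in the abstract: the class name AdaptiveTokenHalting, the eps margin, and the gamma/beta scale-and-shift applied to the first embedding dimension are all hypothetical, and halted tokens are simply zeroed out rather than removed from the sequence. It illustrates the idea of accumulating per-token halting scores across layers, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveTokenHalting(nn.Module):
    """Minimal sketch of ACT-style adaptive token halting for a ViT.

    Hypothetical illustration only: each token's halting score is read from
    the first dimension of its embedding through a sigmoid (gamma/beta are
    assumed scale/shift parameters), and halted tokens are zeroed out rather
    than physically dropped from the sequence.
    """

    def __init__(self, blocks, eps=0.01):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)          # standard transformer blocks
        self.eps = eps                               # halting threshold margin
        self.gamma = nn.Parameter(torch.ones(1))     # scale for the halting score
        self.beta = nn.Parameter(torch.zeros(1))     # shift for the halting score

    def forward(self, x):
        batch, num_tokens, _ = x.shape
        cum_score = x.new_zeros(batch, num_tokens)   # accumulated halting score per token
        active = x.new_ones(batch, num_tokens)       # 1.0 while a token is still processed
        ponder = x.new_zeros(batch, num_tokens)      # number of layers each token stayed active

        for block in self.blocks:
            # Halted tokens are zeroed so they stop contributing downstream.
            x = block(x * active.unsqueeze(-1))

            # Halting probability taken from the first channel of every token.
            h = torch.sigmoid(self.gamma * x[..., 0] + self.beta)
            cum_score = cum_score + h * active
            ponder = ponder + active

            # A token halts once its cumulative score crosses 1 - eps.
            active = active * (cum_score < 1.0 - self.eps).float()

        # The mean ponder cost can be added to the task loss to reward early halting.
        return x, ponder.mean()

# Toy usage with generic PyTorch encoder layers standing in for DeiT blocks.
blocks = [nn.TransformerEncoderLayer(d_model=192, nhead=3, batch_first=True)
          for _ in range(12)]
model = AdaptiveTokenHalting(blocks)
tokens = torch.randn(2, 197, 192)                    # (batch, tokens, embed dim)
out, ponder_loss = model(tokens)
```

Note that zeroing keeps tensor shapes fixed, so this sketch does not by itself save compute; the throughput gains described above come from excluding halted tokens from further processing, which the token-based structure of vision transformers permits without changing the architecture or inference hardware.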