
MaskGIT: Masked Generative Image Transformer

Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu*, William T. Freeman

Abstract

Generative transformers have experienced rapid popularity growth in the computer vision community in synthesizing high-fidelity and high-resolution images. The best generative transformer models so far, however, still treat an image naively as a sequence of tokens, and decode an image sequentially following the raster scan ordering (i.e. line-by-line). We find this strategy neither optimal nor efficient. This paper proposes a novel image synthesis paradigm using a bidirectional transformer decoder, which we term MaskGIT. During training, MaskGIT learns to predict randomly masked tokens by attending to tokens in all directions. At inference time, the model begins with generating all tokens of an image simultaneously, and then refines the image iteratively conditioned on the previous generation. Our experiments demonstrate that MaskGIT significantly outperforms the state-of-the-art transformer model on the ImageNet dataset, and accelerates autoregressive decoding by up to 64x. Besides, we illustrate that MaskGIT can be easily extended to various image editing tasks, such as inpainting, extrapolation, and image manipulation.
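To make the decoding scheme concrete, below is a minimal sketch of MaskGIT-style iterative parallel decoding. It assumes a bidirectional transformer exposed as a callable `model(tokens)` returning per-token logits of shape `(batch, num_tokens, vocab_size)`; the cosine masking schedule and confidence-based keep rule follow the paper's description, but all names, signatures, and the stand-in model are illustrative, not the authors' released code.

```python
# Sketch of MaskGIT iterative decoding: start from a fully masked token grid,
# predict all tokens in parallel, keep the most confident predictions, and
# re-mask the rest for refinement in the next step. Hypothetical API.
import math
import torch

def maskgit_decode(model, num_tokens, mask_id, steps=8, temperature=1.0):
    # Start with a fully masked canvas: every token position is unknown.
    tokens = torch.full((1, num_tokens), mask_id, dtype=torch.long)

    for t in range(steps):
        # Predict all masked tokens simultaneously with the bidirectional model.
        logits = model(tokens)                                   # (1, N, V)
        probs = torch.softmax(logits / temperature, dim=-1)
        sampled = torch.distributions.Categorical(probs=probs).sample()   # (1, N)
        confidence = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)  # (1, N)

        # Tokens decided in earlier steps are kept; only masked positions compete.
        is_masked = tokens == mask_id
        confidence = torch.where(is_masked, confidence, torch.tensor(float("inf")))
        tokens = torch.where(is_masked, sampled, tokens)

        # Cosine schedule: how many tokens stay masked after this refinement step.
        ratio = (t + 1) / steps
        num_to_mask = int(math.ceil(math.cos(math.pi / 2 * ratio) * num_tokens))

        # Re-mask the least confident positions so they are refined next iteration.
        if num_to_mask > 0:
            lowest = torch.topk(confidence, k=num_to_mask, dim=-1, largest=False).indices
            tokens.scatter_(1, lowest, mask_id)

    return tokens

if __name__ == "__main__":
    # Stand-in model returning random logits, for illustration only:
    # a 16x16 token grid (N=256) over a codebook of size V=1024, mask id = V.
    N, V = 256, 1024
    dummy_model = lambda tok: torch.randn(tok.shape[0], N, V)
    out = maskgit_decode(dummy_model, num_tokens=N, mask_id=V)
    print(out.shape)  # torch.Size([1, 256])
```

With roughly 8 to 16 refinement steps instead of one model call per token, this parallel schedule is what allows the reported speedup over raster-scan autoregressive decoding.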

