
MaskGIT: Masked Generative Image Transformer

Chang, Huiwen; Zhang, Han; Jiang, Lu; Liu, Ce; Freeman, William T.
Abstract

Generative transformers have experienced rapid popularity growth in the computer vision community in synthesizing high-fidelity and high-resolution images. The best generative transformer models so far, however, still treat an image naively as a sequence of tokens, and decode an image sequentially following the raster scan ordering (i.e. line-by-line). We find this strategy neither optimal nor efficient. This paper proposes a novel image synthesis paradigm using a bidirectional transformer decoder, which we term MaskGIT. During training, MaskGIT learns to predict randomly masked tokens by attending to tokens in all directions. At inference time, the model begins with generating all tokens of an image simultaneously, and then refines the image iteratively conditioned on the previous generation. Our experiments demonstrate that MaskGIT significantly outperforms the state-of-the-art transformer model on the ImageNet dataset, and accelerates autoregressive decoding by up to 64x. Besides, we illustrate that MaskGIT can be easily extended to various image editing tasks, such as inpainting, extrapolation, and image manipulation.
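To make the decoding procedure described above more concrete, below is a minimal sketch of MaskGIT-style iterative parallel decoding in Python/PyTorch. It is not the official implementation: the `model` interface, the `mask_id` convention, and the use of a cosine masking schedule are assumptions based on the paper's description, and hyperparameters are illustrative only.

```python
import math
import torch

def maskgit_decode(model, num_tokens, num_steps=8, mask_id=-1):
    """Sketch of MaskGIT-style iterative decoding (assumptions noted above).

    `model` is assumed to map a (1, num_tokens) tensor of token ids,
    with masked positions set to `mask_id`, to per-position logits of
    shape (1, num_tokens, vocab_size).
    """
    # Start with every token masked, i.e. "generate all tokens simultaneously".
    tokens = torch.full((1, num_tokens), mask_id, dtype=torch.long)

    for step in range(num_steps):
        logits = model(tokens)                 # (1, num_tokens, vocab_size)
        probs = logits.softmax(dim=-1)
        sampled = torch.distributions.Categorical(probs=probs).sample()
        confidence = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)

        # Only still-masked positions are candidates; committed tokens keep
        # infinite confidence so they are never re-masked.
        is_masked = tokens == mask_id
        confidence = torch.where(is_masked, confidence,
                                 torch.tensor(float("inf")))

        # Cosine schedule (assumed): fraction of tokens left masked this step.
        ratio = (step + 1) / num_steps
        num_keep_masked = int(num_tokens * math.cos(math.pi / 2 * ratio))

        # Commit sampled tokens, then re-mask the least confident predictions
        # so they are refined in later iterations.
        tokens = torch.where(is_masked, sampled, tokens)
        if num_keep_masked > 0:
            lowest = confidence.topk(num_keep_masked, largest=False).indices
            tokens[0, lowest[0]] = mask_id

    return tokens
```

Because every step predicts all remaining tokens in parallel, the number of forward passes is fixed at `num_steps` rather than growing with the sequence length, which is where the reported speedup over raster-scan autoregressive decoding comes from.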
