
Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding

Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova
Abstract

Visually-situated language is ubiquitous: sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, and image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.
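
To make the two input-side ideas concrete, here is a minimal Python sketch (our own illustration, not from the paper or its released code): one helper approximates the variable-resolution idea by rescaling an image to fit a fixed patch budget while preserving its aspect ratio, and another renders a question prompt directly onto the screenshot before it reaches the model. The function names, the patch size, and the header layout are illustrative assumptions; the paper's exact resizing and rendering rules may differ.

```python
import math
from PIL import Image, ImageDraw, ImageFont

def fit_to_patch_budget(image: Image.Image, patch: int = 16,
                        max_patches: int = 2048) -> Image.Image:
    # Sketch of variable-resolution input: rescale so the image yields at
    # most `max_patches` patch-sized squares while keeping its aspect ratio.
    scale = math.sqrt(max_patches * (patch / image.width) * (patch / image.height))
    cols = max(1, math.floor(scale * image.width / patch))
    rows = max(1, math.floor(scale * image.height / patch))
    return image.resize((cols * patch, rows * patch))

def render_prompt_on_image(image: Image.Image, prompt: str,
                           header_height: int = 40) -> Image.Image:
    # Sketch of prompt rendering: paint the question as a text header
    # above the screenshot, so language and vision enter as one image.
    canvas = Image.new("RGB", (image.width, image.height + header_height), "white")
    draw = ImageDraw.Draw(canvas)
    draw.text((5, 5), prompt, fill="black", font=ImageFont.load_default())
    canvas.paste(image, (0, header_height))
    return canvas

if __name__ == "__main__":
    screenshot = Image.open("screenshot.png")  # any RGB screenshot (hypothetical path)
    combined = render_prompt_on_image(screenshot, "What does the submit button say?")
    model_input = fit_to_patch_budget(combined)
```

Scaling by `sqrt(max_patches * patch**2 / (width * height))` and flooring both axes guarantees the patch grid never exceeds the budget, while small images are upscaled to use it fully, which is the aspect-ratio-preserving behavior the abstract's variable-resolution representation calls for.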