
DePlot: One-shot visual language reasoning by plot-to-table translation

Liu, Fangyu; Eisenschlos, Julian Martin; Piccinno, Francesco; Krichene, Syrine; Pang, Chenxi; Lee, Kenton; Joshi, Mandar; Chen, Wenhu; Collier, Nigel; Altun, Yasemin
Abstract

Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples, and their reasoning capabilities remain quite limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key component of this method is a modality-conversion module, named DePlot, which translates the image of a plot or chart into a linearized table. The output of DePlot can then be used directly to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over the finetuned SOTA on human-written queries from the chart QA task.
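
The abstract does not prescribe an implementation, but a minimal sketch of the two-step pipeline might look like the following. It assumes the publicly released DePlot checkpoint ("google/deplot") served through the HuggingFace `transformers` Pix2Struct classes; the image path, the toy one-shot exemplar, and the `call_llm` helper are placeholders for illustration, not part of the paper.

```python
# Sketch of the DePlot+LLM pipeline: (1) plot-to-table, (2) LLM reasoning.
# Assumptions: "google/deplot" checkpoint, transformers' Pix2Struct API,
# and a local chart image at "chart.png" (placeholder).
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Step 1: translate the chart image into a linearized table with DePlot.
processor = Pix2StructProcessor.from_pretrained("google/deplot")
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")

image = Image.open("chart.png")  # placeholder chart image
inputs = processor(
    images=image,
    text="Generate underlying data table of the figure below:",
    return_tensors="pt",
)
table_ids = model.generate(**inputs, max_new_tokens=512)
linearized_table = processor.decode(table_ids[0], skip_special_tokens=True)

# Step 2: prompt a pretrained LLM with the table plus one worked exemplar,
# relying on the LLM's few-shot reasoning rather than any finetuning.
ONE_SHOT_EXEMPLAR = (
    "Table: year | sales\n2020 | 10\n2021 | 15\n"
    "Question: How much did sales grow from 2020 to 2021?\n"
    "Answer: 5\n\n"
)  # toy exemplar; real prompts would use a table in DePlot's output format

def call_llm(prompt: str) -> str:
    # Stand-in for any few-shot-capable LLM client (hypothetical helper).
    raise NotImplementedError("plug in your preferred LLM API here")

question = "Which category has the highest value?"
prompt = f"{ONE_SHOT_EXEMPLAR}Table: {linearized_table}\nQuestion: {question}\nAnswer:"
answer = call_llm(prompt)
```

Because the two steps communicate only through the linearized table, the reasoning LLM can be swapped without retraining DePlot, which is what makes the plug-and-play use described above possible.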