Vision Grid Transformer for Document Layout Analysis

Cheng Da, Chuwei Luo, Qi Zheng, Cong Yao

Abstract

Document pre-trained models and grid-based models have proven to be very effective on various tasks in Document AI. However, for the document layout analysis (DLA) task, existing document pre-trained models, even those pre-trained in a multi-modal fashion, usually rely on either textual features or visual features. Grid-based models for DLA are multi-modality but largely neglect the effect of pre-training. To fully leverage multi-modal information and exploit pre-training techniques to learn better representation for DLA, in this paper, we present VGT, a two-stream Vision Grid Transformer, in which Grid Transformer (GiT) is proposed and pre-trained for 2D token-level and segment-level semantic understanding. Furthermore, a new dataset named D⁴LA, which is so far the most diverse and detailed manually-annotated benchmark for document layout analysis, is curated and released. Experiment results have illustrated that the proposed VGT model achieves new state-of-the-art results on DLA tasks, e.g. PubLayNet (95.7% → 96.2%), DocBank (79.6% → 84.1%), and D⁴LA (67.7% → 68.8%). The code and models as well as the D⁴LA dataset will be made publicly available at https://github.com/AlibabaResearch/AdvancedLiterateMachinery.
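The abstract describes VGT as a two-stream model: a vision stream over the page image and a Grid Transformer (GiT) stream over a 2D text grid, whose features are combined for layout analysis. The authors' actual architecture lives in the linked repository; purely as an illustration of the two-stream idea, the following is a minimal toy sketch with hypothetical feature extractors (not the VGT implementation), fusing the two streams by concatenation:

```python
# Toy sketch of two-stream (vision + text grid) feature fusion.
# Both extractors below are hypothetical stand-ins, NOT the VGT/GiT networks.

def vision_features(pixels):
    """Toy vision stream: per-row average intensity (stand-in for a ViT backbone)."""
    return [sum(row) / len(row) for row in pixels]

def grid_features(token_grid):
    """Toy grid stream: per-row fraction of non-empty tokens
    (stand-in for GiT over a 2D character/word grid aligned to the image)."""
    return [sum(1 for tok in row if tok) / len(row) for row in token_grid]

def fuse(vis, grid):
    """Late fusion by concatenation; a real model would feed this to a detection head."""
    return vis + grid

# A 2x2 image region and the spatially aligned 2x2 token grid.
pixels = [[0.1, 0.9], [0.8, 0.2]]
tokens = [["Title", ""], ["body", "text"]]

feat = fuse(vision_features(pixels), grid_features(tokens))
print(feat)  # 4-dim fused feature: [0.5, 0.5, 0.5, 1.0]
```

The point of the sketch is only the data flow: each modality is encoded independently over the same 2D layout, then merged into one representation per region.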
