
HTR-VT: Handwritten Text Recognition with Vision Transformer

Li, Yuting; Chen, Dexiong; Tang, Tinglong; Shen, Xi
Abstract

We explore the application of the Vision Transformer (ViT) to handwritten text recognition. The limited availability of labeled data in this domain poses challenges for achieving high performance when relying solely on ViT. Previous transformer-based models required external data or extensive pre-training on large datasets to excel. To address this limitation, we introduce a data-efficient ViT method that uses only the encoder of the standard transformer. We find that incorporating a Convolutional Neural Network (CNN) for feature extraction in place of the original patch embedding, and employing the Sharpness-Aware Minimization (SAM) optimizer so that the model converges towards flatter minima, yields notable enhancements. Furthermore, our span mask technique, which masks interconnected features in the feature map, acts as an effective regularizer. Empirically, our approach competes favorably with traditional CNN-based models on small datasets such as IAM and READ2016. Additionally, it establishes a new benchmark on the LAM dataset, currently the largest dataset, with 19,830 training text lines. The code is publicly available at: https://github.com/YutingLi0606/HTR-VT.
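
The sketch below is a rough illustration of the ideas in the abstract, not the authors' exact configuration: a small CNN backbone replaces the ViT patch embedding, its output tokens feed an encoder-only transformer, and contiguous spans of tokens are masked during training as a regularizer. All layer sizes, the mask ratio, and the span length here are assumptions for illustration only; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class HTRViTSketch(nn.Module):
    """Illustrative sketch: CNN feature extractor + encoder-only transformer
    with span masking. Hyperparameters are placeholders, not the paper's."""

    def __init__(self, d_model=256, nhead=4, num_layers=4, num_classes=80):
        super().__init__()
        # CNN feature extractor in place of the linear patch embedding.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)  # per-frame logits, e.g. for CTC

    @staticmethod
    def span_mask(tokens, mask_ratio=0.3, span=4):
        """Zero out contiguous spans of tokens (training-time regularizer)."""
        b, t, _ = tokens.shape
        mask = torch.ones(b, t, 1, device=tokens.device)
        num_spans = max(1, int(t * mask_ratio / span))
        for i in range(b):
            starts = torch.randint(0, max(1, t - span), (num_spans,))
            for s in starts:
                mask[i, s:s + span] = 0.0
        return tokens * mask

    def forward(self, images, training=True):
        feats = self.cnn(images)                   # (B, C, H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C)
        if training:
            tokens = self.span_mask(tokens)
        return self.head(self.encoder(tokens))
```

The SAM optimization step mentioned in the abstract is not part of PyTorch's built-in optimizers and is omitted here; in practice it wraps a base optimizer and performs a two-step (perturb-then-update) gradient computation.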
