CoMER: Modeling Coverage for Transformer-based Handwritten Mathematical Expression Recognition

The Transformer-based encoder-decoder architecture has recently made significant advances in recognizing handwritten mathematical expressions. However, the transformer model still suffers from the lack-of-coverage problem, making its expression recognition rate (ExpRate) inferior to its RNN counterpart. Coverage information, which records the alignment information of the past steps, has proven effective in RNN models. In this paper, we propose CoMER, a model that adopts coverage information in the transformer decoder. Specifically, we propose a novel Attention Refinement Module (ARM) to refine the attention weights with past alignment information without hurting its parallelism. Furthermore, we take coverage information to the extreme by proposing self-coverage and cross-coverage, which utilize the past alignment information from the current and previous layers. Experiments show that CoMER improves the ExpRate by 0.61%/2.09%/1.59% compared to the current state-of-the-art model, and reaches 59.33%/59.81%/62.97% on the CROHME 2014/2016/2019 test sets.
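To make the coverage idea in the abstract concrete, the sketch below shows one generic way to refine attention logits with accumulated past attention weights. It is a minimal illustrative approximation, not the paper's exact ARM: the module name, the convolution-plus-projection refinement network, and all layer sizes are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoverageAttentionRefinement(nn.Module):
    """Sketch of coverage-based attention refinement (assumed design, not the exact CoMER ARM).

    The coverage vector at decoding step t is the sum of attention weights from
    all previous steps; a small network turns it into a penalty that is
    subtracted from the raw attention logits, discouraging re-attending to
    already-covered image regions.
    """

    def __init__(self, kernel_size: int = 5, hidden: int = 32):
        super().__init__()
        # Assumed refinement network: 1-D conv over key positions + linear projection.
        self.conv = nn.Conv1d(1, hidden, kernel_size, padding=kernel_size // 2)
        self.proj = nn.Linear(hidden, 1)

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        # scores: [batch, num_steps, num_keys] raw attention logits for all decoding steps.
        alpha = scores.softmax(dim=-1)                  # current attention weights
        # Coverage at step t = cumulative attention from steps < t (exclude current step).
        coverage = alpha.cumsum(dim=1) - alpha
        b, t, k = coverage.shape
        feat = self.conv(coverage.reshape(b * t, 1, k))          # [b*t, hidden, k]
        refine = self.proj(F.relu(feat).transpose(1, 2))         # [b*t, k, 1]
        refine = refine.reshape(b, t, k)
        # Suppress positions that past steps have already attended to.
        return scores - refine
```

Because the coverage term is computed with a cumulative sum over all decoding steps at once, this style of refinement keeps the decoder's training-time parallelism, which is the property the abstract highlights; the "self-" versus "cross-coverage" variants differ only in whether the accumulated weights come from the current or the previous decoder layer.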