STAR-Net: A SpaTial Attention Residue Network for Scene Text Recognition.
In this paper, we present a novel SpaTial Attention Residue Network (STAR-Net) for recognising scene texts. Our STAR-Net is equipped with a spatial attention mechanism which employs a spatial transformer to remove the distortions of texts in natural images. This allows the subsequent feature extractor to focus on the rectified text region without being sidetracked by the distortions. Our STAR-Net also exploits residue convolutional blocks to build a very deep feature extractor, which is essential to the successful extraction of discriminative text features for this fine-grained recognition task. Combining the spatial attention mechanism with the residue convolutional blocks, our STAR-Net is the deepest end-to-end trainable neural network for scene text recognition. Experiments have been conducted on five public benchmark datasets. Experimental results show that our STAR-Net can achieve performance comparable to state-of-the-art methods on scene texts with little distortion, and outperform these methods on scene texts with considerable distortion.
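To make the two ingredients named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a spatial transformer module that predicts an affine warp to rectify a distorted text image, followed by a residue (residual) convolutional block of the kind used to build a deep feature extractor. All layer widths, kernel sizes, and the 32x100 input size are illustrative assumptions rather than the paper's actual configuration.

```python
# Sketch of the two components described in the abstract (assumed sizes, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialTransformer(nn.Module):
    """Predicts a 2x3 affine transform from the input and resamples it."""

    def __init__(self):
        super().__init__()
        # Small localisation network; channel counts are illustrative.
        self.localisation = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
        )
        # 10 x 4 x 21 is the feature size for a 1 x 32 x 100 input.
        self.fc = nn.Sequential(
            nn.Linear(10 * 4 * 21, 32), nn.ReLU(), nn.Linear(32, 6)
        )
        # Initialise to the identity transform so training starts from
        # an unwarped view of the input image.
        self.fc[-1].weight.data.zero_()
        self.fc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float)
        )

    def forward(self, x):
        theta = self.fc(self.localisation(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


class ResidueBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut connection."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Shortcut lets gradients bypass the convolutions, enabling depth.
        return F.relu(x + self.body(x))


if __name__ == "__main__":
    # A 32x100 grayscale crop, a common input size for scene text recognisers.
    image = torch.randn(1, 1, 32, 100)
    rectified = SpatialTransformer()(image)   # distortion removal
    features = ResidueBlock(1)(rectified)     # one residue block of a deep extractor
    print(rectified.shape, features.shape)
```

In the full model described by the abstract, many such residue blocks would be stacked after the rectification step and trained end to end with the recognition objective; the sketch above only illustrates the shape of each component.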