
Vision-Language Transformer and Query Generation for Referring Segmentation

Ding, Henghui; Liu, Chang; Wang, Suchen; Jiang, Xudong
Abstract

In this work, we address the challenging task of referring segmentation. The query expression in referring segmentation typically indicates the target object by describing its relationship with others. Therefore, to find the target among all instances in the image, the model must have a holistic understanding of the whole image. To achieve this, we reformulate referring segmentation as a direct attention problem: finding the region in the image where the query language expression is most attended to. We introduce transformer and multi-head attention to build a network with an encoder-decoder attention architecture that "queries" the given image with the language expression. Furthermore, we propose a Query Generation Module, which produces multiple sets of queries with different attention weights that represent diversified comprehensions of the language expression from different aspects. At the same time, to find the best way from these diversified comprehensions based on visual clues, we further propose a Query Balance Module to adaptively select the output features of these queries for better mask generation. Without bells and whistles, our approach is lightweight and achieves new state-of-the-art performance consistently on three referring segmentation datasets, RefCOCO, RefCOCO+, and G-Ref. Our code is available at https://github.com/henghuiding/Vision-Language-Transformer.
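
To make the two modules described in the abstract more concrete, the sketch below is a minimal, illustrative PyTorch interpretation (not the authors' implementation; see the linked repository for that). It assumes hypothetical module names, tensor shapes, and a mean-pooled visual summary: the query generation step builds several query vectors, each a differently attention-weighted combination of the word features, and the query balance step adaptively weights the per-query outputs before mask prediction.

```python
import torch
import torch.nn as nn

class QueryGenerationModule(nn.Module):
    """Sketch: produce Nq queries, each attending to the language words
    with different weights, conditioned on a visual summary (shapes assumed)."""
    def __init__(self, dim, num_queries):
        super().__init__()
        # one single-head attention per query so each query weights the words differently
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
            for _ in range(num_queries)
        )

    def forward(self, lang_feats, vis_feats):
        # lang_feats: (B, T, C) word features; vis_feats: (B, HW, C) flattened image features
        vis_summary = vis_feats.mean(dim=1, keepdim=True)           # (B, 1, C)
        queries = [attn(vis_summary, lang_feats, lang_feats)[0]     # (B, 1, C) each
                   for attn in self.attn]
        return torch.cat(queries, dim=1)                            # (B, Nq, C)

class QueryBalanceModule(nn.Module):
    """Sketch: adaptively weight the per-query features before mask generation."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, query_outputs):
        # query_outputs: (B, Nq, C) features associated with each query
        weights = torch.softmax(self.score(query_outputs), dim=1)   # (B, Nq, 1)
        return (weights * query_outputs).sum(dim=1)                 # (B, C)

if __name__ == "__main__":
    B, T, HW, C, Nq = 2, 12, 196, 256, 16
    qgm, qbm = QueryGenerationModule(C, Nq), QueryBalanceModule(C)
    queries = qgm(torch.randn(B, T, C), torch.randn(B, HW, C))
    fused = qbm(queries)  # stand-in for the transformer decoder's per-query outputs
    print(queries.shape, fused.shape)  # torch.Size([2, 16, 256]) torch.Size([2, 256])
```

In the actual model the balanced query features would condition a transformer decoder over the image features to produce the segmentation mask; the snippet only illustrates how multiple language comprehensions can be generated and adaptively fused.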