
Contextual Non-Local Alignment over Full-Scale Representation for Text-Based Person Search

Chenyang Gao; Guanyu Cai; Xinyang Jiang; Feng Zheng; Jun Zhang; Yifei Gong; Pai Peng; Xiaowei Guo; Xing Sun
Abstract

Text-based person search aims at retrieving a target person in an image gallery using a descriptive sentence of that person. It is very challenging since the modality gap makes effectively extracting discriminative features more difficult. Moreover, the inter-class variance of both pedestrian images and descriptions is small, so comprehensive information is needed to align visual and textual clues across all scales. Most existing methods merely consider the local alignment between images and texts within a single scale (e.g., only global scale or only partial scale), then simply construct alignment at each scale separately. To address this problem, we propose a method that adaptively aligns image and textual features across all scales, called NAFS (i.e., Non-local Alignment over Full-Scale representations). Firstly, a novel staircase network structure is proposed to extract full-scale image features with better locality. Secondly, a BERT with locality-constrained attention is proposed to obtain representations of descriptions at different scales. Then, instead of separately aligning features at each scale, a novel contextual non-local attention mechanism is applied to simultaneously discover latent alignments across all scales. The experimental results show that our method outperforms the state-of-the-art methods by 5.53% in terms of top-1 and 5.35% in terms of top-5 on a text-based person search dataset. The code is available at https://github.com/TencentYoutuResearch/PersonReID-NAFS
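The core idea of the cross-scale alignment can be illustrated with a minimal sketch. Here, features from all image scales are pooled into one set and attended to jointly by all textual features, so a word-level feature is free to align with a global, regional, or local visual feature. This is a toy illustration under assumed shapes, not the authors' implementation; the function name `cross_scale_nonlocal_align` and the cosine-similarity-plus-softmax formulation are illustrative assumptions (see the repository linked above for the actual NAFS code).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_scale_nonlocal_align(img_feats, txt_feats, temperature=0.1):
    """Toy sketch of non-local alignment over full-scale representations.

    img_feats: (Nv, d) visual features pooled from ALL image scales
               (global, part-level, patch-level) into a single set.
    txt_feats: (Nt, d) textual features from all text scales
               (sentence, phrase, word).
    Because attention is computed over the concatenated sets, a textual
    feature at one scale can attend to visual features at any scale.
    """
    # cosine similarity between every textual/visual feature pair
    v = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    t = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    sim = t @ v.T                          # (Nt, Nv)
    attn = softmax(sim / temperature, axis=1)
    attended = attn @ img_feats            # (Nt, d) visual context per text feature
    return attended, attn

# usage: 6 visual features (all scales pooled), 4 textual features, dim 8
rng = np.random.default_rng(0)
attended, attn = cross_scale_nonlocal_align(rng.standard_normal((6, 8)),
                                            rng.standard_normal((4, 8)))
```

Each row of `attn` is a distribution over all visual features regardless of scale, which is what distinguishes this joint scheme from aligning each scale separately.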
