
Class-agnostic Object Detection with Multi-modal Transformer

Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, Ming-Hsuan Yang
Abstract

What constitutes an object? This has been a long-standing question in computer vision. Towards this goal, numerous learning-free and learning-based approaches have been developed to score objectness. However, they generally do not scale well across new domains and novel objects. In this paper, we advocate that existing methods lack a top-down supervision signal governed by human-understandable semantics. For the first time in the literature, we demonstrate that Multi-modal Vision Transformers (MViT) trained with aligned image-text pairs can effectively bridge this gap. Our extensive experiments across various domains and novel objects show the state-of-the-art performance of MViTs in localizing generic objects in images. Based on the observation that existing MViTs do not include multi-scale feature processing and usually require longer training schedules, we develop an efficient MViT architecture using multi-scale deformable attention and late vision-language fusion. We show the significance of MViT proposals in a diverse range of applications including open-world object detection, salient and camouflaged object detection, and supervised and self-supervised detection tasks. Further, MViTs can adaptively generate proposals given a specific language query and thus offer enhanced interactability. Code: https://git.io/J1HPY.
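To make the "late vision-language fusion" idea concrete, below is a minimal PyTorch sketch of a query-conditioned, class-agnostic proposal head: image tokens are encoded on their own first, text tokens for a language query (e.g. "all objects") are fused only afterwards, and DETR-style object queries decode boxes and objectness scores. All module names, shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation from the linked repository; in particular, the paper's full model uses multi-scale deformable attention rather than the plain transformer layers used here.

```python
# Illustrative sketch of late vision-language fusion for class-agnostic
# proposal generation. Not the authors' code; shapes and modules are assumptions.
import torch
import torch.nn as nn


class LateFusionProposalHead(nn.Module):
    def __init__(self, vis_dim=256, txt_dim=256, num_queries=100):
        super().__init__()
        # Learned object queries, as in DETR-style detectors.
        self.queries = nn.Embedding(num_queries, vis_dim)
        # Vision-only encoding happens first; text enters only afterwards ("late" fusion).
        self.vis_encoder = nn.TransformerEncoderLayer(vis_dim, nhead=8, batch_first=True)
        self.txt_proj = nn.Linear(txt_dim, vis_dim)
        # Object queries cross-attend to the fused image+text memory.
        self.decoder = nn.TransformerDecoderLayer(vis_dim, nhead=8, batch_first=True)
        # Class-agnostic box head: (cx, cy, w, h) normalized to [0, 1].
        self.box_head = nn.Sequential(nn.Linear(vis_dim, vis_dim), nn.ReLU(),
                                      nn.Linear(vis_dim, 4), nn.Sigmoid())
        self.score_head = nn.Linear(vis_dim, 1)  # objectness score per query

    def forward(self, vis_tokens, txt_tokens):
        # vis_tokens: (B, N_img, vis_dim) flattened image features from any backbone.
        # txt_tokens: (B, N_txt, txt_dim) token embeddings of the language query.
        memory = self.vis_encoder(vis_tokens)                            # vision-only stage
        memory = torch.cat([memory, self.txt_proj(txt_tokens)], dim=1)   # late fusion
        q = self.queries.weight.unsqueeze(0).expand(vis_tokens.size(0), -1, -1)
        hs = self.decoder(q, memory)                                     # query decoding
        return self.box_head(hs), self.score_head(hs)


# Toy usage with random tensors standing in for backbone / text-encoder outputs.
model = LateFusionProposalHead()
boxes, scores = model(torch.randn(1, 196, 256), torch.randn(1, 8, 256))
print(boxes.shape, scores.shape)  # torch.Size([1, 100, 4]) torch.Size([1, 100, 1])
```

Because the text query only conditions the decoding stage, swapping in a different query (e.g. a specific phrase instead of "all objects") changes which proposals are emitted without retraining the vision encoder, which is the interactability property highlighted in the abstract.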
