Referring Expression Segmentation
Referring expression segmentation aims to produce a pixel-level segmentation mask for a specific object instance in an image or video, given a natural-language referring expression (RE). The task assumes that the RE uniquely identifies the target object in the scene or dialogue, so that the predicted mask is both accurate and unambiguous. The technology has significant applications in human-computer interaction, image editing, and content understanding.
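Results on the benchmarks listed below are most commonly reported as overall IoU (cumulative intersection over cumulative union across the dataset), mean IoU (averaged per expression), and precision at fixed IoU thresholds. The sketch below shows how these metrics are typically computed; the function name, toy masks, and the convention of scoring an empty-vs-empty pair as 1.0 are illustrative assumptions, not taken from any of the listed codebases.

    import numpy as np

    def evaluate_referring_segmentation(pred_masks, gt_masks,
                                        thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
        """Compute overall IoU, mean IoU, and precision@X for a set of
        predicted and ground-truth binary masks (illustrative sketch)."""
        inter_total, union_total = 0, 0
        ious = []
        for pred, gt in zip(pred_masks, gt_masks):
            pred = pred.astype(bool)
            gt = gt.astype(bool)
            inter = np.logical_and(pred, gt).sum()
            union = np.logical_or(pred, gt).sum()
            inter_total += inter
            union_total += union
            # Convention assumed here: both masks empty counts as a perfect match.
            ious.append(inter / union if union > 0 else 1.0)
        ious = np.array(ious)
        return {
            "overall_iou": inter_total / max(union_total, 1),
            "mean_iou": float(ious.mean()),
            **{f"prec@{t}": float((ious > t).mean()) for t in thresholds},
        }

    # Toy usage with two 2x2 masks (values are illustrative only).
    pred = [np.array([[1, 1], [0, 0]]), np.array([[1, 0], [1, 0]])]
    gt = [np.array([[1, 0], [0, 0]]), np.array([[1, 0], [1, 0]])]
    print(evaluate_referring_segmentation(pred, gt))

Video benchmarks such as A2D Sentences and Refer-YouTube-VOS additionally report region similarity and contour accuracy, but the per-mask IoU computation above is the common building block.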
Benchmarks and the best-performing model reported on each (where a result is listed):

A2D Sentences: SgMg (Video-Swin-B)
A2Dre test: RefVOS
CLEVR-Ref+: IEP-Ref (700K prog.)
DAVIS 2017 (val): RefVOS
G-Ref test B
G-Ref val
J-HMDB: SgMg (Video-Swin-B)
PhraseCut: MDETR ENB3
RefCOCO: DETRIS
RefCOCO+ test B
RefCOCO testA
RefCOCO+ testA: HyperSeg
RefCOCO testB: EVP
RefCOCO val: CRIS
RefCOCO+ val: HyperSeg
RefCOCOg-test: UniLSeg-100
RefCOCOg-val: MLCD-Seg-7B
Refer-YouTube-VOS: RefVOS-Human REs
Refer-YouTube-VOS (2021 public validation): GLEE-Pro
ReferIt: PolyFormer-L
Referring Expressions for DAVIS 2016 & 2017: MUTR