Generalized Referring Expression Segmentation
Metrics: cIoU, gIoU
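Both metrics are reported on a 0-100 scale. Below is a minimal sketch of how they are typically computed over binary segmentation masks, assuming the definitions from the GRES paper: cIoU accumulates intersection and union over the whole evaluation set before dividing, while gIoU averages per-sample IoU, scoring a no-target sample as 1.0 only when the predicted mask is also empty. The function name and NumPy-based interface are illustrative, not taken from any particular codebase.

```python
import numpy as np

def ciou_giou(preds, gts):
    """Compute (cIoU, gIoU) from lists of boolean masks of matching shapes.

    preds, gts: sequences of numpy boolean arrays, one pair per sample.
    Assumed definitions (GRES): cIoU = total intersection / total union;
    gIoU = mean per-sample IoU, with empty-GT/empty-pred samples counted as 1.0.
    """
    total_inter = 0
    total_union = 0
    per_sample_iou = []
    for pred, gt in zip(preds, gts):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        total_inter += inter
        total_union += union
        if union == 0:
            # No-target sample predicted correctly as empty: full credit.
            per_sample_iou.append(1.0)
        else:
            # Covers both normal samples and false-positive predictions
            # on no-target samples (inter = 0, union > 0, IoU = 0).
            per_sample_iou.append(inter / union)
    ciou = total_inter / total_union if total_union > 0 else 1.0
    giou = float(np.mean(per_sample_iou))
    return ciou, giou
```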
Results
Performance results of various models on this benchmark
| Model Name | cIoU | gIoU | Paper Title | Repository |
|---|---|---|---|---|
| HDC | 65.42 | 68.28 | CoHD: A Counting-Aware Hierarchical Decoding Framework for Generalized Referring Expression Segmentation | |
| LTS | 52.30 | 52.70 | Locate then Segment: A Strong Pipeline for Referring Image Segmentation | - |
| GROUNDHOG | - | 66.70 | GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | - |
| VLT | 52.51 | 52.00 | Vision-Language Transformer and Query Generation for Referring Segmentation | |
| CRIS | 55.34 | 56.27 | CRIS: CLIP-Driven Referring Image Segmentation | |
| MABP | 65.69 | 68.79 | Bring Adaptive Binding Prototypes to Generalized Referring Expression Segmentation | |
| GSVA-Vicuna-13B-v1.1 | 64.05 | 68.01 | GSVA: Generalized Segmentation via Multimodal Large Language Models | |
| GSVA-Vicuna-7B-v1.1 | 63.29 | 66.47 | GSVA: Generalized Segmentation via Multimodal Large Language Models | |
| GSVA-Llama2-13B | 66.38 | 70.04 | GSVA: Generalized Segmentation via Multimodal Large Language Models | |
| ReLA | 62.42 | 63.60 | GRES: Generalized Referring Expression Segmentation | |
| LAVT | 57.64 | 58.40 | LAVT: Language-Aware Vision Transformer for Referring Image Segmentation | |
| MAttNet | 47.51 | 48.24 | MAttNet: Modular Attention Network for Referring Expression Comprehension | |