Unsupervised Semantic Segmentation with Language-Image Pre-training
Metrics: mIoU

Results
Performance results of various models on this benchmark
| Model Name | mIoU | Paper Title | Repository |
|---|---|---|---|
| TTD (TCL) | 61.1 | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | – |
| TCL | 55.0 | Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs | – |
| CLS-SEG | 68.7 | TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training | – |
| Trident | 70.8 | Harnessing Vision Foundation Models for High-Performance, Training-Free Open Vocabulary Segmentation | – |
| MaskCLIP | 29.3 | Extract Free Dense Labels from CLIP | – |
| TagAlign | 53.9 | TagAlign: Improving Vision-Language Alignment with Multi-Tag Classification | – |
| TTD (MaskCLIP) | 43.1 | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | – |
| ProxyCLIP | 65.0 | ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation | – |
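The mIoU values in the table are mean Intersection-over-Union scores averaged over classes. Below is a minimal sketch of how such a score can be computed from predicted and ground-truth label maps; the function name, class count, and toy data are assumptions for illustration, not code from the benchmark itself.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Compute mean Intersection-over-Union over all classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:  # class absent in both prediction and ground truth: skip
            continue
        intersection = np.logical_and(pred_c, gt_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy example: two 4x4 label maps with 3 classes.
pred = np.random.randint(0, 3, size=(4, 4))
gt = np.random.randint(0, 3, size=(4, 4))
print(f"mIoU: {mean_iou(pred, gt, num_classes=3):.3f}")
```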