HyperAI
Unsupervised Semantic Segmentation with Language-Image Pre-training
Metrics
mIoU
Results
Performance of various models on this benchmark
| Model | mIoU | Paper | Repository |
|---|---|---|---|
| CLS-SEG | 35.3 | TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training | - |
| ProxyCLIP | 39.2 | ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation | - |
| TagAlign | 33.3 | TagAlign: Improving Vision-Language Alignment with Multi-Tag Classification | - |
| Trident | 42.2 | Harnessing Vision Foundation Models for High-Performance, Training-Free Open Vocabulary Segmentation | - |
| TCL | 31.6 | Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs | - |
| COSMOS ViT-B/16 | 31.3 | COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training | - |
| TTD (TCL) | 37.4 | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | - |
| TTD (MaskCLIP) | 26.5 | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | - |
| MaskCLIP | 20.6 | Extract Free Dense Labels from CLIP | - |
| GroupViT (RedCaps) | 27.5 | GroupViT: Semantic Segmentation Emerges from Text Supervision | - |
| ReCo | 15.7 | ReCo: Retrieve and Co-segment for Zero-shot Transfer | - |
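The benchmark's single metric, mIoU (mean Intersection over Union), averages per-class overlap between predicted and ground-truth segmentation masks. A minimal sketch of the standard computation (the function name and toy arrays are illustrative, not from this leaderboard's evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes that appear in the prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 label maps with three classes
pred = np.array([[0, 0, 1], [1, 1, 2]])
gt   = np.array([[0, 0, 1], [1, 2, 2]])
print(round(mean_iou(pred, gt, num_classes=3), 3))  # → 0.722
```

Leaderboard scores here are mIoU in percent, so a printed 0.722 would correspond to 72.2 on the table's scale.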