Unsupervised Semantic Segmentation with Image-Language Pre-training
Metrics
mIoU
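mIoU (mean Intersection-over-Union) averages, over the classes, the ratio of pixels labelled correctly for a class to the union of the predicted and ground-truth pixels of that class. Benchmarks usually accumulate the intersection and union counts over the whole validation set; the helper below is a minimal per-image sketch in NumPy, assuming integer label maps and an ignore index of 255 (a common convention, not stated on this page).

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean Intersection-over-Union between predicted and ground-truth label maps."""
    pred, gt = pred.ravel(), gt.ravel()
    keep = gt != ignore_index            # drop pixels marked as "ignore"
    pred, gt = pred[keep], gt[keep]

    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                    # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy usage: two 2x2 label maps over 3 classes
pred = np.array([[0, 1], [1, 2]])
gt   = np.array([[0, 1], [2, 2]])
print(mean_iou(pred, gt, num_classes=3))  # ≈ 0.667
```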
Results
Performance results of various models on this benchmark.

| Model name | mIoU | Paper Title | Repository |
| --- | --- | --- | --- |
| MaskCLIP | 26.4 | Extract Free Dense Labels from CLIP | - |
| TTD (MaskCLIP) | 31.0 | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | - |
| TagAlign | 37.6 | TagAlign: Improving Vision-Language Alignment with Multi-Tag Classification | - |
| TCL | 33.9 | Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs | - |
| ReCo | 22.3 | ReCo: Retrieve and Co-segment for Zero-shot Transfer | - |
| ProxyCLIP | 39.6 | ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation | - |
| TTD (TCL) | 37.4 | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias | - |
| COSMOS ViT-B/16 | 33.7 | COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training | - |
| GroupViT (RedCaps) | 23.4 | GroupViT: Semantic Segmentation Emerges from Text Supervision | - |
| Trident | 44.3 | Harnessing Vision Foundation Models for High-Performance, Training-Free Open Vocabulary Segmentation | - |
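Most of the listed methods (MaskCLIP, TCL, ProxyCLIP, Trident, ...) rely on the same core operation: dense per-patch image features from an image-language model are compared against text embeddings of the candidate class names, and each location takes the most similar class, so no pixel-level supervision is needed. Below is a minimal sketch of that matching step, with random arrays standing in for the CLIP-style features; the shapes, class names, and feature sources are illustrative assumptions, not taken from any specific entry above.

```python
import numpy as np

# Stand-ins for the dense image features and class-name text embeddings that a
# CLIP-like model would produce (shapes are illustrative, not from any paper).
H, W, D = 32, 32, 512                      # feature-map height/width, embedding dim
class_names = ["background", "cat", "dog"] # hypothetical label set
patch_feats = np.random.randn(H, W, D)     # per-patch image embeddings
text_embeds = np.random.randn(len(class_names), D)  # one embedding per class prompt

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Cosine similarity between every patch feature and every class embedding,
# then assign each patch the most similar class: a training-free label map.
sims = l2_normalize(patch_feats) @ l2_normalize(text_embeds).T   # (H, W, C)
label_map = sims.argmax(axis=-1)                                 # (H, W) class ids
print(label_map.shape, np.unique(label_map))
```

In the listed methods, patch_feats would typically come from a frozen CLIP image encoder and text_embeds from prompt templates over the class names; the resulting label map, upsampled to image resolution, is what the mIoU above is computed on.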