HyperAI
Open-Vocabulary Semantic Segmentation
Open Vocabulary Semantic Segmentation on COCO
Metric: mIoU
Results
Performance results of various models on this benchmark.
| Model name | mIoU | Paper title |
|---|---|---|
| TTD (TCL) | 23.7 | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias |
| LaVG | 23.2 | In Defense of Lazy Visual Grounding for Open-Vocabulary Semantic Segmentation |
| CLIP Surgery (original CLIP without any fine-tuning) | 21.9 | A Closer Look at the Explainability of Contrastive Language-Image Pre-training |
| TTD (MaskCLIP) | 19.4 | TTD: Text-Tag Self-Distillation Enhancing Image-Text Alignment in CLIP to Alleviate Single Tag Bias |
| POMP | - | Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition |
| ZegFormer | - | Decoupling Zero-Shot Semantic Segmentation |
| ZSSeg | - | A Simple Baseline for Open-Vocabulary Semantic Segmentation with Pre-trained Vision-language Model |
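The benchmark's metric, mIoU (mean Intersection-over-Union), averages the per-class overlap between predicted and ground-truth segmentation masks. As a rough illustration only (not the evaluation code used for this leaderboard), a minimal NumPy sketch of the computation, assuming dense integer label maps and skipping classes absent from both prediction and ground truth:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over classes present in pred or target.

    pred, target: integer arrays of per-pixel class labels, same shape.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class appears in neither map; excluded from the mean
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

For example, with `pred = [0, 0, 1, 1]` and `target = [0, 1, 1, 1]`, class 0 has IoU 1/2 and class 1 has IoU 2/3, so mIoU is 7/12 ≈ 0.583. Leaderboard values such as 23.7 are this quantity expressed as a percentage.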