HyperAI
Zero-Shot Semantic Segmentation on COCO-Stuff
Metrics: hIoU (the harmonic mean of seen-class and unseen-class mIoU), reported in both the inductive setting (unseen classes hidden during training) and the transductive setting (unlabeled unseen-class pixels available during training).
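hIoU as commonly defined in the zero-shot segmentation literature is the harmonic mean of the mIoU over seen classes and the mIoU over unseen classes, so a model cannot score well by ignoring one group. A minimal sketch (function and variable names here are illustrative, not from HyperAI):

```python
def hiou(miou_seen: float, miou_unseen: float) -> float:
    """Harmonic mean of seen- and unseen-class mIoU (both in [0, 100]).

    Illustrative helper; assumes the standard hIoU definition
    2 * S * U / (S + U) used in zero-shot segmentation papers.
    """
    if miou_seen + miou_unseen == 0:
        return 0.0
    return 2 * miou_seen * miou_unseen / (miou_seen + miou_unseen)

# Example: 46.0 seen mIoU and 36.0 unseen mIoU
print(round(hiou(46.0, 36.0), 1))  # → 40.4
```

Because the harmonic mean is dominated by the smaller operand, a large gap between seen and unseen performance pulls hIoU down sharply.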
Results

Performance of various models on this benchmark; "-" marks a result not reported in the corresponding paper.

| Model | Inductive hIoU | Transductive hIoU | Paper |
|---|---|---|---|
| ZegCLIP | 40.8 | 48.5 | ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation |
| DeOP | 38.2 | - | Open-Vocabulary Semantic Segmentation with Decoupled One-Pass Network |
| CaGNet | 18.2 | 19.5 | Context-aware Feature Generation for Zero-shot Semantic Segmentation |
| STRICT | - | 34.8 | A Closer Look at Self-training for Zero-Label Semantic Segmentation |
| OTSeg | 41.4 | 49.5 | OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation |
| SPNet | 14.0 | 30.3 | Semantic Projection Network for Zero- and Few-Label Semantic Segmentation |
| MaskCLIP+ | - | 45.0 | Extract Free Dense Labels from CLIP |
| CLIP-RC | 41.2 | 49.7 | Exploring Regional Clues in CLIP for Zero-Shot Semantic Segmentation |
| MVP-SEG+ | - | 45.5 | MVP-SEG: Multi-View Prompt Learning for Open-Vocabulary Semantic Segmentation |
| zsseg | 36.3 | 41.5 | A Simple Baseline for Open-Vocabulary Semantic Segmentation with Pre-trained Vision-language Model |
| ZegFormer | 33.2 | - | Decoupling Zero-Shot Semantic Segmentation |
| OTSeg+ | 41.5 | 49.8 | OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation |
| FreeSeg | - | 45.3 | FreeSeg: Free Mask from Interpretable Contrastive Language-Image Pretraining for Semantic Segmentation |
| SIGN | 20.9 | - | SIGN: Spatial-information Incorporated Generative Network for Generalized Zero-shot Semantic Segmentation |
| ZS5 | 15.0 | 16.2 | Zero-Shot Semantic Segmentation |