Semantic Segmentation on PASCAL Context
Metric: mIoU (mean Intersection over Union)

Results: performance of various models on the PASCAL Context semantic segmentation benchmark, ranked by mIoU.
| Model Name | mIoU (%) | Paper Title |
| --- | --- | --- |
| VPNeXt | 71.1 | VPNeXt -- Rethinking Dense Decoding for Plain Vision Transformer |
| PlainSeg (EVA-02-L) | 71.0 | Minimalist and High-Performance Semantic Segmentation with Plain Vision Transformers |
| InternImage-H | 70.3 | InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions |
| RSSeg-ViT-L (BEiT pretrain) | 68.9 | Representation Separation for Semantic Segmentation with Vision Transformers |
| ViT-Adapter-L (Mask2Former, BEiT pretrain) | 68.2 | Vision Transformer Adapter for Dense Predictions |
| ViT-Adapter-L (UperNet, BEiT pretrain) | 67.5 | Vision Transformer Adapter for Dense Predictions |
| RSSeg-ViT-L | 67.5 | Representation Separation for Semantic Segmentation with Vision Transformers |
| SegViT | 65.3 | SegViT: Semantic Segmentation with Plain Vision Transformers |
| CAA + CAR (ConvNeXt-Large + JPU) | 64.1 | CAR: Class-aware Regularizations for Semantic Segmentation |
| SenFormer (Swin-L) | 64.0 | Efficient Self-Ensemble for Semantic Segmentation |
| Sequential Ensemble (Segformer + HRNet) | 62.1 | Sequential Ensembling for Semantic Segmentation |
| CAA + Simple decoder (EfficientNet-B7) | 60.5 | Channelized Axial Attention -- Considering Channel Relation within Spatial Attention for Semantic Segmentation |
| DPT-Hybrid | 60.46 | Vision Transformers for Dense Prediction |
| CAA (EfficientNet-B7) | 60.1 | Channelized Axial Attention -- Considering Channel Relation within Spatial Attention for Semantic Segmentation |
| HRNetV2 + OCR + RMI (PaddleClas pretrained) | 59.6 | Segmentation Transformer: Object-Contextual Representations for Semantic Segmentation |
| Seg-L-Mask/16 | 59.0 | Segmenter: Transformer for Semantic Segmentation |
| ResNeSt-269 | 58.9 | ResNeSt: Split-Attention Networks |
| DEPICT-SA (ViT-L, multi-scale) | 58.6 | Rethinking Decoders for Transformer-based Semantic Segmentation: A Compression Perspective |
| ResNeSt-200 | 58.4 | ResNeSt: Split-Attention Networks |
| DEPICT-SA (ViT-L, single-scale) | 57.9 | Rethinking Decoders for Transformer-based Semantic Segmentation: A Compression Perspective |
The full leaderboard contains 66 entries; the 20 highest-mIoU results are shown above.
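For reference, mIoU averages the per-class ratio of correctly predicted pixels (intersection) to all pixels predicted or labeled as that class (union). The sketch below shows the generic computation on integer label maps; it is not the official PASCAL Context evaluation script, and the default class count (59) and ignore index (255) are assumptions for illustration.

```python
# Minimal sketch of mean Intersection-over-Union (mIoU), the metric reported above.
# Generic definition only; the 59-class setting and ignore_index=255 are assumptions.
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int = 59,
             ignore_index: int = 255) -> float:
    """Compute mIoU for integer label maps of identical shape."""
    valid = gt != ignore_index            # drop unlabeled / void pixels
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:                    # class absent in both maps; skip it
            continue
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy usage: random 4-class prediction vs. ground truth
rng = np.random.default_rng(0)
gt = rng.integers(0, 4, size=(64, 64))
pred = rng.integers(0, 4, size=(64, 64))
print(f"toy mIoU: {mean_iou(pred, gt, num_classes=4):.3f}")
```

Leaderboard scores are typically reported as this value multiplied by 100, evaluated over the PASCAL Context validation set.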