HyperAI초신경
Instance Segmentation on ADE20K val
Evaluation Metrics: AP, APL, APM, APS
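The COCO-style metrics listed above follow fixed conventions: AP is averaged precision over detections matched to ground truth by mask IoU, while APS, APM, and APL restrict evaluation to small (area < 32²), medium (32²–96²), and large (> 96²) objects. A minimal sketch of these ideas, assuming masks are represented as sets of pixel coordinates and matching is done at a single IoU threshold (the full COCO protocol averages over thresholds 0.50–0.95; the function names here are illustrative, not from any evaluation library):

```python
def mask_iou(a, b):
    """IoU between two binary masks given as sets of (row, col) pixels."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def size_bucket(area):
    """COCO-style area buckets that define APS / APM / APL."""
    if area < 32 ** 2:
        return "small"
    if area < 96 ** 2:
        return "medium"
    return "large"

def average_precision(dets, gts, iou_thr=0.5):
    """dets: list of (score, mask); gts: list of ground-truth masks.
    Greedy matching in descending score order; AP is the area under
    the resulting precision-recall curve (single IoU threshold)."""
    dets = sorted(dets, key=lambda d: -d[0])
    matched, tp, fp = set(), 0, 0
    points = []
    for score, mask in dets:
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j in matched:
                continue
            iou = mask_iou(mask, gt)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= iou_thr:
            matched.add(best_j)
            tp += 1
        else:
            fp += 1
        points.append((tp / (tp + fp), tp / len(gts)))  # (precision, recall)
    ap, prev_r = 0.0, 0.0
    for p, r in points:
        ap += p * (r - prev_r)  # rectangular rule over recall increments
        prev_r = r
    return ap
```

For example, a single detection that exactly matches the only ground-truth mask yields an AP of 1.0, while a 100-pixel object falls in the "small" bucket and so contributes only to APS.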
Evaluation Results

Performance of each model on this benchmark:

| Model | AP | APL | APM | APS | Paper Title |
| --- | --- | --- | --- | --- | --- |
| OneFormer (InternImage-H, emb_dim=1024, single-scale, 896x896, COCO-Pretrained) | 44.2 | 64.3 | 49.9 | 23.7 | OneFormer: One Transformer to Rule Universal Image Segmentation |
| OpenSeeD | 42.6 | - | - | - | A Simple Framework for Open-Vocabulary Segmentation and Detection |
| OneFormer (DiNAT-L, single-scale, 1280x1280, COCO-pretrain) | 40.2 | 59.7 | 44.4 | 19.2 | OneFormer: One Transformer to Rule Universal Image Segmentation |
| X-Decoder (Davit-d5, Deform, single-scale, 1280x1280) | 38.7 | 59.6 | 43.3 | 18.9 | Generalized Decoding for Pixel, Image, and Language |
| OneFormer (DiNAT-L, single-scale) | 36.0 | - | - | - | OneFormer: One Transformer to Rule Universal Image Segmentation |
| OneFormer (Swin-L, single-scale) | 35.9 | - | - | - | OneFormer: One Transformer to Rule Universal Image Segmentation |
| X-Decoder (L) | 35.8 | - | - | - | Generalized Decoding for Pixel, Image, and Language |
| DiNAT-L (Mask2Former, single-scale) | 35.4 | 55.5 | 39.0 | 16.3 | Dilated Neighborhood Attention Transformer |
| Mask2Former (Swin-L, single-scale) | 34.9 | 54.7 | 40.0 | 16.3 | Masked-attention Mask Transformer for Universal Image Segmentation |
| Mask2Former (Swin-L + FAPN) | 33.4 | 54.6 | 37.6 | 14.6 | Masked-attention Mask Transformer for Universal Image Segmentation |
| Mask2Former (ResNet50) | 26.4 | - | - | 10.4 | Masked-attention Mask Transformer for Universal Image Segmentation |
| Mask2Former (ResNet-50) | - | 43.1 | 28.9 | - | Masked-attention Mask Transformer for Universal Image Segmentation |