Image Clustering on ImageNet
Evaluation Metrics
ARI
Accuracy
NMI
A minimal sketch of how these three metrics are typically computed is shown below.
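For reference, this is a minimal sketch, assuming scikit-learn and SciPy are available, of how ARI, clustering Accuracy (with Hungarian matching of cluster IDs to class labels), and NMI are commonly computed from predicted cluster assignments and ground-truth labels. Variable and function names here are illustrative and are not taken from any of the listed papers' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score


def clustering_accuracy(labels_true, labels_pred):
    """Accuracy after optimally matching predicted cluster IDs to class labels."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    n = max(labels_true.max(), labels_pred.max()) + 1
    # Contingency matrix: rows are predicted clusters, columns are true classes.
    contingency = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(labels_true, labels_pred):
        contingency[p, t] += 1
    # Hungarian algorithm finds the cluster-to-class mapping that maximizes matches.
    row_ind, col_ind = linear_sum_assignment(contingency, maximize=True)
    return contingency[row_ind, col_ind].sum() / labels_true.size


# Toy example (not benchmark data): clusters are a permutation of the true classes.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]
print("ARI :", adjusted_rand_score(y_true, y_pred))          # 1.0
print("ACC :", clustering_accuracy(y_true, y_pred))          # 1.0
print("NMI :", normalized_mutual_info_score(y_true, y_pred)) # 1.0
```

All three metrics are invariant to permutations of the cluster IDs, which is why a perfect but relabeled clustering scores 1.0 in the toy example.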
Evaluation Results
Performance of each model on this benchmark.
| Model | ARI | Accuracy | NMI | Paper Title | Repository |
|---|---|---|---|---|---|
| MIM-Refiner (D2V2-ViT-H/14) | 42.2 | 67.3 | 87.2 | MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations | - |
| TEMI DINO (ViT-B) | 45.9 | 58.0 | 81.4 | Exploring the Limits of Deep Image Clustering using Pretrained Models | - |
| TURTLE (CLIP + DINOv2) | 62.5 | 72.9 | 88.2 | Let Go of Your Labels with Unsupervised Transfer | - |
| MIM-Refiner (MAE-ViT-H/14) | 45.5 | 64.6 | 85.3 | MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations | - |
| TEMI MSN (ViT-L) | 48.4 | 61.6 | 82.5 | Exploring the Limits of Deep Image Clustering using Pretrained Models | - |
| MAE-CT (ViT-H/16 best) | - | 58.0 | 81.8 | Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget | - |
| SCAN | - | 39.9 | 72.0 | SCAN: Learning to Classify Images without Labels | - |
| SeLa | - | - | 66.4 | Self-labelling via simultaneous clustering and representation learning | - |
| SeCu | 41.9 | 53.5 | 79.4 | Stable Cluster Discrimination for Deep Clustering | - |
| CoKe | 35.6 | 47.6 | 76.2 | Stable Cluster Discrimination for Deep Clustering | - |
| PRO-DSC | - | 65.0 | 83.4 | Exploring a Principled Framework For Deep Subspace Clustering | - |
| MAE-CT (ViT-H/16 mean) | - | 57.1 | 81.7 | Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget | - |