Image Clustering on ImageNet
Metrics: ARI (Adjusted Rand Index), Accuracy, NMI (Normalized Mutual Information)
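All three metrics compare a model's predicted cluster assignments against the ground-truth ImageNet class labels. As a rough illustration only (this is not the leaderboard's evaluation code), ARI and NMI are available directly in scikit-learn, while clustering accuracy is conventionally computed by finding the best one-to-one matching between predicted clusters and true classes with the Hungarian algorithm. The sketch below assumes hypothetical NumPy arrays `y_true` (class labels) and `y_pred` (cluster IDs).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Best-match clustering accuracy via Hungarian matching of clusters to classes."""
    n = max(y_true.max(), y_pred.max()) + 1
    # Contingency matrix: rows = predicted clusters, columns = true classes.
    counts = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1
    # Maximize the number of samples on the matched (cluster, class) pairs.
    row_ind, col_ind = linear_sum_assignment(counts, maximize=True)
    return counts[row_ind, col_ind].sum() / y_true.size

# Toy example with made-up labels; a real evaluation would use the ImageNet
# validation labels and the model's predicted cluster assignments.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])  # clusters permuted relative to classes

print("ACC:", clustering_accuracy(y_true, y_pred))           # 1.0
print("ARI:", adjusted_rand_score(y_true, y_pred))           # 1.0
print("NMI:", normalized_mutual_info_score(y_true, y_pred))  # 1.0
```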
Results
Performance results of various models on this benchmark
| Model Name | ARI | Accuracy | NMI | Paper Title |
|---|---|---|---|---|
| MIM-Refiner (D2V2-ViT-H/14) | 42.2 | 67.3 | 87.2 | MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations |
| TEMI DINO (ViT-B) | 45.9 | 58.0 | 81.4 | Exploring the Limits of Deep Image Clustering using Pretrained Models |
| TURTLE (CLIP + DINOv2) | 62.5 | 72.9 | 88.2 | Let Go of Your Labels with Unsupervised Transfer |
| MIM-Refiner (MAE-ViT-H/14) | 45.5 | 64.6 | 85.3 | MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations |
| TEMI MSN (ViT-L) | 48.4 | 61.6 | 82.5 | Exploring the Limits of Deep Image Clustering using Pretrained Models |
| MAE-CT (ViT-H/16 best) | - | 58.0 | 81.8 | Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget |
| SCAN | - | 39.9 | 72.0 | SCAN: Learning to Classify Images without Labels |
| SeLa | - | - | 66.4 | Self-labelling via simultaneous clustering and representation learning |
| SeCu | 41.9 | 53.5 | 79.4 | Stable Cluster Discrimination for Deep Clustering |
| CoKe | 35.6 | 47.6 | 76.2 | Stable Cluster Discrimination for Deep Clustering |
| PRO-DSC | - | 65.0 | 83.4 | Exploring a Principled Framework For Deep Subspace Clustering |
| MAE-CT (ViT-H/16 mean) | - | 57.1 | 81.7 | Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget |