Image Clustering on ImageNet-Dog-15
Metrics
- ARI
- Accuracy
- Backbone
- NMI
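ARI (adjusted Rand index) and NMI (normalized mutual information) are standard cluster-agreement scores, and Accuracy on clustering benchmarks is typically the clustering accuracy obtained by matching predicted clusters to ground-truth classes with the Hungarian algorithm. The sketch below shows one common way to compute all three with scikit-learn and SciPy; the random labels are purely illustrative and the `clustering_accuracy` helper is an assumption of this sketch, not code from the benchmark or any listed paper.

```python
# Minimal sketch: computing ARI, NMI, and Hungarian-matched clustering accuracy
# for predicted cluster assignments against ground-truth labels.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score


def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Best-match accuracy: map cluster IDs to classes via the Hungarian algorithm."""
    n = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1  # co-occurrence counts of (predicted cluster, true class)
    row_ind, col_ind = linear_sum_assignment(cost, maximize=True)
    return cost[row_ind, col_ind].sum() / y_true.size


# Toy example with 15 classes (as in ImageNet-Dog-15) and random predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 15, size=1000)
y_pred = rng.integers(0, 15, size=1000)

print("ACC:", clustering_accuracy(y_true, y_pred))
print("ARI:", adjusted_rand_score(y_true, y_pred))
print("NMI:", normalized_mutual_info_score(y_true, y_pred))
```

On random assignments like these, ARI and NMI sit near zero and accuracy near chance level (about 1/15); leaderboard values above 0.9 indicate near-perfect recovery of the 15 dog classes.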
Results
Performance results of various models on this benchmark.
| Model name | ARI | Accuracy | Backbone | NMI | Paper Title |
|---|---|---|---|---|---|
| DPAC | 0.598 | 0.726 | ResNet-34 | 0.667 | Deep Online Probability Aggregation Clustering |
| C3 | 0.28 | 0.434 | - | 0.448 | C3: Cross-instance guided Contrastive Clustering |
| DAC | - | 0.275 | - | 0.219 | Deep Adaptive Image Clustering |
| DCCM | - | 0.383 | - | 0.321 | Deep Comprehensive Correlation Mining for Image Clustering |
| MiCE | 0.286 | 0.439 | - | 0.423 | MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering |
| DEC | - | 0.195 | - | 0.122 | Unsupervised Deep Embedding for Clustering Analysis |
| MAE-CT (best) | 0.879 | 0.943 | ViT-H/16 | 0.904 | Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget |
| CC | 0.274 | 0.429 | - | 0.445 | Contrastive Clustering |
| ConCURL | 0.531 | 0.695 | - | 0.63 | Representation Learning for Clustering via Building Consensus |
| TCL | 0.516 | 0.644 | - | 0.623 | Twin Contrastive Learning for Online Clustering |
| ProPos* | 0.675 | 0.775 | ResNet-34 | 0.737 | Learning Representation for Clustering via Prototype Scattering and Positive Sampling |
| IDFD | 0.413 | 0.591 | - | 0.546 | Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation |
| VAE | - | 0.179 | - | 0.107 | Auto-Encoding Variational Bayes |
| PRO-DSC | - | 0.840 | - | 0.812 | Exploring a Principled Framework For Deep Subspace Clustering |
| GAN | - | 0.174 | - | 0.121 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks |
| SPICE | 0.526 | 0.675 | - | 0.627 | SPICE: Semantic Pseudo-labeling for Image Clustering |
| CoHiClust | 0.232 | 0.355 | ResNet-50 | 0.411 | Contrastive Hierarchical Clustering |
| JULE | - | 0.138 | - | 0.054 | Joint Unsupervised Learning of Deep Representations and Image Clusters |
| MAE-CT (mean) | 0.821 | 0.874 | ViT-H/16 | 0.882 | Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget |
| ProPos | 0.627 | 0.745 | ResNet-34 | 0.692 | Learning Representation for Clustering via Prototype Scattering and Positive Sampling |