
Exploring the Limits of Deep Image Clustering using Pretrained Models

Nikolas Adaloglou, Felix Michels, Hamza Kalisch, Markus Kollmann

Abstract

We present a general methodology that learns to classify images without labels by leveraging pretrained feature extractors. Our approach involves self-distillation training of clustering heads based on the fact that nearest neighbours in the pretrained feature space are likely to share the same label. We propose a novel objective that learns associations between image features by introducing a variant of pointwise mutual information together with instance weighting. We demonstrate that the proposed objective is able to attenuate the effect of false positive pairs while efficiently exploiting the structure in the pretrained feature space. As a result, we improve the clustering accuracy over k-means on 17 different pretrained models by 6.1% and 12.2% on ImageNet and CIFAR100, respectively. Finally, using self-supervised vision transformers, we achieve a clustering accuracy of 61.6% on ImageNet. The code is available at https://github.com/HHU-MMBS/TEMI-official-BMVC2023.
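To make the core idea concrete, below is a minimal PyTorch-style sketch of a self-distillation loss over nearest-neighbour pairs with a pointwise-mutual-information-style score. It is an illustration under assumptions, not the paper's exact objective: the function name `pmi_style_loss`, the exponent `beta`, and the `cluster_prior` estimate are hypothetical choices made here for clarity; the authoritative implementation is in the linked repository.

```python
import torch
import torch.nn.functional as F

def pmi_style_loss(student_logits, teacher_logits, cluster_prior, beta=0.6):
    """Illustrative self-distillation loss for one nearest-neighbour pair.

    student_logits: cluster logits for image x from the trainable head
    teacher_logits: cluster logits for its nearest neighbour x' (e.g. from an
                    EMA copy of the head; treated as a fixed target here)
    cluster_prior:  running estimate of the marginal cluster distribution p(k)
    beta:           exponent that softens the pairwise agreement term; a
                    hypothetical knob standing in for the paper's weighting
    """
    q = F.softmax(student_logits, dim=-1)           # q(k | x)
    p = F.softmax(teacher_logits, dim=-1).detach()  # p(k | x'), no gradient
    # PMI-style score: agreement of the pair relative to the cluster marginal,
    # summed over clusters. Maximising it pulls neighbours into the same cluster
    # while the prior in the denominator discourages collapse to one cluster.
    score = ((q * p).pow(beta) / cluster_prior).sum(dim=-1)
    return -torch.log(score + 1e-8).mean()

# Toy usage with random tensors standing in for pretrained features.
B, K = 32, 100                        # batch size, number of clusters
student_logits = torch.randn(B, K, requires_grad=True)
teacher_logits = torch.randn(B, K)    # would come from the teacher head
cluster_prior = torch.full((K,), 1.0 / K)
loss = pmi_style_loss(student_logits, teacher_logits, cluster_prior)
loss.backward()
print(loss.item())
```

In this sketch the nearest neighbours are assumed to have been mined once in the frozen pretrained feature space (e.g. by cosine similarity), so only the lightweight clustering heads are trained.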

