Long Tail Learning
Long-tailed learning is one of the most challenging problems in visual recognition. It aims to train high-performance models from data whose category frequencies follow a long-tailed distribution: a few head classes contain most of the images, while many tail classes have only a handful of samples each. The goal is to improve recognition of minority classes under this imbalance, yielding more equitable performance across all categories. The task is practically important because real-world data is often skewed, and handling that skew improves the generalization and applicability of the resulting models.
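The imbalance factor ρ that appears in several benchmark names below is the ratio between the largest and smallest class sizes. A minimal sketch of the exponential per-class count profile commonly used to build CIFAR-10-LT and CIFAR-100-LT (the exact recipe varies by paper, so treat this as an illustrative assumption rather than the definitive construction):

```python
# Sketch of an exponentially decaying long-tailed class profile.
# The head class keeps n_max images; the tail class keeps n_max / rho.
def long_tailed_counts(n_max, num_classes, rho):
    """Per-class sample counts that decay exponentially from n_max
    down to n_max / rho across num_classes classes."""
    return [
        int(n_max * rho ** (-i / (num_classes - 1)))
        for i in range(num_classes)
    ]

# Example: CIFAR-10 has 5000 training images per class; with rho=100
# the tail class is reduced to 50 images.
counts = long_tailed_counts(n_max=5000, num_classes=10, rho=100)
```

The same profile with `num_classes=100` and `n_max=500` gives the CIFAR-100-LT variants listed below.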
Benchmarks and best-performing models:

CelebA-5: OPeN (WideResNet-28-10)
CIFAR-10-LT (ρ=10): TADE
CIFAR-10-LT (ρ=100): GLMC+MaxNorm (ResNet-34, channel x4)
CIFAR-10-LT (ρ=200)
CIFAR-10-LT (ρ=50): GLMC + SAM
CIFAR-100-LT (ρ=10): TADE
CIFAR-100-LT (ρ=100): LIFT (ViT-B/16, ImageNet-21K pre-training)
CIFAR-100-LT (ρ=200): PaCo + SAM
CIFAR-100-LT (ρ=50): LTR-weight-balancing
COCO-MLT: LMPT (ViT-B/16)
EGTEA: CDB-loss (3D-ResNeXt101)
ImageNet-GLT: RIDE + IFL
ImageNet-LT: VL-LTR (ViT-B-16)
iNaturalist 2018: LIFT (ViT-L/14@336px)
Lot-insts: Character-BERT+RS
MIMIC-CXR-LT: Decoupling (cRT)
mini-ImageNet-LT: TailCalibX
NIH-CXR-LT
Places-LT
VOC-MLT
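Many of the methods above pair strong representation learning with some form of class rebalancing. As one simple illustrative baseline (not the method behind any particular entry above), per-class loss weights can be derived from the training counts using the class-balanced "effective number of samples" weighting:

```python
# Class-balanced weights (Cui et al.-style "effective number of samples"):
# w_i = (1 - beta) / (1 - beta ** n_i), then normalized so the weights
# sum to the number of classes. Rarer classes receive larger weights.
def class_balanced_weights(counts, beta=0.9999):
    weights = [(1 - beta) / (1 - beta ** n) for n in counts]
    scale = len(counts) / sum(weights)
    return [w * scale for w in weights]

# Example with a head class (5000), a mid class (500), and a tail class (50):
weights = class_balanced_weights([5000, 500, 50])
```

Such weights are typically plugged into a weighted cross-entropy loss; beta close to 1 approaches inverse-frequency weighting, while beta=0 gives uniform weights.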