Long-Tail Learning on iNaturalist 2018
Metric: Top-1 Accuracy

Results
Performance results of various models on this benchmark.

Model Name | Top-1 Accuracy | Paper Title
LIFT (ViT-L/14@336px) | 87.4% | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
LIFT (ViT-L/14) | 85.2% | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
GML (ViT-B-16) | 82.1% | Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels
VL-LTR (ViT-B-16) | 81.0% | VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition
LIFT (ViT-B/16) | 80.4% | Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
RAC (ViT-B-16) | 80.24% | Retrieval Augmented Classification for Long-Tail Visual Recognition
GPaCo (2-R152) | 79.8% | Generalized Parametric Contrastive Learning
GPaCo (ResNet-152) | 78.1% | Generalized Parametric Contrastive Learning
TADE (ResNet-152) | 77% | Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition
ProCo (ResNet-50) | 75.8% | Probabilistic Contrastive Learning for Long-Tailed Visual Recognition
MDCS (ResNet-50) | 75.6% | MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition
GPaCo (ResNet-50) | 75.4% | Generalized Parametric Contrastive Learning
CBD-ENS (ResNet-101) | 75.3% | Class-Balanced Distillation for Long-Tailed Visual Recognition
PaCo (ResNet-152) | 75.2% | Parametric Contrastive Learning
DeiT-LT | 75.1% | DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets
APA (SE-ResNet-50) | 74.8% | Adaptive Parametric Activation
VL-LTR (ResNet-50) | 74.6% | VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition
GML (ResNet-50) | 74.5% | Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels
NCL (ResNet-50) | 74.2% | Nested Collaborative Learning for Long-Tailed Visual Recognition
BatchFormer (ResNet-50, RIDE) | 74.1% | BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning
Showing the top 20 of 43 leaderboard entries.
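All entries are ranked by top-1 accuracy: the fraction of test images whose single highest-scoring class prediction matches the ground-truth species label (iNaturalist 2018 covers 8,142 classes). A minimal sketch of how this metric is typically computed is shown below; the array names and the random data are purely illustrative, not taken from any listed method.

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class matches the ground-truth label."""
    predictions = logits.argmax(axis=1)           # index of the top-scoring class per sample
    return float((predictions == labels).mean())  # proportion of correct top-1 predictions

# Illustrative usage with random scores over iNaturalist 2018's 8,142 classes
rng = np.random.default_rng(0)
num_samples, num_classes = 16, 8142
logits = rng.normal(size=(num_samples, num_classes))
labels = rng.integers(0, num_classes, size=num_samples)
print(f"Top-1 accuracy: {top1_accuracy(logits, labels):.2%}")
```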