Image Classification on iNaturalist 2018
Metrics
Top-1 Accuracy

Results
Performance results of various models on this benchmark.
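Top-1 accuracy, the metric reported in the table below, is the fraction of test images for which the model's highest-scoring class matches the ground-truth label. The following is a minimal sketch of that computation; the function name, array shapes, and toy values are illustrative assumptions, not the benchmark's official evaluation code.

```python
import numpy as np

def top1_accuracy(scores: np.ndarray, labels: np.ndarray) -> float:
    """Compute top-1 accuracy from per-class scores.

    scores: (num_images, num_classes) array of logits or probabilities;
            iNaturalist 2018 has 8,142 species classes.
    labels: (num_images,) array of ground-truth class indices.
    """
    predictions = scores.argmax(axis=1)           # highest-scoring class per image
    return float((predictions == labels).mean())  # fraction predicted correctly

# Toy example with 3 images and 4 classes (illustrative values only).
scores = np.array([[0.1, 0.7, 0.1, 0.1],
                   [0.3, 0.2, 0.4, 0.1],
                   [0.6, 0.1, 0.2, 0.1]])
labels = np.array([1, 2, 3])
print(f"Top-1 accuracy: {top1_accuracy(scores, labels):.2%}")  # 66.67%
```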
| Model | Top-1 Accuracy | Paper Title |
|---|---|---|
| µ2Net+ (ViT-L/16) | 80.97% | A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems |
| ResNet-50 | 49.7% | ClusterFit: Improving Generalization of Visual Representations |
| Barlow Twins (ResNet-50) | 46.5% | Barlow Twins: Self-Supervised Learning via Redundancy Reduction |
| LeViT-384 | 66.9% | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference |
| ResNet-50 | 69.8% | Grafit: Learning fine-grained image representations with coarse labels |
| BS-CMO (ResNet-50) | 74.0% | The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification |
| CaiT-M-36 U 224 | 78% | Going deeper with Image Transformers |
| ResNeXt-101 (SAMix) | 70.54% | Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup |
| GPaCo (ResNet-152) | 78.1% | Generalized Parametric Contrastive Learning |
| RIDE (ResNet-50) | 72.2% | Long-tailed Recognition by Routing Diverse Distribution-Aware Experts |
| ResNet-152 | 69.05% | Class-Balanced Loss Based on Effective Number of Samples |
| CeiT-T (384 finetune resolution) | 72.2% | Incorporating Convolution Designs into Visual Transformers |
| CeiT-S (384 finetune resolution) | 79.4% | Incorporating Convolution Designs into Visual Transformers |
| LeViT-128S | 55.2% | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference |
| RegNet-8GF | 81.2% | Grafit: Learning fine-grained image representations with coarse labels |
| ResNet-50 (AutoMix) | 64.73% | AutoMix: Unveiling the Power of Mixup for Stronger Classifiers |
| Hiera-H (448px) | 87.3% | Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles |
| SWAG (ViT H/14) | 86.0% | Revisiting Weakly Supervised Pre-Training of Visual Perception Models |
| LeViT-256 | 66.2% | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference |
| LeViT-192 | 60.4% | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference |
The table above lists 20 of the 60 entries on the full leaderboard.