Few-Shot Image Classification on ImageNet-10
Metrics
Top-1 Accuracy
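For reference, below is a minimal sketch of how top-1 accuracy is typically computed from model outputs. The function name, array shapes, and the random data in the usage example are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class matches the true label.

    logits: array of shape (num_samples, num_classes)
    labels: array of shape (num_samples,) with integer class indices
    """
    predictions = logits.argmax(axis=1)           # top-scoring class per sample
    return float((predictions == labels).mean())  # proportion of correct top-1 predictions

# Illustrative usage with random data (10 classes, as in ImageNet-10)
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
labels = rng.integers(0, 10, size=1000)
print(f"Top-1 accuracy: {top1_accuracy(logits, labels):.2%}")
```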
Results
Performance results of various models on this benchmark.
Model Name | Top-1 Accuracy | Paper Title | Repository |
---|---|---|---|
MAWS (ViT-H) | 82.5 | The effectiveness of MAE pre-pretraining for billion-scale pretraining | |
ViT-MoE-15B (Every-2) | 84.29 | Scaling Vision with Sparse Mixture of Experts | |
MAWS (ViT-2B) | 83.7 | The effectiveness of MAE pre-pretraining for billion-scale pretraining | |
V-MoE-H/14 (Last-5) | 80.1 | Scaling Vision with Sparse Mixture of Experts | |
MAWS (ViT-6.5B) | 84.6 | The effectiveness of MAE pre-pretraining for billion-scale pretraining | |
V-MoE-H/14 (Every-2) | 80.33 | Scaling Vision with Sparse Mixture of Experts | |
ViT-H/14 | 79.01 | Scaling Vision with Sparse Mixture of Experts | |