Few-Shot Image Classification on ImageNet (1-Shot)
Evaluation Metric
Top-1 Accuracy
Evaluation Results
Performance of each model on this benchmark:
| Model | Top-1 Accuracy | Paper Title | Repository |
|---|---|---|---|
| ViT-H/14 | 62.34 | Scaling Vision with Sparse Mixture of Experts | |
| ViT-MoE-15B (Every-2) | 68.66 | Scaling Vision with Sparse Mixture of Experts | |
| V-MoE-L/16 (Every-2) | 62.41 | Scaling Vision with Sparse Mixture of Experts | |
| MAWS (ViT-6.5B) | 63.6 | The effectiveness of MAE pre-pretraining for billion-scale pretraining | |
| MAWS (ViT-2B) | 62.1 | The effectiveness of MAE pre-pretraining for billion-scale pretraining | |
| MAWS (ViT-H) | 57.1 | The effectiveness of MAE pre-pretraining for billion-scale pretraining | |
| V-MoE-H/14 (Last-5) | 62.95 | Scaling Vision with Sparse Mixture of Experts | |
| V-MoE-H/14 (Every-2) | 63.38 | Scaling Vision with Sparse Mixture of Experts | |
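For quick comparisons, the rows above can be held as a small Python list and ranked by Top-1 accuracy. This is a minimal sketch; the model names and scores are copied directly from the table, and the variable names are illustrative only.

```python
# Benchmark rows from the table above: (model name, Top-1 accuracy %).
results = [
    ("ViT-H/14", 62.34),
    ("ViT-MoE-15B (Every-2)", 68.66),
    ("V-MoE-L/16 (Every-2)", 62.41),
    ("MAWS (ViT-6.5B)", 63.6),
    ("MAWS (ViT-2B)", 62.1),
    ("MAWS (ViT-H)", 57.1),
    ("V-MoE-H/14 (Last-5)", 62.95),
    ("V-MoE-H/14 (Every-2)", 63.38),
]

# Sort descending by Top-1 accuracy to get a ranked leaderboard.
ranked = sorted(results, key=lambda row: row[1], reverse=True)

best_model, best_acc = ranked[0]
print(f"Best: {best_model} at {best_acc}% Top-1")
```

On this data, the top entry is ViT-MoE-15B (Every-2) at 68.66% Top-1.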