Image Classification on iNaturalist 2018
Evaluation Metric
Top-1 Accuracy
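
Top-1 accuracy counts a prediction as correct only when the single highest-scoring class matches the ground-truth label. The sketch below shows a minimal way to compute it; the `logits` and `labels` arrays are hypothetical placeholders and are not tied to any model on this leaderboard.

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class equals the true label.

    logits: (num_samples, num_classes) array of model scores.
    labels: (num_samples,) array of ground-truth class indices.
    """
    predictions = logits.argmax(axis=1)           # index of the top-scoring class per sample
    return float((predictions == labels).mean())  # proportion of exact matches

# Toy example with 3 samples and 4 classes (iNaturalist 2018 itself has 8,142 classes).
logits = np.array([[0.1, 0.7, 0.1, 0.1],
                   [0.3, 0.2, 0.4, 0.1],
                   [0.25, 0.25, 0.4, 0.1]])
labels = np.array([1, 2, 0])
print(f"Top-1 accuracy: {top1_accuracy(logits, labels):.2%}")  # 2 of 3 correct -> 66.67%
```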
Evaluation Results
Performance of each model on this benchmark
Comparison Table
| Model | Top-1 Accuracy (%) |
|---|---|
| a-continual-development-methodology-for-large | 80.97 |
| clusterfit-improving-generalization-of-visual | 49.7 |
| barlow-twins-self-supervised-learning-via | 46.5 |
| levit-a-vision-transformer-in-convnet-s | 66.9 |
| grafit-learning-fine-grained-image | 69.8 |
| the-majority-can-help-the-minority-context | 74.0 |
| going-deeper-with-image-transformers | 78 |
| boosting-discriminative-visual-representation | 70.54 |
| generalized-parametric-contrastive-learning | 78.1 |
| long-tailed-recognition-by-routing-diverse-1 | 72.2 |
| class-balanced-loss-based-on-effective-number | 69.05 |
| incorporating-convolution-designs-into-visual | 72.2 |
| incorporating-convolution-designs-into-visual | 79.4 |
| levit-a-vision-transformer-in-convnet-s | 55.2 |
| grafit-learning-fine-grained-image | 81.2 |
| automix-unveiling-the-power-of-mixup | 64.73 |
| hiera-a-hierarchical-vision-transformer | 87.3 |
| revisiting-weakly-supervised-pre-training-of | 86.0 |
| levit-a-vision-transformer-in-convnet-s | 66.2 |
| levit-a-vision-transformer-in-convnet-s | 60.4 |
| feature-space-augmentation-for-long-tailed | 65.91 |
| metaformer-a-unified-meta-framework-for-fine | 84.3 |
| incorporating-convolution-designs-into-visual | 64.3 |
| class-balanced-loss-based-on-effective-number | 67.98 |
| boosting-discriminative-visual-representation | 64.84 |
| densenets-reloaded-paradigm-shift-beyond | 77.0 |
| unsupervised-learning-of-visual-features-by | 48.6 |
| test-agnostic-long-tailed-recognition-by-test | 72.9 |
| class-balanced-distillation-for-long-tailed | 73.6 |
| masked-autoencoders-are-scalable-vision | 86.8 |
| mixmim-mixed-and-masked-image-modeling-for | 80.3 |
| vl-ltr-learning-class-wise-visual-linguistic | 74.6 |
| the-effectiveness-of-mae-pre-pretraining-for | 91.3 |
| three-things-everyone-should-know-about | 75.3 |
| omnivec2-a-novel-transformer-based-network | 94.6 |
| class-balanced-distillation-for-long-tailed | 75.3 |
| resmlp-feedforward-networks-for-image | 64.3 |
| class-balanced-loss-based-on-effective-number | 64.16 |
| training-data-efficient-image-transformers | 79.5 |
| incorporating-convolution-designs-into-visual | 73.3 |
| parametric-contrastive-learning | 75.2 |
| metasaug-meta-semantic-augmentation-for-long | 68.75 |
| feature-space-augmentation-for-long-tailed | 69.08 |
| densenets-reloaded-paradigm-shift-beyond | 81.8 |
| densenets-reloaded-paradigm-shift-beyond | 79.1 |
| automix-unveiling-the-power-of-mixup | 70.49 |
| internimage-exploring-large-scale-vision | 92.6 |
| disentangling-label-distribution-for-long | 70.0 |
| densenets-reloaded-paradigm-shift-beyond | 80.5 |
| omnivec-learning-robust-representations-with | 93.8 |
| feature-space-augmentation-for-long-tailed | 68.39 |
| omnivore-a-single-model-for-many-visual | 84.1 |
| levit-a-vision-transformer-in-convnet-s | 54 |
| vl-ltr-learning-class-wise-visual-linguistic | 81.0 |
| metaformer-a-unified-meta-framework-for-fine | 88.7 |
| generalized-parametric-contrastive-learning | 75.4 |
| vision-models-are-more-robust-and-fair-when | 84.7 |
| the-inaturalist-species-classification-and | 60.20 |
| mixmim-mixed-and-masked-image-modeling-for | 77.5 |
| resmlp-feedforward-networks-for-image | 60.2 |