Image Classification on ObjectNet
Metrics
Top-1 Accuracy
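
Top-1 accuracy is the fraction of test images for which the model's single highest-scoring prediction matches the ground-truth label. A minimal sketch of how this metric is typically computed, assuming `logits` holds raw model scores and `labels` holds integer class ids (illustrative names, not tied to any entry in the table below):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class matches the label.

    logits: shape (num_samples, num_classes), raw model scores.
    labels: shape (num_samples,), integer ground-truth class ids.
    """
    predictions = logits.argmax(axis=1)           # top-1 prediction per sample
    return float((predictions == labels).mean())  # e.g. 0.725 -> 72.5% top-1

# Illustrative usage with random data (not real ObjectNet evaluations).
# 113 is used here because ObjectNet evaluation is commonly restricted to
# the classes that overlap with ImageNet; adjust for your own label set.
rng = np.random.default_rng(0)
fake_logits = rng.normal(size=(1000, 113))
fake_labels = rng.integers(0, 113, size=1000)
print(f"Top-1 accuracy: {top1_accuracy(fake_logits, fake_labels):.2%}")
```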
Results
Performance results of different models on this benchmark.
Comparison Table
Model Name | Top-1 Accuracy |
---|---|
on-mixup-regularization | 28.37 |
leveraging-background-augmentations-to | 20.8 |
optimizing-relevance-maps-of-vision | 47.1 |
pyramid-adversarial-training-improves-vit | 29.3 |
bamboo-building-mega-scale-vision-dataset | 53.9 |
robust-cross-modal-representation-learning | 15.24 |
pyramid-adversarial-training-improves-vit | 24.75 |
combined-scaling-for-zero-shot-transfer | 72.2 |
revisiting-weakly-supervised-pre-training-of | 64.3 |
revisiting-weakly-supervised-pre-training-of | 60 |
objectnet-a-large-scale-bias-controlled | 35.77 |
revisiting-weakly-supervised-pre-training-of | 69.5 |
measuring-the-interpretability-of | 17.71 |
leveraging-background-augmentations-to | 23.9 |
objectnet-a-large-scale-bias-controlled | 32.24 |
objectnet-a-large-scale-bias-controlled | 6.78 |
pyramid-adversarial-training-improves-vit | 29.95 |
leveraging-background-augmentations-to | 21.9 |
the-effectiveness-of-mae-pre-pretraining-for | 72.6 |
measuring-the-interpretability-of | 12.23 |
scaling-vision-transformers | 68.5 |
pyramid-adversarial-training-improves-vit | 30.11 |
pyramid-adversarial-training-improves-vit | 25.9 |
bamboo-building-mega-scale-vision-dataset | 38.8 |
pyramid-adversarial-training-improves-vit | 28.72 |
learning-transferable-visual-models-from | 72.3 |
data-determines-distributional-robustness-in | 18.70 |
pyramid-adversarial-training-improves-vit | 32.92 |
improving-robustness-against-common | 28.5 |
objectnet-a-large-scale-bias-controlled | 29.59 |
revisiting-weakly-supervised-pre-training-of | 48.9 |
matryoshka-representations-for-adaptive | 51.6 |
pyramid-adversarial-training-improves-vit | 34.83 |
discrete-representations-strengthen-vision-1 | 46.62 |
pyramid-adversarial-training-improves-vit | 29.41 |
optimizing-relevance-maps-of-vision | 39.3 |
lit-zero-shot-transfer-with-locked-image-text | 82.5 |
optimizing-relevance-maps-of-vision | 46.5 |
pyramid-adversarial-training-improves-vit | 30.98 |
measuring-the-interpretability-of | 20.61 |
pyramid-adversarial-training-improves-vit | 34.12 |
pushing-the-limits-of-self-supervised-resnets | 14.6 |
pyramid-adversarial-training-improves-vit | 17.36 |
vision-models-are-more-robust-and-fair-when | 60.2 |
measuring-the-interpretability-of | 19.73 |
dilemma-self-supervised-shape-and-texture | 20.51 |
optimizing-relevance-maps-of-vision | 43.2 |
pyramid-adversarial-training-improves-vit | 25.65 |
the-effectiveness-of-mae-pre-pretraining-for | 75.8 |
billion-scale-pretraining-with-vision | 50.7 |
pyramid-adversarial-training-improves-vit | 49.39 |
pyramid-adversarial-training-improves-vit | 28.6 |
combined-scaling-for-zero-shot-transfer | 82.3 |
pushing-the-limits-of-self-supervised-resnets | 25.9 |
billion-scale-pretraining-with-vision | 48.4 |
optimizing-relevance-maps-of-vision | 35.1 |
objectnet-a-large-scale-bias-controlled | 35.63 |
optimizing-relevance-maps-of-vision | 37.4 |
measuring-the-interpretability-of | 12.67 |
optimal-representations-for-covariate-shift-1 | 42.10 |
compressive-visual-representations | 20.8 |
billion-scale-pretraining-with-vision | 42.5 |
pyramid-adversarial-training-improves-vit | 30.28 |
eva-clip-improved-training-techniques-for | 79.6 |
a-whac-a-mole-dilemma-shortcuts-come-in | 60.78 |
pyramid-adversarial-training-improves-vit | 46.68 |
optimal-representations-for-covariate-shift-1 | 42.80 |
optimizing-relevance-maps-of-vision | 28.3 |
objectnet-a-large-scale-bias-controlled | 19.13 |
optimizing-relevance-maps-of-vision | 31.4 |
pyramid-adversarial-training-improves-vit | 37.41 |
robust-fine-tuning-of-zero-shot-models | 72.1 |
large-scale-learning-of-general-visual | 58.7 |
revisiting-weakly-supervised-pre-training-of | 57.3 |
billion-scale-pretraining-with-vision | 49.1 |
generative-interventions-for-causal-learning | 39.38 |
pyramid-adversarial-training-improves-vit | 35.59 |
the-effectiveness-of-mae-pre-pretraining-for | 77.9 |
compressive-visual-representations | 25.5 |
optimizing-relevance-maps-of-vision | 34.3 |
pyramid-adversarial-training-improves-vit | 47.53 |
pali-a-jointly-scaled-multilingual-language | 72.0 |
context-gated-convolution | 31.53 |
optimizing-relevance-maps-of-vision | 36.3 |
class-agnostic-object-detection | 13.2 |
optimizing-relevance-maps-of-vision | 41.4 |
scaling-vision-transformers | 70.53 |
large-scale-learning-of-general-visual | 47.0 |
improving-robustness-against-common | 29.2 |
recurrent-parameter-generators | 16.5 |
an-image-is-worth-16x16-words-transformers-1 | - |
optimizing-relevance-maps-of-vision | 31.6 |
coca-contrastive-captioners-are-image-text | 82.7 |
self-supervised-learning-for-large-scale | 4.92 |
pyramid-adversarial-training-improves-vit | 21.61 |
optimizing-relevance-maps-of-vision | 42.2 |
optimizing-relevance-maps-of-vision | 52.0 |
large-scale-learning-of-general-visual | 36.0 |
model-soups-averaging-weights-of-multiple | 79.03 |
model-soups-averaging-weights-of-multiple | 78.52 |
pushing-the-limits-of-self-supervised-resnets | 23.8 |
generative-interventions-for-causal-learning | 27.03 |
improving-robustness-against-common | 29.2 |
pushing-the-limits-of-self-supervised-resnets | 23 |
pyramid-adversarial-training-improves-vit | 39.79 |
measuring-the-interpretability-of | 12.64 |