Domain Generalization on VizWiz
Metrics
Accuracy - All Images
Accuracy - Clean Images
Accuracy - Corrupted Images
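As a minimal sketch of how these three metrics relate (the function name, data layout, and per-image corruption flag are illustrative assumptions, not the benchmark's actual evaluation code), the split accuracies can be computed as:

```python
def split_accuracies(preds, labels, corrupted):
    """Accuracy (%) over all images and over the clean/corrupted subsets.

    preds, labels: lists of predicted and ground-truth class ids.
    corrupted: list of booleans, True if the image is corrupted.
    (This layout is an assumption; the benchmark's own format may differ.)
    """
    def acc(rows):
        # fraction of rows where prediction matches the label, as a percentage
        return 100.0 * sum(p == y for p, y, _ in rows) / len(rows)

    rows = list(zip(preds, labels, corrupted))
    return {
        "all": acc(rows),
        "clean": acc([r for r in rows if not r[2]]),
        "corrupted": acc([r for r in rows if r[2]]),
    }

# toy example: 4 images (2 clean, 2 corrupted), 3 correct predictions
print(split_accuracies([0, 1, 1, 0], [0, 1, 0, 0],
                       [False, False, True, True]))
# → {'all': 75.0, 'clean': 100.0, 'corrupted': 50.0}
```

The "All Images" column is therefore not an average of the other two columns unless the clean and corrupted subsets are the same size.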
Results
Performance results of the different models on this benchmark.
Comparison table
Model name | Accuracy - All Images | Accuracy - Clean Images | Accuracy - Corrupted Images |
---|---|---|---|
efficientnet-rethinking-model-scaling-for | 41.7 | 46.4 | 35.6 |
190411486 | 35.6 | 39.5 | 28.5 |
adversarial-examples-improve-image | 48.1 | 51.4 | 42.5 |
190411486 | 41.0 | 45.8 | 34.8 |
190411486 | 41.5 | 45.3 | 35.2 |
deep-residual-learning-for-image-recognition | 46.3 | 50.1 | 40.5 |
aggregated-residual-transformations-for-deep | 51.7 | 54.8 | 48.1 |
imagenet-trained-cnns-are-biased-towards | 25.3 | 30.0 | 20.4 |
augmix-a-simple-data-processing-method-to | 42.2 | 46.4 | 35.9 |
efficientnet-rethinking-model-scaling-for | 38.1 | 42.8 | 31.4 |
very-deep-convolutional-networks-for-large | 34.7 | 39.5 | 28.5 |
190411486 | 38.3 | 42.8 | 32.4 |
190411486 | 38.7 | 42.7 | 32.0 |
measuring-robustness-to-natural-distribution | 38.8 | 43.5 | 32.5 |
adversarial-examples-improve-image | 49.1 | 51.7 | 44.0 |
measuring-robustness-to-natural-distribution | 35.7 | 39.6 | 30.2 |
adversarial-examples-improve-image | 49.6 | 53.2 | 44.7 |
very-deep-convolutional-networks-for-large | 32.9 | 37.1 | 25.8 |
autoaugment-learning-augmentation-strategies | 44.3 | 48.6 | 38.2 |
volo-vision-outlooker-for-visual-recognition | 57.2 | 59.7 | 51.8 |
measuring-robustness-to-natural-distribution | 36.5 | 40.9 | 30.7 |
an-image-is-worth-16x16-words-transformers-1 | - | 45.0 | - |
adversarial-examples-improve-image | 42.4 | 46.7 | 36.2 |
190411486 | 40.3 | 45.1 | 33.4 |
a-convnet-for-the-2020s | 53.5 | 56.0 | 46.9 |
autoaugment-learning-augmentation-policies | 34.9 | 40.1 | 27.3 |
190411486 | 22.7 | 26.8 | 18.4 |
deep-residual-learning-for-image-recognition | 42.9 | 47.7 | 37.1 |
adversarial-examples-improve-image | 50.5 | 53.2 | 45.8 |
190411486 | 40.0 | 44.7 | 34.3 |
deep-residual-learning-for-image-recognition | 47.5 | 51.3 | 43.3 |
190411486 | 34.5 | 39.4 | 27.8 |
190411486 | 38.3 | 43.1 | 31.7 |
autoaugment-learning-augmentation-policies | 39.7 | 44.4 | 32.8 |
measuring-robustness-to-natural-distribution | 36.4 | 40.6 | 30.2 |
the-many-faces-of-robustness-a-critical | 41.3 | 46 | 34.9 |
190411486 | 38.3 | 42.9 | 31.9 |
measuring-robustness-to-natural-distribution | 37.4 | 41.4 | 30.9 |
very-deep-convolutional-networks-for-large | 36.7 | 41.1 | 31.1 |
the-many-faces-of-robustness-a-critical | 40.3 | 44.5 | 34.1 |
adversarial-examples-improve-image | 44.3 | 48 | 38.2 |
190411486 | 37.2 | 41.8 | 31.3 |
very-deep-convolutional-networks-for-large | 32.4 | 36.5 | 26.4 |
very-deep-convolutional-networks-for-large | 31.5 | 36.1 | 25.2 |
autoaugment-learning-augmentation-strategies | 45.0 | 49.9 | 39.1 |
an-image-is-worth-16x16-words-transformers-1 | 49.0 | - | - |
190411486 | 37.2 | 42.5 | 29.9 |
very-deep-convolutional-networks-for-large | 36.2 | 40.8 | 29.4 |
autoaugment-learning-augmentation-strategies | 45.8 | 50.7 | 39.3 |
efficientnet-rethinking-model-scaling-for | 36.7 | 41.5 | 30.9 |
measuring-robustness-to-natural-distribution | 36.1 | 40.0 | 29.7 |
adversarial-examples-improve-image | 49.7 | 52.0 | 45.0 |
190411486 | 35.5 | 39.2 | 30.3 |
autoaugment-learning-augmentation-strategies | 45.7 | 50.2 | 39.8 |
measuring-robustness-to-natural-distribution | 38.2 | 42.4 | 32.4 |
autoaugment-learning-augmentation-policies | 41.6 | 45.8 | 34.3 |
190411486 | 35.8 | 40.1 | 29.1 |
imagenet-trained-cnns-are-biased-towards | 39.2 | 44.6 | 32.4 |
190411486 | 41.1 | 45.2 | 35.1 |
190411486 | 35.5 | 40.1 | 28.7 |
190411486 | 23.1 | 26.8 | 17.5 |
measuring-robustness-to-natural-distribution | 36.5 | 41.3 | 30.3 |
very-deep-convolutional-networks-for-large | 33.7 | 38.4 | 28.3 |
adversarial-training-for-free | 26.7 | 30.9 | 20.5 |
efficientnet-rethinking-model-scaling-for | 42.8 | 47.3 | 37.0 |
190411486 | 22.8 | 26.8 | 18.2 |
imagenet-trained-cnns-are-biased-towards | 38.2 | 42.7 | 32.5 |
resnet-strikes-back-an-improved-training | 48.9 | 44.4 | 39.1 |
very-deep-convolutional-networks-for-large | 34.7 | 39.3 | 29.0 |
measuring-robustness-to-natural-distribution | 38.8 | 42.9 | 33.6 |
efficientnet-rethinking-model-scaling-for | 40.7 | 45.3 | 34.2 |
measuring-robustness-to-natural-distribution | 35.9 | 39.9 | 30.3 |
190411486 | 33.5 | 38.5 | 26.7 |
randaugment-practical-data-augmentation-with | 42.1 | 47.3 | 35.5 |
adversarial-examples-improve-image | 40.5 | 44.9 | 34.2 |
measuring-robustness-to-natural-distribution | 38.3 | 42.7 | 31.4 |
efficientnet-rethinking-model-scaling-for | 34.2 | 38.4 | 27.4 |
measuring-robustness-to-natural-distribution | 30.2 | 34.3 | 24.3 |
190411486 | 41.7 | 46.1 | 35.7 |
autoaugment-learning-augmentation-policies | 42.6 | 47.5 | 34.9 |
190411486 | 36.9 | 42.1 | 30.6 |
190411486 | 38.3 | 42.8 | 32.3 |
bag-of-tricks-for-image-classification-with | 39.7 | 43.5 | 35.8 |
190411486 | 37.0 | 41.7 | 30.8 |
adversarial-examples-improve-image | 45.5 | 49.5 | 39.8 |
measuring-robustness-to-natural-distribution | 32.7 | 36.6 | 28.3 |
190411486 | 36.0 | 40.3 | 30.4 |
190411486 | 35.1 | 40 | 28.2 |
190411486 | 34.7 | 38.9 | 27.7 |
randaugment-practical-data-augmentation-with | 45.0 | 48.7 | 38.9 |