Fine-Grained Image Classification on Stanford
Metrics
Accuracy
PARAMS
Results
Performance results of various models on this benchmark
Comparison table
Model name | Accuracy | PARAMS |
---|---|---|
training-data-efficient-image-transformers | 93.3% | 86M |
on-the-eigenvalues-of-global-covariance | 94.6% | - |
selective-sparse-sampling-for-fine-grained | 94.7% | - |
fine-grained-visual-classification-via | 95.1% | - |
progressive-co-attention-network-for-fine | 94.6% | - |
advancing-fine-grained-classification-by | 95.72% | - |
three-things-everyone-should-know-about | 93.8% | - |
alignment-enhancement-network-for-fine | 94.0% | - |
resmlp-feedforward-networks-for-image | 84.6% | - |
sr-gnn-spatial-relation-aware-graph-neural | 96.1% | 30.9M |
densenets-reloaded-paradigm-shift-beyond | 93.9% | 24M |
resnet-strikes-back-an-improved-training | 92.7% | 24M |
non-binary-deep-transfer-learning-for | 95.35% | - |
see-better-before-looking-closer-weakly | 94.5% | - |
re-rank-coarse-classification-with-local | 95.5% | - |
multi-granularity-part-sampling-attention-for | 95.4% | - |
channel-interaction-networks-for-fine-grained-1 | 94.5% | - |
weakly-supervised-fine-grained-image-1 | 94.8% | - |
multiscale-patch-based-feature-graphs-for | 86.79% | - |
transfg-a-transformer-architecture-for-fine | 94.8% | - |
your-labrador-is-my-dog-fine-grained-or-not | 95.1% | - |
autoaugment-learning-augmentation-policies | 94.8% | - |
densenets-reloaded-paradigm-shift-beyond | 94.2% | 186M |
grad-cam-guided-channel-spatial-attention | 94.41% | - |
deep-cnns-with-spatially-weighted-pooling-for | 93.1% | - |
dual-cross-attention-learning-for-fine | 95.3% | - |
grafit-learning-fine-grained-image | 94.7% | - |
context-aware-attentional-pooling-cap-for | 95.7% | - |
autoformer-searching-transformers-for-visual | 93.4% | - |
three-branch-and-mutil-scale-learning-for | 95.0% | - |
vit-net-interpretable-vision-transformers | 95.0% | - |
efficientnet-rethinking-model-scaling-for | 94.7% | - |
neural-architecture-transfer | 92.9% | 3.7M |
neural-architecture-transfer | 92.2% | 2.7M |
learning-to-navigate-for-fine-grained | 93.9% | - |
towards-class-specific-unit | 95.2% | - |
looking-for-the-devil-in-the-details-learning | 93.8% | - |
densenets-reloaded-paradigm-shift-beyond | 94.2% | 50M |
learning-semantically-enhanced-feature-for | 94.0% | - |
align-yourself-self-supervised-pre-training | 89.76% | - |
advisingnets-learning-to-distinguish-correct | 91.06% | - |
progressive-multi-task-anti-noise-learning | 97.3% | - |
look-into-object-self-supervised-structure | 94.5% | - |
context-semantic-quality-awareness-network | 95.6% | - |
gpipe-efficient-training-of-giant-neural | 94.6% | - |
attention-convolutional-binary-neural-tree | 94.6% | - |
domain-adaptive-transfer-learning-with | 96.2% | - |
fixing-the-train-test-resolution-discrepancy | 94.4% | - |
contrastively-reinforced-attention | 94.8% | - |
towards-faster-training-of-global-covariance | 93.3% | - |
fine-grained-visual-classification-with | 95.6% | - |
learn-from-each-other-to-classify-better | 97.1% | - |
densenets-reloaded-paradigm-shift-beyond | 94.1% | 87M |
neural-architecture-transfer | 90.9% | 2.4M |
multi-attention-multi-class-constraint-for | 93.0% | - |
interweaving-insights-high-order-feature | 96.92% | - |
neural-architecture-transfer | 92.6% | 3.5M |
cross-x-learning-for-fine-grained-visual | 94.6% | - |
compounding-the-performance-improvements-of | 94.4% | - |
sharpness-aware-minimization-for-efficiently-1 | 95.96% | - |
learning-a-discriminative-filter-bank-within | 93.8% | - |
pairwise-confusion-for-fine-grained-visual | 92.86% | - |
learning-attentive-pairwise-interaction-for | 95.3% | - |
classification-specific-parts-for-improving | 92.5% | - |
vision-models-are-more-robust-and-fair-when | 68.03% | - |
attribute-mix-semantic-data-augmentation-for | 94.9% | - |
a-free-lunch-from-vit-adaptive-attention | 95.0% | - |
learning-multi-attention-convolutional-neural | 92.8% | - |
the-devil-is-in-the-channels-mutual-channel | 94.4% | - |
bamboo-building-mega-scale-vision-dataset | 93.9% | - |
graph-propagation-based-correlation-learning | 94.0% | - |
fine-grained-recognition-accounting-for | 94.9% | - |
penalizing-the-hard-example-but-not-too-much | 94.2% | - |
elope-fine-grained-visual-classification-with | 95.0% | - |
a-simple-episodic-linear-probe-improves | 94.2% | - |
counterfactual-attention-learning-for-fine | 95.5% | - |
resmlp-feedforward-networks-for-image | 89.5% | - |
scaling-up-visual-and-vision-language | 96.13% | - |
ml-decoder-scalable-and-versatile | 96.41% | - |
part-guided-relational-transformers-for-fine | 95.3% | - |
fine-grained-visual-classification-with-batch | 94.8% | - |