Image Classification on CIFAR-100
Metrics
Percentage correct (top-1 accuracy: the percentage of CIFAR-100 test images whose predicted class matches the ground-truth label)
Results
Performance results of the various models on this benchmark.
Comparison table
Model name | Percentage correct |
---|---|
stochastic-pooling-for-regularization-of-deep | 57.5 |
non-convex-learning-via-replica-exchange | 84.38 |
on-the-importance-of-normalisation-layers-in | 70.8 |
three-things-everyone-should-know-about | 93.0 |
deep-competitive-pathway-networks | 81.10 |
autoaugment-learning-augmentation-policies | 89.3 |
dianet-dense-and-implicit-attention-network | 76.98 |
Model 8 | 85.25 |
differentiable-spike-rethinking-gradient | 74.24 |
res2net-a-new-multi-scale-backbone | 83.44 |
expeditious-saliency-guided-mix-up-through | 83.97 |
pre-training-of-lightweight-vision | 78.27 |
expeditious-saliency-guided-mix-up-through | 80.75 |
pdo-econvs-partial-differential-operator | 73 |
cnn-filter-db-an-empirical-investigation-of | 75.59 |
large-scale-learning-of-general-visual | 92.17 |
learning-identity-mappings-with-residual | 81.73 |
sharpness-aware-minimization-for-efficiently-1 | 89.7 |
resmlp-feedforward-networks-for-image | 89.5 |
andhra-bandersnatch-training-neural-networks | 78.792 |
unsharp-masking-layer-injecting-prior | 60.36 |
sparseswin-swin-transformer-with-sparse | 85.35 |
gated-convolutional-networks-with-hybrid | 81.87 |
sagemix-saliency-guided-mixup-for-point | 80.16 |
sharpness-aware-quantization-for-deep-neural | 35.05 |
andhra-bandersnatch-training-neural-networks | 73.930 |
stochastic-optimization-of-plain | 72.96 |
incorporating-convolution-designs-into-visual | 89.4 |
sharpness-aware-minimization-for-efficiently-1 | 42.64 |
2003-13549 | 77.7 |
efficient-adaptive-ensembling-for-image | - |
when-vision-transformers-outperform-resnets | 86.4 |
andhra-bandersnatch-training-neural-networks | 82.784 |
deep-networks-with-stochastic-depth | 75.42 |
andhra-bandersnatch-training-neural-networks | 80.354 |
not-all-images-are-worth-16x16-words-dynamic | 89.63 |
expeditious-saliency-guided-mix-up-through | 85 |
wide-residual-networks | 81.15 |
reduction-of-class-activation-uncertainty | 93.31 |
an-evolutionary-approach-to-dynamic | 94.95 |
striving-for-simplicity-the-all-convolutional | 66.3 |
astroformer-more-data-might-not-be-all-you | 93.36 |
sharpness-aware-minimization-for-efficiently-1 | 96.08 |
bnn-bn-training-binary-neural-networks | 68.34 |
mixmo-mixing-multiple-inputs-for-multiple | 86.81 |
self-knowledge-distillation-a-simple-way-for | 86.41 |
revisiting-a-knn-based-image-classification | 81.7 |
resnet50-on-cifar-100-without-transfer | 67.060 |
learning-the-connections-in-direct-feedback | 66.78 |
cvt-introducing-convolutions-to-vision | 94.09 |
tresnet-high-performance-gpu-dedicated | 92.6 |
automatic-data-augmentation-via-invariance | 84.89 |
incorporating-convolution-designs-into-visual | 91.8 |
grafit-learning-fine-grained-image | 83.7 |
upanets-learning-from-the-universal-pixel | 80.29 |
network-in-network | 64.3 |
training-data-efficient-image-transformers | 90.8 |
expeditious-saliency-guided-mix-up-through | 82.16 |
hd-cnn-hierarchical-deep-convolutional-neural | 67.4 |
beta-rank-a-robust-convolutional-filter | 74.01 |
splitnet-divide-and-co-training | 87.44 |
stochastic-subsampling-with-average-pooling | 72.537 |
aggregating-nested-transformers | 82.56 |
when-vision-transformers-outperform-resnets | 89.1 |
single-bit-per-weight-deep-convolutional | 82.95 |
towards-principled-design-of-deep | 80.29 |
fatnet-high-resolution-kernels-for | 60 |
an-algorithm-for-routing-vectors-in-sequences | 93.8 |
all-you-need-is-a-good-init | 72.3 |
gated-convolutional-networks-with-hybrid | 84.04 |
with-a-little-help-from-my-friends-nearest | 79 |
splitnet-divide-and-co-training | 85.74 |
deep-feature-response-discriminative | 86.31 |
discriminative-transfer-learning-with-tree | 63.2 |
mixmo-mixing-multiple-inputs-for-multiple | 85.77 |
online-training-through-time-for-spiking | 71.05 |
wavemix-lite-a-resource-efficient-neural | 70.20 |
pdo-econvs-partial-differential-operator | 79.99 |
tokenmixup-efficient-attention-guided-token | 83.57 |
learning-the-connections-in-direct-feedback | 48.03 |
pdo-econvs-partial-differential-operator | 81.6 |
global-filter-networks-for-image | 90.3 |
gpipe-efficient-training-of-giant-neural | 91.3 |
maxout-networks | 61.43 |
generalizing-pooling-functions-in | 67.6 |
averaging-weights-leads-to-wider-optima-and | 82.15 |
Model 88 | 85.59 |
identity-mappings-in-deep-residual-networks | 77.3 |
incorporating-convolution-designs-into-visual | 88 |
efficientnetv2-smaller-models-and-faster | 92.3 |
non-convex-learning-via-replica-exchange | 82.95 |
improving-deep-neural-networks-with | 61.9 |
automix-unveiling-the-power-of-mixup | 85.16 |
resmlp-feedforward-networks-for-image | 87.0 |
mish-a-self-regularized-non-monotonic-neural | 74.41 |
resnet-strikes-back-an-improved-training | 86.9 |
automix-unveiling-the-power-of-mixup | 83.64 |
label-ranker-self-aware-preference-for | - |
densely-connected-convolutional-networks | 82.62 |
expeditious-saliency-guided-mix-up-through | 82.43 |
splitnet-divide-and-co-training | 89.46 |
convmlp-hierarchical-convolutional-mlps-for | 89.1 |
large-scale-learning-of-general-visual | 93.51 |
fast-and-accurate-deep-network-learning-by | 75.7 |
oriented-response-networks | 83.85 |
efficientnetv2-smaller-models-and-faster | 92.2 |
non-convex-learning-via-replica-exchange | 74.14 |
squeeze-and-excitation-networks | 84.59 |
im-loss-information-maximization-loss-for | 70.18 |
bamboo-building-mega-scale-vision-dataset | 90.2 |
escaping-the-big-data-paradigm-with-compact | 82.72 |
ml-decoder-scalable-and-versatile | 95.1 |
neural-architecture-transfer | 86.0 |
understanding-and-enhancing-mixed-sample-data | 83.95 |
vision-models-are-more-robust-and-fair-when | 81.53 |
deep-residual-networks-with-exponential | 73.5 |
expeditious-saliency-guided-mix-up-through | 81.79 |
learning-implicitly-recurrent-cnns-through | 82.57 |
fatnet-high-resolution-kernels-for | 66 |
rethinking-recurrent-neural-networks-and | 90.27 |
colornet-investigating-the-importance-of | 88.4 |
towards-class-specific-unit | 76.64 |
densely-connected-convolutional-networks | 82.82 |
fast-autoaugment | 88.3 |
going-deeper-with-image-transformers | 93.1 |
puzzle-mix-exploiting-saliency-and-local-1 | 84.05 |
expeditious-saliency-guided-mix-up-through | 82.3 |
Model 130 | 85.38 |
scalable-bayesian-optimization-using-deep | 72.6 |
automatic-data-augmentation-via-invariance | 81.19 |
convmlp-hierarchical-convolutional-mlps-for | 87.4 |
competitive-multi-scale-convolution | 72.4 |
momentum-residual-neural-networks | 76.38 |
pso-convolutional-neural-networks-with | 87.48 |
economical-ensembles-with-hypernetworks | 85.00 |
attend-and-rectify-a-gated-attention | 82.18 |
improving-neural-architecture-search-image | 85.42 |
stacked-what-where-auto-encoders | 69.1 |
non-convex-learning-via-replica-exchange | 76.55 |
convmlp-hierarchical-convolutional-mlps-for | 88.6 |
learning-the-connections-in-direct-feedback | 19.49 |
expeditious-saliency-guided-mix-up-through | 84.9 |
imagenet-21k-pretraining-for-the-masses | 94.2 |
empirical-evaluation-of-rectified-activations | 59.8 |
exact-how-to-train-your-accuracy | 82.68 |
how-important-is-weight-symmetry-in | 48.75 |
training-very-deep-networks | 67.8 |
selective-kernel-networks | 82.67 |
non-convex-learning-via-replica-exchange | 80.14 |
expeditious-saliency-guided-mix-up-through | 83.02 |
economical-ensembles-with-hypernetworks | 83.06 |
convolutional-xformers-for-vision | 60.11 |
lets-keep-it-simple-using-simple | 78.37 |
boosting-discriminative-visual-representation | 85.50 |
eeea-net-an-early-exit-evolutionary-neural | 84.98 |
polynomial-networks-in-deep-classifiers | 77.9 |
universum-prescription-regularization-using | 67.2 |
mixup-beyond-empirical-risk-minimization | 83.20 |
when-vision-transformers-outperform-resnets | 85.2 |
manifold-mixup-better-representations-by | 81.96 |
efficientnet-rethinking-model-scaling-for | 91.7 |
cutmix-regularization-strategy-to-train | 86.19 |
expeditious-saliency-guided-mix-up-through | 82.32 |
mixmatch-a-holistic-approach-to-semi | 74.1 |
transformer-in-transformer | 91.1 |
neural-architecture-transfer | 87.7 |
encoding-the-latent-posterior-of-bayesian | 76.85 |
muxconv-information-multiplexing-in | 86.1 |
update-in-unit-gradient | 93.95 |
andhra-bandersnatch-training-neural-networks | 80.830 |
averaging-weights-leads-to-wider-optima-and | 84.16 |
training-neural-networks-with-local-error | 79.9 |
deep-convolutional-decision-jungle-for-image | 69 |
regularizing-neural-networks-via-adversarial | 86.64 |
when-vision-transformers-outperform-resnets | 87.6 |
how-to-use-dropout-correctly-on-residual | 73.98 |
neural-architecture-transfer | 88.3 |
when-vision-transformers-outperform-resnets | 82.4 |
performance-of-gaussian-mixture-model | - |
batch-normalized-maxout-network-in-network | 71.1 |
enaet-self-trained-ensemble-autoencoding | 83.13 |
effect-of-large-scale-pre-training-on-full | 88.54 |
spectral-representations-for-convolutional | 68.4 |
sharpness-aware-minimization-for-efficiently-1 | 36.07 |
fractional-max-pooling | 73.6 |
asam-adaptive-sharpness-aware-minimization | 89.90 |
190409925 | 81.6 |
dlme-deep-local-flatness-manifold-embedding | 66.1 |
learning-activation-functions-to-improve-deep | 69.2 |
spatially-sparse-convolutional-neural | 75.7 |
wavemix-lite-a-resource-efficient-neural | 85.09 |
boosting-discriminative-visual-representation | 84.42 |
large-scale-evolution-of-image-classifiers | 77 |
regularizing-neural-networks-via-adversarial | 78.49 |
deep-convolutional-neural-networks-as-generic | 67.7 |
splitnet-divide-and-co-training | 86.90 |
escaping-the-big-data-paradigm-with-compact | 77.31 |
efficientnetv2-smaller-models-and-faster | 91.5 |
gated-convolutional-networks-with-hybrid | 83.46 |
neural-architecture-transfer | 87.5 |
on-the-performance-analysis-of-momentum | 81.44 |
expeditious-saliency-guided-mix-up-through | 81.49 |
deeply-supervised-nets | 65.4 |
gated-attention-coding-for-training-high | 80.45 |
pdo-econvs-partial-differential-operator | 72.87 |
grouped-pointwise-convolutions-reduce | 71.36 |
contextual-classification-using-self | 83.2 |
expeditious-saliency-guided-mix-up-through | 80.6 |
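Since the table above is not sorted, it can be convenient to parse and rank it programmatically. Below is a minimal sketch, assuming the table is saved verbatim to a hypothetical file named `cifar100_results.md`; the header row, the separator row, and entries without a reported score ("-") are skipped automatically because their second cell is not a number.

```python
# Minimal sketch: parse "name | score |" rows from the comparison table
# (assumed to be saved as the hypothetical file "cifar100_results.md")
# and print the ten highest-scoring models.

def load_results(path):
    results = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            cells = [c.strip() for c in line.split("|")]
            if len(cells) < 2:
                continue  # not a table row (e.g. a section header)
            try:
                score = float(cells[1])
            except ValueError:
                continue  # header row, "---" separator, or missing score ("-")
            results.append((cells[0], score))
    return results

if __name__ == "__main__":
    rows = load_results("cifar100_results.md")
    # Rank by "Percentage correct", highest first.
    for name, score in sorted(rows, key=lambda r: r[1], reverse=True)[:10]:
        print(f"{score:6.2f}  {name}")
```

Note that models listed with "-" carry no reported score on this metric, so they simply drop out of the ranking rather than being treated as zero.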