Image Classification on CIFAR-10
Metrics
Percentage correct
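For reference, "Percentage correct" is top-1 accuracy on the held-out CIFAR-10 test split (10,000 images), expressed as a percentage. The sketch below is not from this page; it assumes PyTorch/torchvision and a placeholder `model` standing in for any classifier from the table.

```python
# Minimal sketch (assumptions: PyTorch + torchvision installed, `model` is
# any trained CIFAR-10 classifier): computing "Percentage correct".
import torch
import torchvision
import torchvision.transforms as T

def percentage_correct(model, device="cpu"):
    # Official 10,000-image CIFAR-10 test split.
    test_set = torchvision.datasets.CIFAR10(
        root="./data", train=False, download=True, transform=T.ToTensor()
    )
    loader = torch.utils.data.DataLoader(test_set, batch_size=256)
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            # Top-1 prediction: the class with the highest logit.
            preds = model(images.to(device)).argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.size(0)
    return 100.0 * correct / total  # e.g. 99.5 for the best entries below
```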
Results
Performance results of the various models on this benchmark.
Comparison table
Model name | Percentage correct |
---|---|
andhra-bandersnatch-training-neural-networks | 95.900 |
beta-rank-a-robust-convolutional-filter | 93.97 |
imagenet-classification-with-deep | 89 |
reduction-of-class-activation-uncertainty | 99.05 |
vision-xformers-efficient-attention-for-image | 79.50 |
empirical-evaluation-of-rectified-activations | 88.8 |
trainable-activations-for-image | 90.5 |
deep-residual-networks-with-exponential | 94.4 |
incorporating-convolution-designs-into-visual | 99.1 |
update-in-unit-gradient | 99.13 |
when-vision-transformers-outperform-resnets | 97.4 |
large-scale-evolution-of-image-classifiers | 95.6 |
momentum-residual-neural-networks | 95.18 |
unsupervised-learning-using-pretrained-cnn | 83.1 |
an-evolutionary-approach-to-dynamic | 99.49 |
multi-column-deep-neural-networks-for-image | 88.8 |
spectral-representations-for-convolutional | 91.4 |
autodropout-learning-dropout-patterns-to | 97.9 |
ondev-lct-on-device-lightweight-convolutional | 86.04 |
astroformer-more-data-might-not-be-all-you | 99.12 |
renet-a-recurrent-neural-network-based | 87.7 |
grouped-pointwise-convolutions-reduce | 90.83 |
densenets-reloaded-paradigm-shift-beyond | 99.31 |
regularizing-neural-networks-via-adversarial | 98.02 |
discriminative-unsupervised-feature-learning-1 | 82 |
scalable-bayesian-optimization-using-deep | 93.6 |
dynamic-routing-between-capsules | 89.4 |
maxout-networks | 90.65 |
splitnet-divide-and-co-training | 98.38 |
apac-augmented-pattern-classification-with | 89.7 |
transformer-in-transformer | 99.1 |
convmlp-hierarchical-convolutional-mlps-for | 98.6 |
non-convex-learning-via-replica-exchange | 97.42 |
network-in-network | 91.2 |
global-filter-networks-for-image | 99.0 |
your-diffusion-model-is-secretly-a-zero-shot | 88.5 |
regularizing-neural-networks-via-adversarial | 96.03 |
when-vision-transformers-outperform-resnets | 96.1 |
an-analysis-of-unsupervised-pre-training-in | 86.7 |
competitive-multi-scale-convolution | 93.1 |
noisy-differentiable-architecture-search | 97.61 |
pso-convolutional-neural-networks-with | 98.31 |
lets-keep-it-simple-using-simple | 95.51 |
sparseswin-swin-transformer-with-sparse | 97.43 |
practical-bayesian-optimization-of-machine | 90.5 |
performance-of-gaussian-mixture-model | - |
large-scale-evolution-of-image-classifiers | 94.6 |
adaptive-split-fusion-transformer | 98.7 |
automatic-data-augmentation-via-invariance | 97.85 |
pdo-econvs-partial-differential-operator | 94.35 |
all-you-need-is-a-good-init | 94.2 |
an-optimized-toolbox-for-advanced-image | 82.8 |
vision-models-are-more-robust-and-fair-when | 90 |
training-very-deep-networks | 92.4 |
levit-a-vision-transformer-in-convnet-s | 97.5 |
convolutional-kernel-networks | 82.2 |
trainable-activations-for-image | 88.8 |
when-vision-transformers-outperform-resnets | 98.2 |
how-to-use-dropout-correctly-on-residual | 94.4367 |
mish-a-self-regularized-non-monotonic-neural | 94.05 |
levit-a-vision-transformer-in-convnet-s | 97.6 |
automatic-data-augmentation-via-invariance | 97.05 |
non-convex-learning-via-replica-exchange | 95.35 |
transboost-improving-the-best-imagenet | 97.61 |
dinov2-learning-robust-visual-features | 99.5 |
averaging-weights-leads-to-wider-optima-and | 97.12 |
sparse-networks-from-scratch-faster-training | 95.04 |
when-vision-transformers-outperform-resnets | 98.6 |
fast-denser-evolving-fully-trained-deep | 88.73 |
cutmix-regularization-strategy-to-train | 97.12 |
andhra-bandersnatch-training-neural-networks | 95.536 |
learning-with-recursive-perceptual | 79.7 |
trainable-activations-for-image | 86.5 |
deep-pyramidal-residual-networks | 96.69 |
pcanet-a-simple-deep-learning-baseline-for | 78.7 |
encoding-the-latent-posterior-of-bayesian | 95.02 |
when-vision-transformers-outperform-resnets | 98.2 |
batch-normalized-maxout-network-in-network | 93.3 |
fractional-max-pooling | 96.5 |
an-image-is-worth-16x16-words-transformers-1 | 99.5 |
economical-ensembles-with-hypernetworks | 96.81 |
densely-connected-convolutional-networks | 96.54 |
on-the-performance-analysis-of-momentum | 95.66 |
proxylessnas-direct-neural-architecture | 97.92 |
understanding-and-enhancing-mixed-sample-data | 98.64 |
neural-architecture-search-with-reinforcement | 96.4 |
unsupervised-representation-learning-with-1 | 82.8 |
personalized-federated-learning-with-hidden | 80.63 |
dlme-deep-local-flatness-manifold-embedding | 91.3 |
vision-xformers-efficient-attention-for-image | 75.26 |
preventing-manifold-intrusion-with-locality | 95.97 |
efficientnet-rethinking-model-scaling-for | 98.9 |
im-loss-information-maximization-loss-for | 95.49 |
noisy-differentiable-architecture-search | 98.28 |
distilled-gradual-pruning-with-pruned-fine | 92.90 |
learning-implicitly-recurrent-cnns-through | 97.47 |
when-vision-transformers-outperform-resnets | 97.8 |
context-aware-deep-model-compression-for-edge | 92.01 |
efficient-adaptive-ensembling-for-image | - |
stochastic-optimization-of-plain | 94.29 |
single-bit-per-weight-deep-convolutional | 96.71 |
striving-for-simplicity-the-all-convolutional | 95.6 |
ondev-lct-on-device-lightweight-convolutional | 87.65 |
identity-mappings-in-deep-residual-networks | 95.4 |
efficientnetv2-smaller-models-and-faster | 99.0 |
mixup-beyond-empirical-risk-minimization | 97.3 |
gradinit-learning-to-initialize-neural | 94.71 |
vision-xformers-efficient-attention-for-image | 83.36 |
averaging-weights-leads-to-wider-optima-and | 96.79 |
on-the-importance-of-normalisation-layers-in | 91.5 |
convmlp-hierarchical-convolutional-mlps-for | 98 |
trainable-activations-for-image | 90.9 |
mish-a-self-regularized-non-monotonic-neural | 92.02 |
wavemix-multi-resolution-token-mixing-for | 85.21 |
revisiting-a-knn-based-image-classification | 97.3 |
online-training-through-time-for-spiking | 93.73 |
universum-prescription-regularization-using | 93.3 |
levit-a-vision-transformer-in-convnet-s | 98 |
bamboo-building-mega-scale-vision-dataset | 98.2 |
deep-networks-with-internal-selective | 90.8 |
large-scale-learning-of-general-visual | 99.37 |
pdo-econvs-partial-differential-operator | 94.62 |
trainable-activations-for-image | 89.0 |
learning-hyperparameters-via-a-data | 98.2 |
grouped-pointwise-convolutions-reduce | 93.75 |
stacked-what-where-auto-encoders | 92.2 |
with-a-little-help-from-my-friends-nearest | 93.7 |
automix-unveiling-the-power-of-mixup | 97.84 |
pdo-econvs-partial-differential-operator | 96.32 |
knowledge-representing-efficient-sparse | 90.65 |
an-enhanced-scheme-for-reducing-the | 94.95 |
batchboost-regularization-for-stabilizing | 97.54 |
vision-xformers-efficient-attention-for-image | 83.26 |
training-data-efficient-image-transformers | 99.1 |
non-convex-learning-via-replica-exchange | 96.87 |
cnn-filter-db-an-empirical-investigation-of | 94.79 |
non-convex-learning-via-replica-exchange | 96.12 |
Model 138 | 95.32 |
vision-xformers-efficient-attention-for-image | 74 |
srm-a-style-based-recalibration-module-for | 95.05 |
neural-architecture-transfer | 98.2 |
vision-xformers-efficient-attention-for-image | 65.06 |
efficientnetv2-smaller-models-and-faster | 98.7 |
gated-convolutional-networks-with-hybrid | 97.86 |
rmdl-random-multimodel-deep-learning-for | 91.21 |
training-neural-networks-with-local-error | 96.4 |
threshold-pruning-tool-for-densely-connected | 86.34 |
upanets-learning-from-the-universal-pixel | 96.47 |
pre-training-of-lightweight-vision | 96.41 |
how-important-is-weight-symmetry-in | 80.98 |
towards-principled-design-of-deep | 96.29 |
threshnet-an-efficient-densenet-using | 86.69 |
wavemix-lite-a-resource-efficient-neural | 97.29 |
benchopt-reproducible-efficient-and | 95.55 |
augmented-neural-odes | 60.6 |
a-bregman-learning-framework-for-sparse | 92.3 |
squeeze-and-excitation-networks | 97.88 |
incorporating-convolution-designs-into-visual | 99 |
spatially-sparse-convolutional-neural | 93.7 |
splitnet-divide-and-co-training | 98.32 |
improving-deep-neural-networks-with | 90.6 |
sag-vit-a-scale-aware-high-fidelity-patching | - |
andhra-bandersnatch-training-neural-networks | 96.088 |
patches-are-all-you-need-1 | 96.74 |
selective-kernel-networks | 96.53 |
gated-convolutional-networks-with-hybrid | 96.85 |
convolutional-xformers-for-vision | 94.46 |
gated-convolutional-networks-with-hybrid | 97.71 |
three-things-everyone-should-know-about | 99.3 |
improving-neural-networks-by-preventing-co | 84.4 |
mixmatch-a-holistic-approach-to-semi | 95.05 |
neural-architecture-transfer | 98.4 |
levit-a-vision-transformer-in-convnet-s | 98.2 |
andhra-bandersnatch-training-neural-networks | 94.118 |
trainable-activations-for-image | 91.1 |
rethinking-recurrent-neural-networks-and | 98.52 |
enaet-self-trained-ensemble-autoencoding | 98.01 |
deep-convolutional-neural-networks-as-generic | 89.1 |
binaryconnect-training-deep-neural-networks | 91.7 |
gated-attention-coding-for-training-high | 96.46 |
densenets-reloaded-paradigm-shift-beyond | 98.88 |
neural-architecture-transfer | 97.9 |
deep-complex-networks | 94.4 |
non-convex-learning-via-replica-exchange | 94.62 |
deep-competitive-pathway-networks | 96.62 |
grouped-pointwise-convolutions-reduce | 92.74 |
trainable-activations-for-image | 90.4 |
densenets-reloaded-paradigm-shift-beyond | 99.31 |
splitnet-divide-and-co-training | 98.31 |
learning-local-discrete-features-in | 94.15 |
ondev-lct-on-device-lightweight-convolutional | 87.03 |
fast-autoaugment | 98.3 |
an-algorithm-for-routing-vectors-in-sequences | 99.2 |
xnodr-and-xnidr-two-accurate-and-fast-fully | 96.87 |
levit-a-vision-transformer-in-convnet-s | 98.1 |
deeply-supervised-nets | 91.8 |
fixup-initialization-residual-learning | 97.7 |
enhanced-image-classification-with-a-fast | 75.9 |
autodropout-learning-dropout-patterns-to | 96.8 |
mixmo-mixing-multiple-inputs-for-multiple | 97.73 |
evaluating-the-performance-of-taaf-for-image | 82.06 |
learning-activation-functions-to-improve-deep | 92.5 |
sneaky-spikes-uncovering-stealthy-backdoor | 68.3 |
flexconv-continuous-kernel-convolutions-with-1 | 92.2 |
Model 205 | 78.9 |
learning-identity-mappings-with-residual | 96.35 |
manifold-mixup-better-representations-by | 97.45 |
grouped-pointwise-convolutions-reduce | 89.81 |
aggregating-nested-transformers | 97.2 |
efficientnetv2-smaller-models-and-faster | 99.1 |
cvt-introducing-convolutions-to-vision | 99.39 |
economical-ensembles-with-hypernetworks | 97.45 |
ondev-lct-on-device-lightweight-convolutional | 85.73 |
deep-polynomial-neural-networks | 94.9 |
deep-networks-with-stochastic-depth | 94.77 |
connection-reduction-is-all-you-need | 86.64 |
on-the-relationship-between-self-attention-1 | 93.8 |
neural-architecture-transfer | 97.4 |
adaptive-split-fusion-transformer | 98.8 |
sample-efficient-neural-architecture-search-1 | 99.03 |
vision-xformers-efficient-attention-for-image | 76.9 |
tokenmixup-efficient-attention-guided-token | 97.78 |
exact-how-to-train-your-accuracy | 96.73 |
learning-in-wilson-cowan-model-for | 86.59 |
human-interpretable-ai-enhancing-tsetlin | 75.1 |
not-all-images-are-worth-16x16-words-dynamic | 98.53 |
effect-of-large-scale-pre-training-on-full | 97.82 |
stochastic-pooling-for-regularization-of-deep | 84.9 |
tresnet-high-performance-gpu-dedicated | 99 |
incorporating-convolution-designs-into-visual | 98.5 |
ondev-lct-on-device-lightweight-convolutional | 84.55 |
aggregated-pyramid-vision-transformer-split | 80.45 |
large-scale-learning-of-general-visual | 98.91 |
gpipe-efficient-training-of-giant-neural | 99 |
an-image-is-worth-16x16-words-transformers-1 | 99.42 |
going-deeper-with-image-transformers | 99.4 |
ondev-lct-on-device-lightweight-convolutional | 86.27 |
splitnet-divide-and-co-training | 98.71 |
ondev-lct-on-device-lightweight-convolutional | 86.64 |
pdo-econvs-partial-differential-operator | 96.5 |
towards-class-specific-unit | 95.33 |
asam-adaptive-sharpness-aware-minimization | 98.68 |
smoothnets-optimizing-cnn-architecture-design | 73.5 |
generalizing-pooling-functions-in | 94.0 |
triplenet-a-low-computing-power-platform-of | 87.03 |
unsupervised-representation-learning-with-1 | 80.6 |
stochastic-subsampling-with-average-pooling | 93.861 |
autoformer-searching-transformers-for-visual | 99.1 |
effect-of-large-scale-pre-training-on-full | 95.78 |
muxconv-information-multiplexing-in | 98.0 |
resnet-strikes-back-an-improved-training | 85.28 |
oriented-response-networks | 97.02 |
loss-sensitive-generative-adversarial | 91.7 |
fast-and-accurate-deep-network-learning-by | 93.5 |
andhra-bandersnatch-training-neural-networks | 96.378 |
escaping-the-big-data-paradigm-with-compact | 95.29 |
resnet-strikes-back-an-improved-training | 98.3 |
escaping-the-big-data-paradigm-with-compact | 98 |
convmlp-hierarchical-convolutional-mlps-for | 98.6 |
context-aware-compilation-of-dnn-training | 95.16 |
efficient-architecture-search-by-network | 94.6 |
bnn-bn-training-binary-neural-networks | 92.08 |
ondev-lct-on-device-lightweight-convolutional | 86.61 |
patches-are-all-you-need-1 | 96.03 |