Image Classification on CIFAR-10

Evaluation Metric

Percentage correct
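
Here, "Percentage correct" is top-1 accuracy on the 10,000-image CIFAR-10 test set: the share of test images whose predicted class matches the ground-truth label, expressed as a percentage. The snippet below is a minimal sketch of how such a score is computed, using PyTorch/torchvision as one common toolchain (not one mandated by the benchmark); the tiny linear model is a hypothetical stand-in for any trained classifier from the table.

```python
# Minimal sketch: computing "Percentage correct" (top-1 accuracy)
# on the CIFAR-10 test set. Assumes PyTorch + torchvision; the model
# below is an untrained placeholder, not any entry from this table.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# CIFAR-10 test split: 10,000 images, 32x32 RGB, 10 classes.
test_set = datasets.CIFAR10(
    root="./data", train=False, download=True,
    transform=transforms.ToTensor(),
)
test_loader = DataLoader(test_set, batch_size=256, shuffle=False)

# Placeholder model (assumption): substitute any trained classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

correct, total = 0, 0
with torch.no_grad():
    for images, labels in test_loader:
        logits = model(images)        # (batch, 10) class scores
        preds = logits.argmax(dim=1)  # top-1 predicted class
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print(f"Percentage correct: {100.0 * correct / total:.2f}")
```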

Evaluation Results

Performance of each model on this benchmark.

Comparison Table
Model Name | Percentage correct
andhra-bandersnatch-training-neural-networks | 95.900
beta-rank-a-robust-convolutional-filter | 93.97
imagenet-classification-with-deep | 89
reduction-of-class-activation-uncertainty | 99.05
vision-xformers-efficient-attention-for-image | 79.50
empirical-evaluation-of-rectified-activations | 88.8
trainable-activations-for-image | 90.5
deep-residual-networks-with-exponential | 94.4
incorporating-convolution-designs-into-visual | 99.1
update-in-unit-gradient | 99.13
when-vision-transformers-outperform-resnets | 97.4
large-scale-evolution-of-image-classifiers | 95.6
momentum-residual-neural-networks | 95.18
unsupervised-learning-using-pretrained-cnn | 83.1
an-evolutionary-approach-to-dynamic | 99.49
multi-column-deep-neural-networks-for-image | 88.8
spectral-representations-for-convolutional | 91.4
autodropout-learning-dropout-patterns-to | 97.9
ondev-lct-on-device-lightweight-convolutional | 86.04
astroformer-more-data-might-not-be-all-you | 99.12
renet-a-recurrent-neural-network-based | 87.7
grouped-pointwise-convolutions-reduce | 90.83
densenets-reloaded-paradigm-shift-beyond | 99.31
regularizing-neural-networks-via-adversarial | 98.02
discriminative-unsupervised-feature-learning-1 | 82
scalable-bayesian-optimization-using-deep | 93.6
dynamic-routing-between-capsules | 89.4
maxout-networks | 90.65
splitnet-divide-and-co-training | 98.38
apac-augmented-pattern-classification-with | 89.7
transformer-in-transformer | 99.1
convmlp-hierarchical-convolutional-mlps-for | 98.6
non-convex-learning-via-replica-exchange | 97.42
network-in-network | 91.2
global-filter-networks-for-image | 99.0
your-diffusion-model-is-secretly-a-zero-shot | 88.5
regularizing-neural-networks-via-adversarial | 96.03
when-vision-transformers-outperform-resnets | 96.1
an-analysis-of-unsupervised-pre-training-in | 86.7
competitive-multi-scale-convolution | 93.1
noisy-differentiable-architecture-search | 97.61
pso-convolutional-neural-networks-with | 98.31
lets-keep-it-simple-using-simple | 95.51
sparseswin-swin-transformer-with-sparse | 97.43
practical-bayesian-optimization-of-machine | 90.5
performance-of-gaussian-mixture-model- | –
large-scale-evolution-of-image-classifiers | 94.6
adaptive-split-fusion-transformer | 98.7
automatic-data-augmentation-via-invariance | 97.85
pdo-econvs-partial-differential-operator | 94.35
all-you-need-is-a-good-init | 94.2
an-optimized-toolbox-for-advanced-image | 82.8
vision-models-are-more-robust-and-fair-when | 90
training-very-deep-networks | 92.4
levit-a-vision-transformer-in-convnet-s | 97.5
convolutional-kernel-networks | 82.2
trainable-activations-for-image | 88.8
when-vision-transformers-outperform-resnets | 98.2
how-to-use-dropout-correctly-on-residual | 94.4367
mish-a-self-regularized-non-monotonic-neural | 94.05
levit-a-vision-transformer-in-convnet-s | 97.6
automatic-data-augmentation-via-invariance | 97.05
non-convex-learning-via-replica-exchange | 95.35
transboost-improving-the-best-imagenet | 97.61
dinov2-learning-robust-visual-features | 99.5
averaging-weights-leads-to-wider-optima-and | 97.12
sparse-networks-from-scratch-faster-training | 95.04
when-vision-transformers-outperform-resnets | 98.6
fast-denser-evolving-fully-trained-deep | 88.73
cutmix-regularization-strategy-to-train | 97.12
andhra-bandersnatch-training-neural-networks | 95.536
learning-with-recursive-perceptual | 79.7
trainable-activations-for-image | 86.5
deep-pyramidal-residual-networks | 96.69
pcanet-a-simple-deep-learning-baseline-for | 78.7
encoding-the-latent-posterior-of-bayesian | 95.02
when-vision-transformers-outperform-resnets | 98.2
batch-normalized-maxout-network-in-network | 93.3
fractional-max-pooling | 96.5
an-image-is-worth-16x16-words-transformers-1 | 99.5
economical-ensembles-with-hypernetworks | 96.81
densely-connected-convolutional-networks | 96.54
on-the-performance-analysis-of-momentum | 95.66
proxylessnas-direct-neural-architecture | 97.92
understanding-and-enhancing-mixed-sample-data | 98.64
neural-architecture-search-with-reinforcement | 96.4
unsupervised-representation-learning-with-1 | 82.8
personalized-federated-learning-with-hidden | 80.63
dlme-deep-local-flatness-manifold-embedding | 91.3
vision-xformers-efficient-attention-for-image | 75.26
preventing-manifold-intrusion-with-locality | 95.97
efficientnet-rethinking-model-scaling-for | 98.9
im-loss-information-maximization-loss-for | 95.49
noisy-differentiable-architecture-search | 98.28
distilled-gradual-pruning-with-pruned-fine | 92.90
learning-implicitly-recurrent-cnns-through | 97.47
when-vision-transformers-outperform-resnets | 97.8
context-aware-deep-model-compression-for-edge | 92.01
efficient-adaptive-ensembling-for-image- | –
stochastic-optimization-of-plain | 94.29
single-bit-per-weight-deep-convolutional | 96.71
striving-for-simplicity-the-all-convolutional | 95.6
ondev-lct-on-device-lightweight-convolutional | 87.65
identity-mappings-in-deep-residual-networks | 95.4
efficientnetv2-smaller-models-and-faster | 99.0
mixup-beyond-empirical-risk-minimization | 97.3
gradinit-learning-to-initialize-neural | 94.71
vision-xformers-efficient-attention-for-image | 83.36
averaging-weights-leads-to-wider-optima-and | 96.79
on-the-importance-of-normalisation-layers-in | 91.5
convmlp-hierarchical-convolutional-mlps-for | 98
trainable-activations-for-image | 90.9
mish-a-self-regularized-non-monotonic-neural | 92.02
wavemix-multi-resolution-token-mixing-for | 85.21
revisiting-a-knn-based-image-classification | 97.3
online-training-through-time-for-spiking | 93.73
universum-prescription-regularization-using | 93.3
levit-a-vision-transformer-in-convnet-s | 98
bamboo-building-mega-scale-vision-dataset | 98.2
deep-networks-with-internal-selective | 90.8
large-scale-learning-of-general-visual | 99.37
pdo-econvs-partial-differential-operator | 94.62
trainable-activations-for-image | 89.0
learning-hyperparameters-via-a-data | 98.2
grouped-pointwise-convolutions-reduce | 93.75
stacked-what-where-auto-encoders | 92.2
with-a-little-help-from-my-friends-nearest | 93.7
automix-unveiling-the-power-of-mixup | 97.84
pdo-econvs-partial-differential-operator | 96.32
knowledge-representing-efficient-sparse | 90.65
an-enhanced-scheme-for-reducing-the | 94.95
batchboost-regularization-for-stabilizing | 97.54
vision-xformers-efficient-attention-for-image | 83.26
training-data-efficient-image-transformers | 99.1
non-convex-learning-via-replica-exchange | 96.87
cnn-filter-db-an-empirical-investigation-of | 94.79
non-convex-learning-via-replica-exchange | 96.12
Model 138 | 95.32
vision-xformers-efficient-attention-for-image | 74
srm-a-style-based-recalibration-module-for | 95.05
neural-architecture-transfer | 98.2
vision-xformers-efficient-attention-for-image | 65.06
efficientnetv2-smaller-models-and-faster | 98.7
gated-convolutional-networks-with-hybrid | 97.86
rmdl-random-multimodel-deep-learning-for | 91.21
training-neural-networks-with-local-error | 96.4
threshold-pruning-tool-for-densely-connected | 86.34
upanets-learning-from-the-universal-pixel | 96.47
pre-training-of-lightweight-vision | 96.41
how-important-is-weight-symmetry-in | 80.98
towards-principled-design-of-deep | 96.29
threshnet-an-efficient-densenet-using | 86.69
wavemix-lite-a-resource-efficient-neural | 97.29
benchopt-reproducible-efficient-and | 95.55
augmented-neural-odes | 60.6
a-bregman-learning-framework-for-sparse | 92.3
squeeze-and-excitation-networks | 97.88
incorporating-convolution-designs-into-visual | 99
spatially-sparse-convolutional-neural | 93.7
splitnet-divide-and-co-training | 98.32
improving-deep-neural-networks-with | 90.6
sag-vit-a-scale-aware-high-fidelity-patching- | –
andhra-bandersnatch-training-neural-networks | 96.088
patches-are-all-you-need-1 | 96.74
selective-kernel-networks | 96.53
gated-convolutional-networks-with-hybrid | 96.85
convolutional-xformers-for-vision | 94.46
gated-convolutional-networks-with-hybrid | 97.71
three-things-everyone-should-know-about | 99.3
improving-neural-networks-by-preventing-co | 84.4
mixmatch-a-holistic-approach-to-semi | 95.05
neural-architecture-transfer | 98.4
levit-a-vision-transformer-in-convnet-s | 98.2
andhra-bandersnatch-training-neural-networks | 94.118
trainable-activations-for-image | 91.1
rethinking-recurrent-neural-networks-and | 98.52
enaet-self-trained-ensemble-autoencoding | 98.01
deep-convolutional-neural-networks-as-generic | 89.1
binaryconnect-training-deep-neural-networks | 91.7
gated-attention-coding-for-training-high | 96.46
densenets-reloaded-paradigm-shift-beyond | 98.88
neural-architecture-transfer | 97.9
deep-complex-networks | 94.4
non-convex-learning-via-replica-exchange | 94.62
deep-competitive-pathway-networks | 96.62
grouped-pointwise-convolutions-reduce | 92.74
trainable-activations-for-image | 90.4
densenets-reloaded-paradigm-shift-beyond | 99.31
splitnet-divide-and-co-training | 98.31
learning-local-discrete-features-in | 94.15
ondev-lct-on-device-lightweight-convolutional | 87.03
fast-autoaugment | 98.3
an-algorithm-for-routing-vectors-in-sequences | 99.2
xnodr-and-xnidr-two-accurate-and-fast-fully | 96.87
levit-a-vision-transformer-in-convnet-s | 98.1
deeply-supervised-nets | 91.8
fixup-initialization-residual-learning | 97.7
enhanced-image-classification-with-a-fast | 75.9
autodropout-learning-dropout-patterns-to | 96.8
mixmo-mixing-multiple-inputs-for-multiple | 97.73
evaluating-the-performance-of-taaf-for-image | 82.06
learning-activation-functions-to-improve-deep | 92.5
sneaky-spikes-uncovering-stealthy-backdoor | 68.3
flexconv-continuous-kernel-convolutions-with-1 | 92.2
Model 205 | 78.9
learning-identity-mappings-with-residual | 96.35
manifold-mixup-better-representations-by | 97.45
grouped-pointwise-convolutions-reduce | 89.81
aggregating-nested-transformers | 97.2
efficientnetv2-smaller-models-and-faster | 99.1
cvt-introducing-convolutions-to-vision | 99.39
economical-ensembles-with-hypernetworks | 97.45
ondev-lct-on-device-lightweight-convolutional | 85.73
deep-polynomial-neural-networks | 94.9
deep-networks-with-stochastic-depth | 94.77
connection-reduction-is-all-you-need | 86.64
on-the-relationship-between-self-attention-1 | 93.8
neural-architecture-transfer | 97.4
adaptive-split-fusion-transformer | 98.8
sample-efficient-neural-architecture-search-1 | 99.03
vision-xformers-efficient-attention-for-image | 76.9
tokenmixup-efficient-attention-guided-token | 97.78
exact-how-to-train-your-accuracy | 96.73
learning-in-wilson-cowan-model-for | 86.59
human-interpretable-ai-enhancing-tsetlin | 75.1
not-all-images-are-worth-16x16-words-dynamic | 98.53
effect-of-large-scale-pre-training-on-full | 97.82
stochastic-pooling-for-regularization-of-deep | 84.9
tresnet-high-performance-gpu-dedicated | 99
incorporating-convolution-designs-into-visual | 98.5
ondev-lct-on-device-lightweight-convolutional | 84.55
aggregated-pyramid-vision-transformer-split | 80.45
large-scale-learning-of-general-visual | 98.91
gpipe-efficient-training-of-giant-neural | 99
an-image-is-worth-16x16-words-transformers-1 | 99.42
going-deeper-with-image-transformers | 99.4
ondev-lct-on-device-lightweight-convolutional | 86.27
splitnet-divide-and-co-training | 98.71
ondev-lct-on-device-lightweight-convolutional | 86.64
pdo-econvs-partial-differential-operator | 96.5
towards-class-specific-unit | 95.33
asam-adaptive-sharpness-aware-minimization | 98.68
smoothnets-optimizing-cnn-architecture-design | 73.5
generalizing-pooling-functions-in | 94.0
triplenet-a-low-computing-power-platform-of | 87.03
unsupervised-representation-learning-with-1 | 80.6
stochastic-subsampling-with-average-pooling | 93.861
autoformer-searching-transformers-for-visual | 99.1
effect-of-large-scale-pre-training-on-full | 95.78
muxconv-information-multiplexing-in | 98.0
resnet-strikes-back-an-improved-training | 85.28
oriented-response-networks | 97.02
loss-sensitive-generative-adversarial | 91.7
fast-and-accurate-deep-network-learning-by | 93.5
andhra-bandersnatch-training-neural-networks | 96.378
escaping-the-big-data-paradigm-with-compact | 95.29
resnet-strikes-back-an-improved-training | 98.3
escaping-the-big-data-paradigm-with-compact | 98
convmlp-hierarchical-convolutional-mlps-for | 98.6
context-aware-compilation-of-dnn-training | 95.16
efficient-architecture-search-by-network | 94.6
bnn-bn-training-binary-neural-networks | 92.08
ondev-lct-on-device-lightweight-convolutional | 86.61
patches-are-all-you-need-1 | 96.03