HyperAI

Image Classification on CIFAR-100

Metrics

Percentage correct
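"Percentage correct" is top-1 accuracy expressed as a percentage: the share of test images whose predicted class matches the ground-truth label. As a minimal sketch (the function name and inputs are illustrative, not from the benchmark tooling):

```python
def percentage_correct(predictions, labels):
    """Top-1 accuracy as a percentage: fraction of predictions equal to labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# 3 of 4 predicted class indices match the ground truth
print(percentage_correct([12, 7, 3, 99], [12, 7, 3, 42]))  # 75.0
```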

Results

Performance results of various models on this benchmark.

Comparison Table
| Model Name | Percentage correct |
| --- | --- |
| stochastic-pooling-for-regularization-of-deep | 57.5 |
| non-convex-learning-via-replica-exchange | 84.38 |
| on-the-importance-of-normalisation-layers-in | 70.8 |
| three-things-everyone-should-know-about | 93.0 |
| deep-competitive-pathway-networks | 81.10 |
| autoaugment-learning-augmentation-policies | 89.3 |
| dianet-dense-and-implicit-attention-network | 76.98 |
| Model 8 | 85.25 |
| differentiable-spike-rethinking-gradient | 74.24 |
| res2net-a-new-multi-scale-backbone | 83.44 |
| expeditious-saliency-guided-mix-up-through | 83.97 |
| pre-training-of-lightweight-vision | 78.27 |
| expeditious-saliency-guided-mix-up-through | 80.75 |
| pdo-econvs-partial-differential-operator | 73 |
| cnn-filter-db-an-empirical-investigation-of | 75.59 |
| large-scale-learning-of-general-visual | 92.17 |
| learning-identity-mappings-with-residual | 81.73 |
| sharpness-aware-minimization-for-efficiently-1 | 89.7 |
| resmlp-feedforward-networks-for-image | 89.5 |
| andhra-bandersnatch-training-neural-networks | 78.792 |
| unsharp-masking-layer-injecting-prior | 60.36 |
| sparseswin-swin-transformer-with-sparse | 85.35 |
| gated-convolutional-networks-with-hybrid | 81.87 |
| sagemix-saliency-guided-mixup-for-point | 80.16 |
| sharpness-aware-quantization-for-deep-neural | 35.05 |
| andhra-bandersnatch-training-neural-networks | 73.930 |
| stochastic-optimization-of-plain | 72.96 |
| incorporating-convolution-designs-into-visual | 89.4 |
| sharpness-aware-minimization-for-efficiently-1 | 42.64 |
| 2003-13549 | 77.7 |
| efficient-adaptive-ensembling-for-image- | – |
| when-vision-transformers-outperform-resnets | 86.4 |
| andhra-bandersnatch-training-neural-networks | 82.784 |
| deep-networks-with-stochastic-depth | 75.42 |
| andhra-bandersnatch-training-neural-networks | 80.354 |
| not-all-images-are-worth-16x16-words-dynamic | 89.63 |
| expeditious-saliency-guided-mix-up-through | 85 |
| wide-residual-networks | 81.15 |
| reduction-of-class-activation-uncertainty | 93.31 |
| an-evolutionary-approach-to-dynamic | 94.95 |
| striving-for-simplicity-the-all-convolutional | 66.3 |
| astroformer-more-data-might-not-be-all-you | 93.36 |
| sharpness-aware-minimization-for-efficiently-1 | 96.08 |
| bnn-bn-training-binary-neural-networks | 68.34 |
| mixmo-mixing-multiple-inputs-for-multiple | 86.81 |
| self-knowledge-distillation-a-simple-way-for | 86.41 |
| revisiting-a-knn-based-image-classification | 81.7 |
| resnet50-on-cifar-100-without-transfer | 67.060 |
| learning-the-connections-in-direct-feedback | 66.78 |
| cvt-introducing-convolutions-to-vision | 94.09 |
| tresnet-high-performance-gpu-dedicated | 92.6 |
| automatic-data-augmentation-via-invariance | 84.89 |
| incorporating-convolution-designs-into-visual | 91.8 |
| grafit-learning-fine-grained-image | 83.7 |
| upanets-learning-from-the-universal-pixel | 80.29 |
| network-in-network | 64.3 |
| training-data-efficient-image-transformers | 90.8 |
| expeditious-saliency-guided-mix-up-through | 82.16 |
| hd-cnn-hierarchical-deep-convolutional-neural | 67.4 |
| beta-rank-a-robust-convolutional-filter | 74.01 |
| splitnet-divide-and-co-training | 87.44 |
| stochastic-subsampling-with-average-pooling | 72.537 |
| aggregating-nested-transformers | 82.56 |
| when-vision-transformers-outperform-resnets | 89.1 |
| single-bit-per-weight-deep-convolutional | 82.95 |
| towards-principled-design-of-deep | 80.29 |
| fatnet-high-resolution-kernels-for | 60 |
| an-algorithm-for-routing-vectors-in-sequences | 93.8 |
| all-you-need-is-a-good-init | 72.3 |
| gated-convolutional-networks-with-hybrid | 84.04 |
| fatnet-high-resolution-kernels-for | 60 |
| with-a-little-help-from-my-friends-nearest | 79 |
| splitnet-divide-and-co-training | 85.74 |
| deep-feature-response-discriminative | 86.31 |
| discriminative-transfer-learning-with-tree | 63.2 |
| mixmo-mixing-multiple-inputs-for-multiple | 85.77 |
| online-training-through-time-for-spiking | 71.05 |
| wavemix-lite-a-resource-efficient-neural | 70.20 |
| pdo-econvs-partial-differential-operator | 79.99 |
| tokenmixup-efficient-attention-guided-token | 83.57 |
| learning-the-connections-in-direct-feedback | 48.03 |
| pdo-econvs-partial-differential-operator | 81.6 |
| global-filter-networks-for-image | 90.3 |
| gpipe-efficient-training-of-giant-neural | 91.3 |
| maxout-networks | 61.43 |
| generalizing-pooling-functions-in | 67.6 |
| averaging-weights-leads-to-wider-optima-and | 82.15 |
| Model 88 | 85.59 |
| identity-mappings-in-deep-residual-networks | 77.3 |
| incorporating-convolution-designs-into-visual | 88 |
| efficientnetv2-smaller-models-and-faster | 92.3 |
| non-convex-learning-via-replica-exchange | 82.95 |
| improving-deep-neural-networks-with | 61.9 |
| automix-unveiling-the-power-of-mixup | 85.16 |
| resmlp-feedforward-networks-for-image | 87.0 |
| mish-a-self-regularized-non-monotonic-neural | 74.41 |
| resnet-strikes-back-an-improved-training | 86.9 |
| automix-unveiling-the-power-of-mixup | 83.64 |
| label-ranker-self-aware-preference-for- | – |
| densely-connected-convolutional-networks | 82.62 |
| expeditious-saliency-guided-mix-up-through | 82.43 |
| splitnet-divide-and-co-training | 89.46 |
| convmlp-hierarchical-convolutional-mlps-for | 89.1 |
| large-scale-learning-of-general-visual | 93.51 |
| fast-and-accurate-deep-network-learning-by | 75.7 |
| oriented-response-networks | 83.85 |
| efficientnetv2-smaller-models-and-faster | 92.2 |
| non-convex-learning-via-replica-exchange | 74.14 |
| squeeze-and-excitation-networks | 84.59 |
| im-loss-information-maximization-loss-for | 70.18 |
| bamboo-building-mega-scale-vision-dataset | 90.2 |
| escaping-the-big-data-paradigm-with-compact | 82.72 |
| ml-decoder-scalable-and-versatile | 95.1 |
| neural-architecture-transfer | 86.0 |
| understanding-and-enhancing-mixed-sample-data | 83.95 |
| vision-models-are-more-robust-and-fair-when | 81.53 |
| deep-residual-networks-with-exponential | 73.5 |
| expeditious-saliency-guided-mix-up-through | 81.79 |
| learning-implicitly-recurrent-cnns-through | 82.57 |
| fatnet-high-resolution-kernels-for | 66 |
| rethinking-recurrent-neural-networks-and | 90.27 |
| colornet-investigating-the-importance-of | 88.4 |
| towards-class-specific-unit | 76.64 |
| densely-connected-convolutional-networks | 82.82 |
| fast-autoaugment | 88.3 |
| going-deeper-with-image-transformers | 93.1 |
| puzzle-mix-exploiting-saliency-and-local-1 | 84.05 |
| expeditious-saliency-guided-mix-up-through | 82.3 |
| incorporating-convolution-designs-into-visual | 91.8 |
| Model 130 | 85.38 |
| scalable-bayesian-optimization-using-deep | 72.6 |
| automatic-data-augmentation-via-invariance | 81.19 |
| convmlp-hierarchical-convolutional-mlps-for | 87.4 |
| competitive-multi-scale-convolution | 72.4 |
| momentum-residual-neural-networks | 76.38 |
| pso-convolutional-neural-networks-with | 87.48 |
| economical-ensembles-with-hypernetworks | 85.00 |
| attend-and-rectify-a-gated-attention | 82.18 |
| improving-neural-architecture-search-image | 85.42 |
| stacked-what-where-auto-encoders | 69.1 |
| non-convex-learning-via-replica-exchange | 76.55 |
| convmlp-hierarchical-convolutional-mlps-for | 88.6 |
| learning-the-connections-in-direct-feedback | 19.49 |
| expeditious-saliency-guided-mix-up-through | 84.9 |
| imagenet-21k-pretraining-for-the-masses | 94.2 |
| empirical-evaluation-of-rectified-activations | 59.8 |
| exact-how-to-train-your-accuracy | 82.68 |
| how-important-is-weight-symmetry-in | 48.75 |
| training-very-deep-networks | 67.8 |
| selective-kernel-networks | 82.67 |
| non-convex-learning-via-replica-exchange | 80.14 |
| expeditious-saliency-guided-mix-up-through | 83.02 |
| economical-ensembles-with-hypernetworks | 83.06 |
| convolutional-xformers-for-vision | 60.11 |
| lets-keep-it-simple-using-simple | 78.37 |
| boosting-discriminative-visual-representation | 85.50 |
| eeea-net-an-early-exit-evolutionary-neural | 84.98 |
| polynomial-networks-in-deep-classifiers | 77.9 |
| universum-prescription-regularization-using | 67.2 |
| mixup-beyond-empirical-risk-minimization | 83.20 |
| when-vision-transformers-outperform-resnets | 85.2 |
| manifold-mixup-better-representations-by | 81.96 |
| efficientnet-rethinking-model-scaling-for | 91.7 |
| cutmix-regularization-strategy-to-train | 86.19 |
| expeditious-saliency-guided-mix-up-through | 82.32 |
| mixmatch-a-holistic-approach-to-semi | 74.1 |
| transformer-in-transformer | 91.1 |
| neural-architecture-transfer | 87.7 |
| encoding-the-latent-posterior-of-bayesian | 76.85 |
| muxconv-information-multiplexing-in | 86.1 |
| update-in-unit-gradient | 93.95 |
| andhra-bandersnatch-training-neural-networks | 80.830 |
| averaging-weights-leads-to-wider-optima-and | 84.16 |
| training-neural-networks-with-local-error | 79.9 |
| deep-convolutional-decision-jungle-for-image | 69 |
| regularizing-neural-networks-via-adversarial | 86.64 |
| when-vision-transformers-outperform-resnets | 87.6 |
| how-to-use-dropout-correctly-on-residual | 73.98 |
| neural-architecture-transfer | 88.3 |
| when-vision-transformers-outperform-resnets | 82.4 |
| performance-of-gaussian-mixture-model- | – |
| batch-normalized-maxout-network-in-network | 71.1 |
| enaet-self-trained-ensemble-autoencoding | 83.13 |
| effect-of-large-scale-pre-training-on-full | 88.54 |
| spectral-representations-for-convolutional | 68.4 |
| sharpness-aware-minimization-for-efficiently-1 | 36.07 |
| fractional-max-pooling | 73.6 |
| asam-adaptive-sharpness-aware-minimization | 89.90 |
| 190409925 | 81.6 |
| dlme-deep-local-flatness-manifold-embedding | 66.1 |
| learning-activation-functions-to-improve-deep | 69.2 |
| spatially-sparse-convolutional-neural | 75.7 |
| wavemix-lite-a-resource-efficient-neural | 85.09 |
| boosting-discriminative-visual-representation | 84.42 |
| large-scale-evolution-of-image-classifiers | 77 |
| regularizing-neural-networks-via-adversarial | 78.49 |
| deep-convolutional-neural-networks-as-generic | 67.7 |
| splitnet-divide-and-co-training | 86.90 |
| escaping-the-big-data-paradigm-with-compact | 77.31 |
| efficientnetv2-smaller-models-and-faster | 91.5 |
| gated-convolutional-networks-with-hybrid | 83.46 |
| neural-architecture-transfer | 87.5 |
| on-the-performance-analysis-of-momentum | 81.44 |
| expeditious-saliency-guided-mix-up-through | 81.49 |
| deeply-supervised-nets | 65.4 |
| gated-attention-coding-for-training-high | 80.45 |
| pdo-econvs-partial-differential-operator | 72.87 |
| grouped-pointwise-convolutions-reduce | 71.36 |
| contextual-classification-using-self | 83.2 |
| expeditious-saliency-guided-mix-up-through | 80.6 |