
Image Classification on MNIST

Evaluation Metric

Percentage error
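
Percentage error is the share of misclassified test images expressed as a percent, i.e. 100 × (1 − accuracy); on the 10,000-image MNIST test set, an error of 0.13 corresponds to 13 misclassified digits. A minimal sketch of the computation (the percentage_error helper below is illustrative, not part of the benchmark tooling):

```python
import numpy as np

def percentage_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Percentage of misclassified samples: 100 * (1 - accuracy)."""
    assert y_true.shape == y_pred.shape
    return 100.0 * float(np.mean(y_true != y_pred))

# Example: 13 wrong predictions on the 10,000-image MNIST test set,
# matching the best entry in the comparison table below.
y_true = np.zeros(10_000, dtype=int)
y_pred = y_true.copy()
y_pred[:13] = 1  # flip 13 predictions to simulate 13 errors
print(percentage_error(y_true, y_pred))  # -> 0.13
```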

Evaluation Results

Performance results of each model on this benchmark

Comparison Table
Model Name | Percentage error
multi-column-deep-neural-networks-for-image | 0.23
vision-models-are-more-robust-and-fair-when | 0.58
on-second-order-behaviour-in-augmented-neural | 0.37
enhanced-image-classification-with-a-fast | 0.4
pcanet-a-simple-deep-learning-baseline-for | 0.6
learning-in-wilson-cowan-model-for- | -
batch-normalized-maxout-network-in-network | 0.24
cnn-filter-db-an-empirical-investigation-of- | -
deep-fried-convnets | 0.7
exact-how-to-train-your-accuracy | 0.33
lets-keep-it-simple-using-simple | 0.25
network-in-network | 0.5
the-tsetlin-machine-a-game-theoretic-bandit | 1.8
spinalnet-deep-neural-network-with-gradual-1 | 0.28
spike-time-displacement-based-error- | -
explaining-and-harnessing-adversarial | 0.8
personalized-federated-learning-with-hidden- | -
learning-in-wilson-cowan-model-for- | -
rmdl-random-multimodel-deep-learning-for | 0.18
evaluating-the-performance-of-taaf-for-image | 0.48
competitive-multi-scale-convolution | 0.3
fkan-fractional-kolmogorov-arnold-networks- | -
dynamic-routing-between-capsules | 0.25
robust-training-in-high-dimensions-via-block- | -
trainable-activations-for-image | 3.0
sparse-activity-and-sparse-connectivity-in | 0.8
a-block-based-convolutional-neural-network- | -
performance-of-gaussian-mixture-model- | -
accelerating-spiking-neural-network-training- | -
an-evolutionary-approach-to-dynamic- | -
deep-convolutional-neural-networks-as-generic | 0.5
trainable-activations-for-image | 3.6
sparse-networks-from-scratch-faster-training | 1.26
regularization-of-neural-networks-using | 0.21
textcaps-handwritten-character-recognition | 0.29
a-branching-and-merging-convolutional-network | 0.13
binaryconnect-training-deep-neural-networks | 1.0
convolutional-sequence-to-sequence-learning | 1.41
apac-augmented-pattern-classification-with | 0.23
on-the-importance-of-normalisation-layers-in | 0.4
on-the-ideal-number-of-groups-for-isometric | 1.67
a-single-graph-convolution-is-all-you-need | 1.96
convolutional-clustering-for-unsupervised | 1.4
unsupervised-feature-learning-with-c-svddnet | 0.4
trainable-activations-for-image | 2.8
ensemble-learning-in-cnn-augmented-with-fully | 0.16
tensorizing-neural-networks | 1.8
neupde-neural-network-based-ordinary-and | 0.51
training-very-deep-networks | 0.5
training-neural-networks-with-local-error | 0.26
renet-a-recurrent-neural-network-based | 0.5
improved-training-speed-accuracy-and-data | 0.53
wavemix-resource-efficient-token-mixing-for | 0.29
generalizing-pooling-functions-in | 0.3
parametric-matrix-models | 2.62
learning-local-discrete-features-in | 0.20
xnodr-and-xnidr-two-accurate-and-fast-fully- | -
parametric-matrix-models | 1.01
the-weighted-tsetlin-machine-compressed | 1.5
fractional-max-pooling | 0.3
stacked-what-where-auto-encoders | 4.76
deeply-supervised-nets | 0.4
hybrid-orthogonal-projection-and-estimation | 0.4
the-convolutional-tsetlin-machine | 0.6
a-novel-lightweight-convolutional-neural | 0.29
projectionnet-learning-efficient-on-device | 5.0
stochastic-optimization-of-plain | 0.17
all-you-need-is-a-good-init | 0.4
diffprune-neural-network-pruning-with | 0.6
maxout-networks | 0.5
exploring-effects-of-hyperdimensional-vectors- | -
augmented-neural-odes | 0.37
the-backpropagation-algorithm-implemented-on- | -
improving-k-means-clustering-performance-with- | -
efficient-capsnet-capsule-network-with-self | 0.16
rkan-rational-kolmogorov-arnold-networks- | -
Model 77 | 0.5
augmented-neural-odes | 1.8
accelerating-spiking-neural-network-training- | -
convolutional-kernel-networks | 0.4