HyperAI
Neural Architecture Search on CIFAR-10
Metrics:
- Parameters
- Search Time (GPU days)
- Top-1 Error Rate
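The headline metric, Top-1 Error Rate, is the fraction of CIFAR-10 test images whose single highest-scoring predicted class differs from the true label. A minimal NumPy sketch (illustrative only; the function name and toy data are my own):

```python
import numpy as np

def top1_error_rate(logits, labels):
    """Fraction of samples whose highest-scoring class is not the true label."""
    preds = np.argmax(logits, axis=1)
    return float(np.mean(preds != labels))

# Toy illustration: 4 samples, 3 classes
logits = np.array([
    [0.1, 0.7, 0.2],   # predicts class 1
    [0.8, 0.1, 0.1],   # predicts class 0
    [0.2, 0.3, 0.5],   # predicts class 2 (wrong)
    [0.6, 0.3, 0.1],   # predicts class 0
])
labels = np.array([1, 0, 1, 0])
print(top1_error_rate(logits, labels))  # 0.25 — one of four wrong
```

On this benchmark the same quantity is reported as a percentage over the 10,000-image CIFAR-10 test set.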
Results
Performance results of various models on this benchmark
| Model Name | Parameters | Search Time (GPU days) | Top-1 Error Rate | Paper Title |
|---|---|---|---|---|
| GDAS | - | 0.21 | 3.4% | Searching for A Robust Neural Architecture in Four GPU Hours |
| Bonsai-Net | 2.9M | 0.10 | 3.35% | Bonsai-Net: One-Shot Neural Architecture Search via Differentiable Pruners |
| Net2 (2) | - | - | 3.3% | Efficacy of Neural Prediction-Based Zero-Shot NAS |
| μDARTS | - | 0.1 | 3.277% | μDARTS: Model Uncertainty-Aware Differentiable Architecture Search |
| NN-MASS-CIFAR-C | 3.82M | 0 | 3.18% | How does topology of neural architectures impact gradient propagation and model performance? |
| NN-MASS-CIFAR-A | 5.02M | 0 | 3.0% | How does topology of neural architectures impact gradient propagation and model performance? |
| DARTS (first order) | 3.3M | 1.5 | 3.0% | DARTS: Differentiable Architecture Search |
| NASGEP | - | 1 | 2.82% | Optimizing Neural Architecture Search using Limited GPU Time in a Dynamic Search Space: A Gene Expression Programming Approach |
| AlphaX-1 (cutout NASNet) | - | 224 | 2.82% | AlphaX: eXploring Neural Architectures with Deep Neural Networks and Monte Carlo Tree Search |
| DARTS (second order) | 3.3M | 4 | 2.76% | DARTS: Differentiable Architecture Search |
| SETN (T=1K) + CutOut | - | 1.8 | 2.69% | One-Shot Neural Architecture Search via Self-Evaluated Template Network |
| DARTS-PRIME | 3.7M | 0.5 | 2.62% | DARTS-PRIME: Regularization and Scheduling Improve Constrained Optimization in Differentiable NAS |
| NAT-M1 | 4.3M | 1.0 | 2.6% | Neural Architecture Transfer |
| PC-DARTS | 3.6M | 0.1 | 2.57% | PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search |
| arch2vec | 3.6M | 10.5 | 2.56% | Does Unsupervised Architecture Representation Learning Help Neural Architecture Search? |
| FairDARTS-a | 2.8M | 0.25 | 2.54% | Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search |
| MSR-DARTS | 4.0M | 0.3 | 2.54% | MSR-DARTS: Minimum Stable Rank of Differentiable Architecture Search |
| Soft Parameter Sharing | - | 0.7 | 2.53% | Learning Implicitly Recurrent CNNs Through Parameter Sharing |
| β-DARTS | - | - | 2.53% | β-DARTS: Beta-Decay Regularization for Differentiable Architecture Search |
| TNASP | 3.7M | 0.3 | 2.52% | TNASP: A Transformer-based NAS Predictor with a Self-evolution Framework |
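To compare entries programmatically (e.g. ranking by accuracy while keeping search cost visible), the leaderboard rows can be treated as plain data. A small sketch, with a few values copied from the table above and `None` marking blank cells:

```python
# (model, parameters, search_time_gpu_days, top1_error_pct); None = not reported
rows = [
    ("GDAS", None, 0.21, 3.4),
    ("DARTS (second order)", "3.3M", 4.0, 2.76),
    ("PC-DARTS", "3.6M", 0.1, 2.57),
    ("TNASP", "3.7M", 0.3, 2.52),
]

def rank_by_error(entries):
    """Sort leaderboard rows by top-1 error, best (lowest) first."""
    return sorted(entries, key=lambda r: r[3])

for model, params, days, err in rank_by_error(rows):
    print(f"{model:22s} {err:5.2f}%  {days} GPU days")
```

Sorting on the error column alone mirrors how the table above is ordered; adding search time as a secondary key would surface cost/accuracy trade-offs such as PC-DARTS reaching 2.57% in 0.1 GPU days.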