# Adversarial Robustness on AdvGLUE (SOTA | HyperAI)
Metric: Accuracy

Performance results of various models on this benchmark:

| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| DeBERTa (single model) | 0.6086 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| ELECTRA (single model) | 0.4169 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| T5 (single model) | 0.5682 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| SMART_RoBERTa (single model) | 0.5371 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| FreeLB (single model) | 0.5048 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| InfoBERT (single model) | 0.4603 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| ALBERT (single model) | 0.5922 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| BERT (single model) | 0.3369 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| RoBERTa (single model) | 0.5021 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
| SMART_BERT (single model) | 0.3029 | Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | - |
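The leaderboard lists the entries unranked. As a convenience, the following minimal sketch reproduces the (model, accuracy) pairs from the table above and sorts them to produce a ranking; the dictionary literal and the printed format are illustrative choices, not part of the HyperAI site.

```python
# Minimal sketch: rank the AdvGLUE leaderboard entries by accuracy.
# The (model, accuracy) pairs are copied from the table above.
results = {
    "DeBERTa": 0.6086,
    "ELECTRA": 0.4169,
    "T5": 0.5682,
    "SMART_RoBERTa": 0.5371,
    "FreeLB": 0.5048,
    "InfoBERT": 0.4603,
    "ALBERT": 0.5922,
    "BERT": 0.3369,
    "RoBERTa": 0.5021,
    "SMART_BERT": 0.3029,
}

# Sort descending by accuracy to produce a ranking.
ranking = sorted(results.items(), key=lambda kv: kv[1], reverse=True)

for rank, (model, acc) in enumerate(ranking, start=1):
    print(f"{rank:2d}. {model:15s} {acc:.4f}")
```

Sorting confirms DeBERTa as the top single model on this benchmark and SMART_BERT as the lowest.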