Natural Language Understanding on GLUE
Metrics: Average

Results
Performance results of various models on this benchmark.
Comparison Table

Model Name                                  | Average
smart-robust-and-efficient-fine-tuning-for | 89.9
bert-pre-training-of-deep-bidirectional    | 82.1
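The Average column summarizes a model's performance across the individual GLUE tasks as a single number. As a rough illustration of how such a score can be computed, the sketch below takes an unweighted mean over per-task scores; the task names and scores are hypothetical placeholders, not the values behind the 89.9 and 82.1 entries above, and the exact task set and per-task metric handling may differ from the official GLUE scoring.

# Simplified sketch: computing a GLUE-style "Average" from per-task scores.
# All task names and numbers below are illustrative placeholders.

from statistics import mean

# Hypothetical per-task scores for one model, each on a 0-100 scale.
per_task_scores = {
    "CoLA": 60.0,
    "SST-2": 94.0,
    "MRPC": 88.0,
    "STS-B": 89.0,
    "QQP": 91.0,
    "MNLI": 86.0,
    "QNLI": 92.0,
    "RTE": 70.0,
}

# Assumption: the leaderboard's "Average" is the unweighted mean over tasks.
average_score = mean(per_task_scores.values())
print(f"GLUE average: {average_score:.1f}")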