HyperAI
Natural Language Understanding
Natural Language Understanding On Glue
Metric: Average
Results
Performance results of various models on this benchmark
Model Name   | Average | Paper Title
MT-DNN-SMART | 89.9    | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
BERT-LARGE   | 82.1    | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
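The "Average" metric on this leaderboard is the unweighted mean of a model's scores across the individual GLUE tasks. A minimal sketch of that computation (the task names are the standard GLUE tasks, but the per-task scores below are illustrative placeholders, not the actual task-level results of the models above):

```python
# Sketch: the GLUE "Average" is the unweighted mean of per-task scores.
# Per-task numbers here are illustrative placeholders only.

def glue_average(task_scores: dict[str, float]) -> float:
    """Unweighted mean over all per-task scores."""
    return sum(task_scores.values()) / len(task_scores)

scores = {
    "CoLA": 65.0, "SST-2": 94.0, "MRPC": 89.0, "STS-B": 87.0,
    "QQP": 72.0, "MNLI": 86.0, "QNLI": 92.0, "RTE": 70.0, "WNLI": 65.0,
}
print(round(glue_average(scores), 1))  # → 80.0
```

Tasks whose official metric is itself a mean of two numbers (e.g. MRPC's accuracy/F1) contribute that combined figure as a single entry before averaging.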