Linguistic Acceptability on CoLA Dev
Metrics
Accuracy
Results
Performance results of various models on this benchmark
| Model | Accuracy | Paper Title |
|---|---|---|
| En-BERT + TDA | 88.6 | Acceptability Judgements via Examining the Topology of Attention Maps |
| XLM-R (pre-trained) + TDA | 73 | Acceptability Judgements via Examining the Topology of Attention Maps |
| DeBERTa (large) | 69.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| TinyBERT-6 67M | 54 | TinyBERT: Distilling BERT for Natural Language Understanding |
| Synthesizer (R+V) | 53.3 | Synthesizer: Rethinking Self-Attention in Transformer Models |
| En-BERT (pre-trained) + TDA | - | Acceptability Judgements via Examining the Topology of Attention Maps |