Linguistic Acceptability on CoLA (dev)
Evaluation Metric
Accuracy
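The Accuracy metric reported below is the fraction of sentences whose predicted acceptability label matches the gold label. A minimal sketch of that computation, using made-up labels (not actual CoLA data):

```python
def accuracy(predictions, gold):
    """Fraction of predictions that match the gold labels."""
    if len(predictions) != len(gold):
        raise ValueError("prediction and gold lists must be the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Illustrative labels: 1 = acceptable, 0 = unacceptable.
gold = [1, 1, 0, 0, 1]
predictions = [1, 0, 0, 0, 1]
print(f"Accuracy: {accuracy(predictions, gold):.1%}")  # 4 of 5 correct -> 80.0%
```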
Evaluation Results
Performance of each model on this benchmark:
| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| XLM-R (pre-trained) + TDA | 73 | Acceptability Judgements via Examining the Topology of Attention Maps | |
| TinyBERT-6 67M | 54 | TinyBERT: Distilling BERT for Natural Language Understanding | |
| En-BERT + TDA | 88.6 | Acceptability Judgements via Examining the Topology of Attention Maps | |
| En-BERT (pre-trained) + TDA | - | Acceptability Judgements via Examining the Topology of Attention Maps | |
| DeBERTa (large) | 69.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
| Synthesizer (R+V) | 53.3 | Synthesizer: Rethinking Self-Attention in Transformer Models | |