Linguistic Acceptability on CoLA Dev
Evaluation Metric
Accuracy
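Accuracy here is the fraction of dev-set sentences whose predicted binary acceptability label matches the gold label. A minimal sketch of the computation (the label lists below are hypothetical placeholders, not real CoLA data):

```python
# Minimal sketch: accuracy for binary acceptability judgements.
# Hypothetical placeholder labels, not real CoLA dev data.
gold = [1, 0, 1, 1, 0]  # 1 = acceptable, 0 = unacceptable
pred = [1, 0, 0, 1, 0]  # model predictions

accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
print(f"Accuracy: {accuracy:.1%}")  # -> Accuracy: 80.0%
```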
Evaluation Results
Performance of each model on this benchmark:
| Model | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| XLM-R (pre-trained) + TDA | 73 | Acceptability Judgements via Examining the Topology of Attention Maps | |
| TinyBERT-6 67M | 54 | TinyBERT: Distilling BERT for Natural Language Understanding | |
| En-BERT + TDA | 88.6 | Acceptability Judgements via Examining the Topology of Attention Maps | |
| En-BERT (pre-trained) + TDA | - | Acceptability Judgements via Examining the Topology of Attention Maps | |
| DeBERTa (large) | 69.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
| Synthesizer (R+V) | 53.3 | Synthesizer: Rethinking Self-Attention in Transformer Models | |
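To evaluate on the same split, the CoLA dev set can be loaded through the Hugging Face datasets library (a sketch, assuming the `datasets` package is installed):

```python
# Sketch: loading the CoLA validation (dev) split from the GLUE benchmark.
# Assumes the Hugging Face `datasets` package: pip install datasets
from datasets import load_dataset

cola_dev = load_dataset("glue", "cola", split="validation")
print(len(cola_dev))            # number of dev-set examples
print(cola_dev[0]["sentence"])  # an example sentence
print(cola_dev[0]["label"])     # 1 = acceptable, 0 = unacceptable
```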