Linguistic Acceptability on CoLA Dev
Evaluation Metric
Accuracy
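A minimal sketch of how accuracy is typically computed on the CoLA dev set is shown below. The dataset loading uses the HuggingFace `datasets` library's GLUE/CoLA split; `predict_fn` is a hypothetical stand-in for any of the classifiers listed in the table and is not part of any specific paper's code.

```python
# Minimal sketch: Accuracy on the CoLA dev set.
# `predict_fn` is a hypothetical callable mapping a sentence to a 0/1 label;
# only the GLUE/CoLA loading via the `datasets` library reflects a real API.
from datasets import load_dataset


def accuracy(predictions, references):
    """Fraction of examples where the predicted label matches the gold label."""
    correct = sum(int(p == r) for p, r in zip(predictions, references))
    return correct / len(references)


def evaluate_on_cola_dev(predict_fn):
    # CoLA dev examples carry a "sentence" and a binary "label"
    # (1 = linguistically acceptable, 0 = unacceptable).
    dev = load_dataset("glue", "cola", split="validation")
    preds = [predict_fn(example["sentence"]) for example in dev]
    golds = [example["label"] for example in dev]
    return accuracy(preds, golds)
```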
Evaluation Results
Performance of each model on this benchmark
Model Name | Accuracy | Paper Title | Repository |
---|---|---|---|
XLM-R (pre-trained) + TDA | 73 | Acceptability Judgements via Examining the Topology of Attention Maps | |
TinyBERT-6 67M | 54 | TinyBERT: Distilling BERT for Natural Language Understanding | |
En-BERT + TDA | 88.6 | Acceptability Judgements via Examining the Topology of Attention Maps | |
En-BERT (pre-trained) + TDA | - | Acceptability Judgements via Examining the Topology of Attention Maps | |
DeBERTa (large) | 69.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
Synthesizer (R+V) | 53.3 | Synthesizer: Rethinking Self-Attention in Transformer Models | |