Linguistic Acceptability on RuCoLA
Metrics
Accuracy
MCC
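Both metrics above can be computed directly from a model's binary acceptability predictions. The sketch below is a minimal, hypothetical illustration (the function names and sample labels are assumptions, not taken from the benchmark's evaluation code): accuracy is the fraction of correct predictions, while MCC (Matthews Correlation Coefficient) balances all four confusion-matrix cells and is more robust to class imbalance.

```python
# Minimal sketch of Accuracy and MCC for binary labels (1 = acceptable,
# 0 = unacceptable). Names and data are illustrative, not from RuCoLA.
from math import sqrt

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the gold labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mcc(y_true, y_pred):
    # Confusion-matrix counts.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # MCC is 0 by convention when any marginal count is zero.
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC ranges from -1 (total disagreement) through 0 (chance-level) to +1 (perfect prediction), which is why leaderboard MCC values (e.g. 0.13 vs 0.59) spread models apart more informatively than accuracy alone on imbalanced data.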
Results
Performance results of various models on this benchmark
| Model name | Accuracy | MCC | Paper Title | Repository |
|---|---|---|---|---|
| ruBERT | 74.3 | 0.42 | RuCoLA: Russian Corpus of Linguistic Acceptability | |
| RemBERT | 75.06 | 0.44 | RuCoLA: Russian Corpus of Linguistic Acceptability | |
| mBERT | - | 0.15 | RuCoLA: Russian Corpus of Linguistic Acceptability | |
| Ru-BERT+TDA | 80.1 | 0.478 | Can BERT eat RuCoLA? Topological Data Analysis to Explain | |
| ruGPT-3 | 53.82 | 0.30 | RuCoLA: Russian Corpus of Linguistic Acceptability | |
| Ru-RoBERTa+TDA | 85.7 | 0.594 | Can BERT eat RuCoLA? Topological Data Analysis to Explain | |
| XLM-R | 61.13 | 0.13 | RuCoLA: Russian Corpus of Linguistic Acceptability | |
| ruRoBERTa | 79.34 | 0.53 | RuCoLA: Russian Corpus of Linguistic Acceptability | |
| ruT5 | 68.41 | 0.25 | RuCoLA: Russian Corpus of Linguistic Acceptability | |