# Linguistic Acceptability on RuCoLA
## Metrics

- Accuracy
- MCC (Matthews Correlation Coefficient)
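Since MCC is the headline metric here, a minimal sketch of how it is computed for binary acceptability labels may help. This is a plain-Python illustration of the standard MCC formula, not code from the benchmark itself:

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0 = unacceptable, 1 = acceptable)."""
    # Tally the four confusion-matrix cells.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally 0 when any marginal is empty (undefined denominator).
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Example: 4 of 6 predictions correct, one error in each direction.
print(round(mcc([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1]), 4))
```

Unlike accuracy, MCC ranges from -1 to 1 and stays near 0 for a trivial majority-class classifier, which is why it is commonly preferred on acceptability corpora with imbalanced labels.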
## Results

Performance of various models on the RuCoLA benchmark.

### Comparison Table
| Model Name | Accuracy | MCC |
|---|---|---|
| rucola-russian-corpus-of-linguistic | 74.3 | 0.42 |
| rucola-russian-corpus-of-linguistic | 75.06 | 0.44 |
| rucola-russian-corpus-of-linguistic | - | 0.15 |
| can-bert-eat-rucola-topological-data-analysis | 80.1 | 0.478 |
| rucola-russian-corpus-of-linguistic | 53.82 | 0.30 |
| can-bert-eat-rucola-topological-data-analysis | 85.7 | 0.594 |
| rucola-russian-corpus-of-linguistic | 61.13 | 0.13 |
| rucola-russian-corpus-of-linguistic | 79.34 | 0.53 |
| rucola-russian-corpus-of-linguistic | 68.41 | 0.25 |