Natural Language Inference on LiDiRus
Metrics
MCC (Matthews correlation coefficient)
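MCC ranges from -1 (total disagreement) through 0 (chance level) to 1 (perfect agreement), which is why the rule-based baselines in the table below can score exactly 0. For two binary label vectors it is MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)). As a minimal sketch, assuming binary entailment labels (the label arrays here are hypothetical toy data, not LiDiRus predictions), it can be computed with scikit-learn:

```python
from sklearn.metrics import matthews_corrcoef

# Hypothetical gold labels and model predictions (1 = entailment, 0 = not entailment)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

# TP=3, TN=3, FP=1, FN=1  ->  (3*3 - 1*1) / sqrt(4*4*4*4) = 0.5
print(matthews_corrcoef(y_true, y_pred))  # prints 0.5 for this toy data
```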
Results
Performance results of various models on this benchmark.
Comparison table
| Model name | MCC |
| --- | --- |
| Model 1 | 0.218 |
| Model 2 | 0.209 |
| Model 3 | 0.01 |
| Model 4 | -0.013 |
| Model 5 | 0.178 |
| unreasonable-effectiveness-of-rule-based | 0 |
| Model 7 | 0.096 |
| unreasonable-effectiveness-of-rule-based | 0 |
| mt5-a-massively-multilingual-pre-trained-text | 0.061 |
| Model 10 | 0.231 |
| Model 11 | 0.224 |
| Model 12 | 0.124 |
| Model 13 | 0.189 |
| Model 14 | 0 |
| unreasonable-effectiveness-of-rule-based | 0.147 |
| Model 16 | 0.32 |
| Model 17 | 0.267 |
| Model 18 | 0.235 |
| Model 19 | 0.191 |
| Model 20 | 0.339 |
| russiansuperglue-a-russian-language | 0.626 |
| russiansuperglue-a-russian-language | 0.06 |