Common Sense Reasoning on RuCoS
Metrics
Average F1
EM (Exact Match)
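RuCoS is scored with the two metrics above: Exact Match (a binary hit on the gold answer) and token-level F1 (partial credit for word overlap). A minimal sketch of both, assuming whitespace tokenization and lowercase normalization; the official evaluation script may normalize text differently:

```python
from collections import Counter


def exact_match(prediction: str, answer: str) -> float:
    """1.0 if the normalized prediction equals the normalized answer, else 0.0."""
    return float(prediction.strip().lower() == answer.strip().lower())


def token_f1(prediction: str, answer: str) -> float:
    """Harmonic mean of token-level precision and recall between prediction and answer."""
    pred_tokens = prediction.strip().lower().split()
    ans_tokens = answer.strip().lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    overlap = sum((Counter(pred_tokens) & Counter(ans_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ans_tokens)
    return 2 * precision * recall / (precision + recall)
```

The "Average F1" column in the table below is then the mean of `token_f1` over all examples, and "EM" is the mean of `exact_match`.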
Results
Performance of various models on this benchmark.
Comparison table
| Model name | Average F1 | EM |
|---|---|---|
| Model 1 | 0.74 | 0.716 |
| Model 2 | 0.21 | 0.202 |
| Model 3 | 0.29 | 0.29 |
| Model 4 | 0.68 | 0.658 |
| Model 5 | 0.92 | 0.924 |
| Model 6 | 0.73 | 0.716 |
| Model 7 | 0.86 | 0.859 |
| Model 8 | 0.21 | 0.204 |
| Model 9 | 0.67 | 0.665 |
| mt5-a-massively-multilingual-pre-trained-text | 0.57 | 0.562 |
| russiansuperglue-a-russian-language | 0.93 | 0.89 |
| Model 12 | 0.79 | 0.752 |
| unreasonable-effectiveness-of-rule-based | 0.25 | 0.247 |
| Model 14 | 0.23 | 0.224 |
| russiansuperglue-a-russian-language | 0.26 | 0.252 |
| unreasonable-effectiveness-of-rule-based | 0.26 | 0.257 |
| Model 17 | 0.32 | 0.314 |
| Model 18 | 0.35 | 0.347 |
| Model 19 | 0.36 | 0.351 |
| Model 20 | 0.81 | 0.764 |
| Model 21 | 0.22 | 0.218 |
| unreasonable-effectiveness-of-rule-based | 0.25 | 0.247 |