Question Answering on DaNetQA
Metrics
Accuracy
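Accuracy here is the share of DaNetQA yes/no questions a model answers correctly. Below is a minimal sketch of that computation, assuming predictions and gold labels are given as parallel boolean lists; the names `predictions` and `labels` are illustrative, not from the benchmark's tooling.

```python
# Minimal sketch of the Accuracy metric for a yes/no QA task such as
# DaNetQA: the fraction of questions answered correctly.
# `predictions` and `labels` are hypothetical parallel lists of
# boolean yes/no answers (one entry per question).

def accuracy(predictions: list[bool], labels: list[bool]) -> float:
    """Fraction of predictions that match the gold labels."""
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Example: 3 of 4 answers correct -> 0.75
print(accuracy([True, False, True, True], [True, False, False, True]))
```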
Results
Performance of various models on this benchmark
Comparison Table
| Model | Accuracy |
| --- | --- |
| russiansuperglue-a-russian-language | 0.915 |
| Model 2 | 0.624 |
| Model 3 | 0.82 |
| mt5-a-massively-multilingual-pre-trained-text | 0.657 |
| unreasonable-effectiveness-of-rule-based | 0.503 |
| Model 6 | 0.634 |
| Model 7 | 0.639 |
| Model 8 | 0.61 |
| Model 9 | 0.773 |
| russiansuperglue-a-russian-language | 0.621 |
| Model 11 | 0.637 |
| Model 12 | 0.711 |
| unreasonable-effectiveness-of-rule-based | 0.52 |
| Model 14 | 0.59 |
| Model 15 | 0.606 |
| Model 16 | 0.697 |
| Model 17 | 0.917 |
| Model 18 | 0.604 |
| Model 19 | 0.675 |
| Model 20 | 0.732 |
| Model 21 | 0.712 |
| unreasonable-effectiveness-of-rule-based | 0.642 |