
Question Answering on DaNetQA

Metrics

Accuracy
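
DaNetQA is a binary yes/no question-answering task, so accuracy here is the share of questions for which the predicted yes/no label matches the gold label. A minimal sketch of the computation (the function and variable names are illustrative, not part of any HyperAI or Russian SuperGLUE API):

```python
def accuracy(predictions, gold_labels):
    """Fraction of yes/no predictions that match the gold labels."""
    if len(predictions) != len(gold_labels):
        raise ValueError("prediction and label lists must have equal length")
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Example: 3 of 4 yes/no answers correct -> accuracy 0.75
print(accuracy([True, False, True, True], [True, False, False, True]))
```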

Results

Performance results of various models on this benchmark

Comparison Table
Model Name                                        Accuracy
russiansuperglue-a-russian-language               0.915
Model 2                                           0.624
Model 3                                           0.82
mt5-a-massively-multilingual-pre-trained-text     0.657
unreasonable-effectiveness-of-rule-based          0.503
Model 6                                           0.634
Model 7                                           0.639
Model 8                                           0.61
Model 9                                           0.773
russiansuperglue-a-russian-language               0.621
Model 11                                          0.637
Model 12                                          0.711
unreasonable-effectiveness-of-rule-based          0.52
Model 14                                          0.59
Model 15                                          0.606
Model 16                                          0.697
Model 17                                          0.917
Model 18                                          0.604
Model 19                                          0.675
Model 20                                          0.732
Model 21                                          0.712
unreasonable-effectiveness-of-rule-based          0.642