
Natural Language Inference On Terra

Metrics

Accuracy

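The metric is plain classification accuracy: the share of premise–hypothesis pairs for which the predicted entailment label matches the gold label. Below is a minimal Python sketch of that computation, assuming TERRa's two-way labels ("entailment" / "not_entailment"); the function and the example data are illustrative and not the leaderboard's actual evaluation code.

def accuracy(predictions, gold_labels):
    """Fraction of predictions that exactly match the gold labels."""
    assert len(predictions) == len(gold_labels), "prediction/label count mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

if __name__ == "__main__":
    # Hypothetical predictions for four TERRa examples (labels assumed).
    preds = ["entailment", "not_entailment", "entailment", "entailment"]
    gold  = ["entailment", "not_entailment", "not_entailment", "entailment"]
    print(accuracy(preds, gold))  # prints 0.75
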
Results

Performance results of various models on this benchmark

Comparison table

Model name                                       Accuracy
Model 1                                          0.605
Model 2                                          0.488
Model 3                                          0.617
Model 4                                          0.703
Model 5                                          0.637
Model 6                                          0.642
mt5-a-massively-multilingual-pre-trained-text    0.561
Model 8                                          0.573
unreasonable-effectiveness-of-rule-based         0.483
Model 10                                         0.704
Model 11                                         0.637
Model 12                                         0.871
Model 13                                         0.64
Model 14                                         0.801
Model 15                                         0.747
unreasonable-effectiveness-of-rule-based         0.549
Model 17                                         0.692
russiansuperglue-a-russian-language              0.92
unreasonable-effectiveness-of-rule-based         0.513
Model 20                                         0.654
Model 21                                         0.505
russiansuperglue-a-russian-language              0.471