HyperAI

Natural Language Inference on RCB

Metrics

Accuracy
Average F1

Results

Performance results of various models on this benchmark

Comparison Table
| Model name | Accuracy | Average F1 |
| --- | --- | --- |
| Model 1 | 0.418 | 0.302 |
| Model 2 | 0.518 | 0.357 |
| Model 3 | 0.546 | 0.406 |
| Model 4 | 0.463 | 0.367 |
| Model 5 | 0.498 | 0.306 |
| russiansuperglue-a-russian-language | 0.702 | 0.68 |
| Model 7 | 0.509 | 0.333 |
| Model 8 | 0.484 | 0.417 |
| Model 9 | 0.473 | 0.356 |
| Model 10 | 0.447 | 0.408 |
| Model 11 | 0.452 | 0.371 |
| Model 12 | 0.445 | 0.367 |
| mt5-a-massively-multilingual-pre-trained-text | 0.454 | 0.366 |
| Model 14 | 0.5 | 0.356 |
| Model 15 | 0.486 | 0.351 |
| Model 16 | 0.468 | 0.307 |
| unreasonable-effectiveness-of-rule-based | 0.438 | 0.4 |
| unreasonable-effectiveness-of-rule-based | 0.374 | 0.319 |
| Model 19 | 0.461 | 0.372 |
| Model 20 | 0.484 | 0.452 |
| russiansuperglue-a-russian-language | 0.441 | 0.301 |
| unreasonable-effectiveness-of-rule-based | 0.484 | 0.217 |