Natural Language Inference on TERRa
Metrics
Accuracy
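Accuracy here is simply the share of test examples whose predicted label matches the gold label. Below is a minimal sketch of that computation, assuming predictions and gold labels arrive as parallel lists of TERRa's two label strings ("entailment" / "not_entailment"); the function name and the toy data are illustrative, not part of any official evaluation script.

```python
def accuracy(predictions, gold):
    """Fraction of examples where the predicted label matches the gold label."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must have the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy usage with hypothetical labels (TERRa is a two-class entailment task):
preds = ["entailment", "not_entailment", "entailment"]
gold  = ["entailment", "entailment", "entailment"]
print(accuracy(preds, gold))  # 0.666...
```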
Results
Performance results of various models on this benchmark
Comparison table
Model name | Accuracy |
---|---|
Model 1 | 0.605 |
Model 2 | 0.488 |
Model 3 | 0.617 |
Model 4 | 0.703 |
Model 5 | 0.637 |
Model 6 | 0.642 |
mt5-a-massively-multilingual-pre-trained-text | 0.561 |
Model 8 | 0.573 |
unreasonable-effectiveness-of-rule-based | 0.483 |
Model 10 | 0.704 |
Model 11 | 0.637 |
Model 12 | 0.871 |
Model 13 | 0.640 |
Model 14 | 0.801 |
Model 15 | 0.747 |
unreasonable-effectiveness-of-rule-based | 0.549 |
Model 17 | 0.692 |
russiansuperglue-a-russian-language | 0.920 |
unreasonable-effectiveness-of-rule-based | 0.513 |
Model 20 | 0.654 |
Model 21 | 0.505 |
russiansuperglue-a-russian-language | 0.471 |