Natural Language Inference on TERRa
Metrics
Accuracy: the fraction of examples for which the predicted label matches the gold label.
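The scores in the table below are plain classification accuracy. A minimal sketch of the computation, assuming string labels; the label names and predictions are illustrative assumptions, not benchmark data (TERRa is a binary entailment task):

```python
# Minimal sketch of the accuracy metric used on this leaderboard.
# The label strings and example predictions below are hypothetical.

def accuracy(predictions, gold_labels):
    """Fraction of examples where the predicted label equals the gold label."""
    if len(predictions) != len(gold_labels):
        raise ValueError("predictions and gold labels must be the same length")
    correct = sum(pred == gold for pred, gold in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Hypothetical predictions on four TERRa-style examples:
preds = ["entailment", "not_entailment", "entailment", "entailment"]
gold = ["entailment", "not_entailment", "not_entailment", "entailment"]
print(f"accuracy = {accuracy(preds, gold):.3f}")  # accuracy = 0.750
```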
Results
TERRa (Textual Entailment Recognition for Russian) is the natural language inference task of the Russian SuperGLUE benchmark. The table below lists the accuracy reported for each model on this benchmark.
Comparison Table
| Model Name | Accuracy |
|---|---|
| Model 1 | 0.605 |
| Model 2 | 0.488 |
| Model 3 | 0.617 |
| Model 4 | 0.703 |
| Model 5 | 0.637 |
| Model 6 | 0.642 |
| mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer | 0.561 |
| Model 8 | 0.573 |
| Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | 0.483 |
| Model 10 | 0.704 |
| Model 11 | 0.637 |
| Model 12 | 0.871 |
| Model 13 | 0.64 |
| Model 14 | 0.801 |
| Model 15 | 0.747 |
| Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | 0.549 |
| Model 17 | 0.692 |
| RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | 0.92 |
| Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | 0.513 |
| Model 20 | 0.654 |
| Model 21 | 0.505 |
| RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | 0.471 |