Natural Language Inference on TERRa

Metrics

Accuracy
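
Accuracy here is simply the fraction of examples whose predicted label matches the gold label; TERRa is a binary entailment task (entailment vs. not_entailment), so every prediction is one of two labels. A minimal sketch of the computation, assuming string labels (the function and variable names are illustrative, not from the benchmark's tooling):

```python
from typing import Sequence

def accuracy(predictions: Sequence[str], gold: Sequence[str]) -> float:
    """Fraction of predictions that exactly match the gold labels."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must have the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Illustrative labels only; TERRa uses binary entailment labels.
gold = ["entailment", "not_entailment", "entailment", "not_entailment"]
pred = ["entailment", "entailment", "entailment", "not_entailment"]
print(f"accuracy = {accuracy(pred, gold):.3f}")  # accuracy = 0.750
```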

Results

Accuracy achieved by various models on the TERRa benchmark

Comparison Table
| Model Name | Accuracy |
|---|---|
| Model 1 | 0.605 |
| Model 2 | 0.488 |
| Model 3 | 0.617 |
| Model 4 | 0.703 |
| Model 5 | 0.637 |
| Model 6 | 0.642 |
| mt5-a-massively-multilingual-pre-trained-text | 0.561 |
| Model 8 | 0.573 |
| unreasonable-effectiveness-of-rule-based | 0.483 |
| Model 10 | 0.704 |
| Model 11 | 0.637 |
| Model 12 | 0.871 |
| Model 13 | 0.64 |
| Model 14 | 0.801 |
| Model 15 | 0.747 |
| unreasonable-effectiveness-of-rule-based | 0.549 |
| Model 17 | 0.692 |
| russiansuperglue-a-russian-language | 0.92 |
| unreasonable-effectiveness-of-rule-based | 0.513 |
| Model 20 | 0.654 |
| Model 21 | 0.505 |
| russiansuperglue-a-russian-language | 0.471 |
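
Several entries sit near 0.5, which is roughly what a trivial baseline achieves on a balanced binary task like TERRa; the rule-based entries at 0.483-0.549 are consistent with this. A minimal sketch of a majority-class baseline, with synthetic labels standing in for the real TERRa splits (an assumption for illustration, not the benchmark's actual data):

```python
from collections import Counter
from typing import Sequence

def majority_class_accuracy(train_labels: Sequence[str],
                            eval_labels: Sequence[str]) -> float:
    """Accuracy of always predicting the most frequent training label."""
    majority = Counter(train_labels).most_common(1)[0][0]
    correct = sum(label == majority for label in eval_labels)
    return correct / len(eval_labels)

# Synthetic, roughly balanced labels standing in for the TERRa splits.
train = ["entailment"] * 52 + ["not_entailment"] * 48
dev = ["entailment"] * 25 + ["not_entailment"] * 25
print(f"majority-class accuracy = {majority_class_accuracy(train, dev):.3f}")
# Prints 0.500: about what the weakest rows in the table achieve.
```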