
Question Answering on DaNetQA

Metrics

Accuracy: the fraction of yes/no questions for which a model's predicted answer matches the gold answer. DaNetQA is a binary (yes/no) question-answering task over Russian-language passages, so plain accuracy is the benchmark's single reported metric.
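
As a quick illustration of how the metric is computed, here is a minimal Python sketch. The `accuracy` helper and the `gold`/`pred` lists are illustrative stand-ins, not part of any official DaNetQA evaluation code.

```python
def accuracy(gold, pred):
    """Fraction of examples whose predicted yes/no label matches the gold label."""
    if len(gold) != len(pred):
        raise ValueError("gold and pred must have the same length")
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)

# Hypothetical yes/no answers for five DaNetQA-style questions.
gold = [True, False, True, True, False]
pred = [True, False, False, True, False]

print(f"accuracy = {accuracy(gold, pred):.3f}")  # prints: accuracy = 0.800
```

On a roughly balanced yes/no split, constant or random guessing lands near 0.5, which is a useful reference point for the rule-based baselines in the table below.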

Results

Performance results of various models on this benchmark, reported as accuracy. A sketch of how a simple baseline score can be reproduced follows the comparison table.

Comparison Table
Model Name                                      Accuracy
russiansuperglue-a-russian-language             0.915
Model 2                                         0.624
Model 3                                         0.82
mt5-a-massively-multilingual-pre-trained-text   0.657
unreasonable-effectiveness-of-rule-based        0.503
Model 6                                         0.634
Model 7                                         0.639
Model 8                                         0.61
Model 9                                         0.773
russiansuperglue-a-russian-language             0.621
Model 11                                        0.637
Model 12                                        0.711
unreasonable-effectiveness-of-rule-based        0.52
Model 14                                        0.59
Model 15                                        0.606
Model 16                                        0.697
Model 17                                        0.917
Model 18                                        0.604
Model 19                                        0.675
Model 20                                        0.732
Model 21                                        0.712
unreasonable-effectiveness-of-rule-based        0.642
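
As referenced above, here is a minimal sketch of how a score of this kind can be reproduced for a trivial majority-class baseline. It assumes the `russian_super_glue` dataset with a `danetqa` configuration is available through the Hugging Face `datasets` library; the dataset id, split names, and `label` field are assumptions based on common RussianSuperGLUE packaging, not something this page confirms.

```python
# Minimal sketch: score a majority-class baseline on DaNetQA.
# Assumption: "russian_super_glue" / "danetqa" is loadable via the Hugging
# Face `datasets` library; the id and field names are unverified here.
from collections import Counter

from datasets import load_dataset

danetqa = load_dataset("russian_super_glue", "danetqa")

# Most frequent yes/no label in the training split.
majority = Counter(danetqa["train"]["label"]).most_common(1)[0][0]

# Score the constant prediction on the labeled validation split; official
# leaderboard numbers typically come from a held-out test split instead.
gold = danetqa["validation"]["label"]
correct = sum(label == majority for label in gold)
print(f"majority-class accuracy = {correct / len(gold):.3f}")
```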