# Question Answering on DaNetQA
## Metrics

Models are scored by **accuracy**: the fraction of yes/no questions answered correctly.
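For reference, a minimal sketch of the metric. The function name and example values below are illustrative, not part of the official benchmark tooling:

```python
from typing import Sequence

def accuracy(predictions: Sequence[bool], labels: Sequence[bool]) -> float:
    """Fraction of yes/no answers that match the gold labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must be the same length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Example: 3 of 4 answers correct -> 0.75
print(accuracy([True, False, True, True], [True, False, False, True]))
```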
## Results

Performance results of various models on this benchmark.

### Comparison Table

The table below is sorted by accuracy in descending order.
| Model Name | Accuracy |
|---|---|
| Model 17 | 0.917 |
| russiansuperglue-a-russian-language | 0.915 |
| Model 3 | 0.82 |
| Model 9 | 0.773 |
| Model 20 | 0.732 |
| Model 21 | 0.712 |
| Model 12 | 0.711 |
| Model 16 | 0.697 |
| Model 19 | 0.675 |
| mt5-a-massively-multilingual-pre-trained-text | 0.657 |
| unreasonable-effectiveness-of-rule-based | 0.642 |
| Model 7 | 0.639 |
| Model 11 | 0.637 |
| Model 6 | 0.634 |
| Model 2 | 0.624 |
| russiansuperglue-a-russian-language | 0.621 |
| Model 8 | 0.61 |
| Model 15 | 0.606 |
| Model 18 | 0.604 |
| Model 14 | 0.59 |
| unreasonable-effectiveness-of-rule-based | 0.52 |
| unreasonable-effectiveness-of-rule-based | 0.503 |