Common Sense Reasoning on PARus
Metrics
Accuracy
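PARus is a two-alternative (COPA-style) choice task, so accuracy is simply the fraction of examples where the model selects the correct alternative. A minimal sketch of the metric, using illustrative data rather than benchmark outputs and a hypothetical `accuracy` helper:

```python
# Minimal sketch of the Accuracy metric used on this leaderboard.
# For each premise the model picks one of two alternatives; accuracy
# is the fraction of correct picks. Data below is illustrative only.

def accuracy(predictions: list[int], gold: list[int]) -> float:
    """Fraction of predictions that match the gold labels."""
    assert len(predictions) == len(gold)
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

if __name__ == "__main__":
    preds = [0, 1, 1, 0, 1]   # model's chosen alternative per example
    labels = [0, 1, 0, 0, 1]  # gold alternative per example
    print(f"accuracy = {accuracy(preds, labels):.3f}")  # accuracy = 0.800
```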
Results
The table below reports the accuracy achieved by various models on the PARus benchmark.
Comparison Table
| Model Name | Accuracy |
|---|---|
| Model 1 | 0.574 |
| unreasonable-effectiveness-of-rule-based | 0.498 |
| russiansuperglue-a-russian-language | 0.486 |
| Model 4 | 0.908 |
| Model 5 | 0.508 |
| Model 6 | 0.766 |
| Model 7 | 0.528 |
| unreasonable-effectiveness-of-rule-based | 0.478 |
| Model 9 | 0.598 |
| Model 10 | 0.508 |
| Model 11 | 0.584 |
| mt5-a-massively-multilingual-pre-trained-text | 0.504 |
| unreasonable-effectiveness-of-rule-based | 0.480 |
| Model 14 | 0.562 |
| russiansuperglue-a-russian-language | 0.982 |
| Model 16 | 0.492 |
| Model 17 | 0.660 |
| Model 18 | 0.498 |
| Model 19 | 0.498 |
| Model 20 | 0.476 |
| Model 21 | 0.676 |
| Model 22 | 0.554 |
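For context, chance performance on a two-alternative task is about 0.5, which is consistent with the cluster of scores around 0.476-0.508 in the table. A minimal sketch of a random baseline, using synthetic labels rather than real PARus data:

```python
# Hypothetical random baseline for a two-choice task like PARus.
# With two alternatives per example, chance accuracy is roughly 0.5.
import random

random.seed(0)
gold = [random.randint(0, 1) for _ in range(500)]         # stand-in labels
preds = [random.randint(0, 1) for _ in range(len(gold))]  # random guesses
acc = sum(p == g for p, g in zip(preds, gold)) / len(gold)
print(f"random-baseline accuracy ~ {acc:.3f}")  # lands near 0.5
```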