Multimodal Reasoning on REBUS
Metrics
Accuracy
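Accuracy here is the percentage of rebus puzzles for which a model's predicted answer matches the gold answer. A minimal sketch of such a scorer, assuming exact string matching after light normalization (the benchmark's own scoring rules may differ):

```python
from typing import List

def accuracy(predictions: List[str], answers: List[str]) -> float:
    """Percentage of predictions that exactly match the gold answer,
    after normalizing case and surrounding whitespace."""
    assert len(predictions) == len(answers) and answers
    correct = sum(
        p.strip().lower() == a.strip().lower()
        for p, a in zip(predictions, answers)
    )
    return 100.0 * correct / len(answers)

# Example: 1 of 2 puzzles solved -> 50.0
print(accuracy(["in the heat of the moment", "castle"],
               ["in the heat of the moment", "sand castle"]))
```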
Results
Accuracy of various models on this benchmark.
Comparison Table
Model Name | Accuracy (%) |
---|---|
rebus-a-robust-evaluation-benchmark-of-1 | 0.6 |
rebus-a-robust-evaluation-benchmark-of-1 | 0.9 |
rebus-a-robust-evaluation-benchmark-of-1 | 0.9 |
rebus-a-robust-evaluation-benchmark-of-1 | 1.8 |
rebus-a-robust-evaluation-benchmark-of-1 | 13.2 |
rebus-a-robust-evaluation-benchmark-of-1 | 1.5 |
rebus-a-robust-evaluation-benchmark-of-1 | 0.9 |
rebus-a-robust-evaluation-benchmark-of-1 | 24.0 |