Question Answering on DROP
Metrics
Accuracy
Results
Accuracy results reported by various models on this benchmark
Comparison Table
| Model Name | Accuracy (%) |
|---|---|
| large-language-models-can-self-improve | 78.2 |
| large-language-models-can-self-improve | 83.0 |
| large-language-models-can-self-improve | 71.7 |
| large-language-models-can-self-improve | 60.0 |
| large-language-models-can-self-improve | 70.6 |
| large-language-models-can-self-improve | 76.2 |