# Conversational Question Answering on ConvFinQA

## Metrics
- **Execution Accuracy** — the fraction of examples for which executing the predicted reasoning program yields the gold answer.
- **Program Accuracy** — the fraction of examples for which the predicted program matches the gold program.
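The two metrics above can be sketched in code. The program format, operation names, and helper functions below are illustrative assumptions for the sketch, not the benchmark's official evaluation implementation.

```python
# Hedged sketch of the two metrics. Assumes (hypothetically) that a program
# is a list of (op, a, b) steps, where an argument "#k" refers to the result
# of step k. This format is an assumption, not the benchmark's official one.

OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def execute(program):
    """Run a program step by step and return the last step's result."""
    results = []
    for op, a, b in program:
        def resolve(x):
            # "#k" references an earlier step's result; otherwise parse a number.
            if isinstance(x, str) and x.startswith("#"):
                return results[int(x[1:])]
            return float(x)
        results.append(OPS[op](resolve(a), resolve(b)))
    return results[-1]

def execution_accuracy(pred_programs, gold_answers, tol=1e-5):
    """Fraction of predictions whose executed result matches the gold answer."""
    hits = sum(abs(execute(p) - g) <= tol
               for p, g in zip(pred_programs, gold_answers))
    return hits / len(gold_answers)

def program_accuracy(pred_programs, gold_programs):
    """Fraction of predicted programs identical to the gold program."""
    hits = sum(p == g for p, g in zip(pred_programs, gold_programs))
    return hits / len(gold_programs)
```

For example, the program `[("subtract", "100", "80"), ("divide", "#0", "80")]` computes (100 - 80) / 80 = 0.25, so a prediction that executes to 0.25 against a gold answer of 0.25 counts toward execution accuracy even if its steps differ from the gold program.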
## Results

Performance of various models on this benchmark:
| Model Name | Execution Accuracy (%) | Program Accuracy (%) | Paper Title | Repository |
|---|---|---|---|---|
| APOLLO | 78.76 | 77.19 | APOLLO: An Optimized Training Approach for Long-form Numerical Reasoning | |
| FinQANet (RoBERTa-large) | 68.90 | 68.24 | ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering | |