Text Generation on DailyDialog
Metrics
BLEU-1
BLEU-2
BLEU-3
BLEU-4
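The scores below are n-gram overlap metrics between generated responses and reference replies. As a rough illustration only, the following sketch computes corpus-level BLEU-1 through BLEU-4 with NLTK; the benchmark's exact evaluation script, tokenization, and smoothing settings are not specified here, so treat the uniform cumulative n-gram weights and the smoothing choice as assumptions.

```python
# Minimal sketch, assuming NLTK is installed and responses are
# whitespace-tokenized; not the benchmark's official evaluation code.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_n(references, hypotheses, n):
    """Corpus-level BLEU-n with uniform weights over 1..n-grams (assumed convention)."""
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1  # avoids zero scores on short dialogue turns
    return corpus_bleu(references, hypotheses, weights=weights,
                       smoothing_function=smooth)

# Toy example: one generated reply against a single reference reply.
references = [[["i", "am", "fine", "thank", "you"]]]  # one list of references per hypothesis
hypotheses = [["i", "am", "fine", "thanks"]]
for n in range(1, 5):
    print(f"BLEU-{n}: {100 * bleu_n(references, hypotheses, n):.2f}")
```

Reported leaderboard numbers are typically multiplied by 100, as done in the print statement above.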
Results
Performance of models reported on this benchmark
Comparison Table
| Model Name | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 |
|---|---|---|---|---|
| an-auto-encoder-matching-model-for-learning | 14.17 | 5.69 | 3.78 | 2.84 |