Text Generation On Chinese Poems
Metrics
BLEU-2
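BLEU-2 is the geometric mean of unigram and bigram modified precisions, scaled by a brevity penalty. As a hedged sketch (not the exact evaluation script used by this benchmark; the character-level tokenization and the example poem line are illustrative assumptions), the metric can be computed as:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    # All contiguous n-grams of the token sequence
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(cand, refs, n):
    # Clipped n-gram precision: each candidate n-gram counts at most
    # as often as it appears in the most generous reference
    cand_counts = Counter(ngrams(cand, n))
    if not cand_counts:
        return 0.0
    max_ref = Counter()
    for ref in refs:
        for g, c in Counter(ngrams(ref, n)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
    return clipped / sum(cand_counts.values())

def bleu2(cand, refs):
    # Geometric mean of 1-gram and 2-gram precisions, with brevity penalty
    p1 = modified_precision(cand, refs, 1)
    p2 = modified_precision(cand, refs, 2)
    if p1 == 0.0 or p2 == 0.0:
        return 0.0
    c = len(cand)
    # Reference length closest to the candidate length
    r = min((len(ref) for ref in refs), key=lambda rl: abs(rl - c))
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(0.5 * (math.log(p1) + math.log(p2)))

# Chinese poems are typically tokenized per character for BLEU
candidate = list("春眠不觉晓")
reference = [list("春眠不觉晓")]
print(bleu2(candidate, reference))  # a perfect match scores 1.0
```

Libraries such as NLTK (`nltk.translate.bleu_score.sentence_bleu` with weights `(0.5, 0.5)`) implement the same computation with additional smoothing options.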
Results
Performance of models evaluated on this benchmark, reported as BLEU-2 (higher is better).
Comparison Table
| Model Name | BLEU-2 |
|---|---|
| long-text-generation-via-adversarial-training | 0.456 |
| adversarial-ranking-for-language-generation | 0.812 |
| seqgan-sequence-generative-adversarial-nets | 0.738 |