Text Generation on COCO Captions
Metrics
BLEU-2
BLEU-3
BLEU-4
BLEU-5
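The scores below are n-gram overlap metrics; the following is a minimal sketch of how BLEU-2 through BLEU-5 are commonly computed with NLTK, assuming uniform n-gram weights and tokenized sentences. The example captions are placeholders, not data from this benchmark.

```python
# Sketch: computing BLEU-2..BLEU-5 with NLTK (uniform n-gram weights).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical generated sentence and reference captions (tokenized).
references = [
    [["a", "man", "rides", "a", "horse", "on", "the", "beach"],
     ["a", "person", "riding", "a", "horse", "near", "the", "ocean"]],
]
hypotheses = [
    ["a", "man", "riding", "a", "horse", "on", "the", "beach"],
]

smooth = SmoothingFunction().method1
for n in range(2, 6):
    # BLEU-n averages precision over n-gram orders 1..n with equal weights.
    weights = tuple(1.0 / n for _ in range(n))
    score = corpus_bleu(references, hypotheses, weights=weights,
                        smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```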
Results
Performance results of various models on this benchmark
Comparison table
Model name | BLEU-2 | BLEU-3 | BLEU-4 | BLEU-5 |
---|---|---|---|---|
long-text-generation-via-adversarial-training | 0.950 | 0.880 | 0.778 | 0.686 |
seqgan-sequence-generative-adversarial-nets | 0.831 | 0.642 | 0.521 | 0.427 |
long-text-generation-via-adversarial-training | 0.910 | 0.713 | 0.753 | 0.590 |
relgan-relational-generative-adversarial | 0.849 | 0.687 | 0.502 | - |
adversarial-ranking-for-language-generation | 0.850 | 0.672 | 0.557 | 0.544 |