Image Captioning On Scicap
Evaluation Metric
BLEU-4
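BLEU-4 measures overlap between a generated caption and the reference caption as the geometric mean of clipped 1- to 4-gram precisions, scaled by a brevity penalty. Below is a minimal single-reference sketch in pure Python for illustration; the exact evaluation script used for this benchmark may differ (e.g. corpus-level aggregation or smoothing).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 against a single reference:
    geometric mean of clipped 1-4-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, 5):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each n-gram's count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing: any zero precision gives BLEU-4 = 0
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(
        1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

A caption identical to its reference scores 1.0, while captions sharing no 4-grams with the reference score 0 under this unsmoothed variant, which is why scores on hard captioning tasks such as SciCap sit near 0.02.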
Evaluation Results
Performance of each model on this benchmark:
| Model | BLEU-4 | Paper Title | Repository |
|---|---|---|---|
| CNN+LSTM (Text only, Caption w/ <=100 words) | 0.0165 | SciCap: Generating Captions for Scientific Figures | |
| CNN+LSTM (Vision + Text, Caption w/ <=100 words) | 0.0168 | SciCap: Generating Captions for Scientific Figures | |
| CNN+LSTM (Vision only, Single-Sent Caption) | 0.0207 | SciCap: Generating Captions for Scientific Figures | |
| CNN+LSTM (Text only, Single-Sent Caption) | 0.0212 | SciCap: Generating Captions for Scientific Figures | |
| CNN+LSTM (Vision + Text, First sentence) | 0.0205 | SciCap: Generating Captions for Scientific Figures | |
| CNN+LSTM (Text only, First sentence) | 0.0213 | SciCap: Generating Captions for Scientific Figures | |
| CNN+LSTM (Vision only, First sentence) | 0.0219 | SciCap: Generating Captions for Scientific Figures | |
| CNN+LSTM (Vision only, Caption w/ <=100 words) | 0.0172 | SciCap: Generating Captions for Scientific Figures | |
| CNN+LSTM (Vision + Text, Single-Sent Caption) | 0.0202 | SciCap: Generating Captions for Scientific Figures | |