Image Captioning
Image captioning aims to accurately describe the content of an input image in natural language. The task draws on both computer vision and natural language processing, typically using an encoder-decoder framework: an encoder transforms the image into an intermediate representation, which a decoder then turns into descriptive text. The primary evaluation metrics are BLEU and CIDEr, and common benchmarks include nocaps and COCO. Image captioning has significant application value in areas such as assisting visually impaired individuals in understanding images, automated content tagging, and intelligent image search.
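As a concrete illustration of this encoder-decoder pipeline, the sketch below pairs a CNN encoder with an LSTM decoder in PyTorch. It is a minimal sketch, not any particular published model: the ResNet-18 backbone and all dimensions (embed_dim, hidden_dim, vocab_size) are illustrative assumptions.

```python
# Minimal encoder-decoder captioning sketch (assumes torch + torchvision).
import torch
import torch.nn as nn
import torchvision.models as models


class Encoder(nn.Module):
    """CNN encoder: maps an image to a fixed-size feature vector."""
    def __init__(self, embed_dim: int):
        super().__init__()
        resnet = models.resnet18(weights=None)  # pretrained weights optional
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop FC head
        self.fc = nn.Linear(resnet.fc.in_features, embed_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images).flatten(1)   # (B, 512)
        return self.fc(feats)                      # (B, embed_dim)


class Decoder(nn.Module):
    """LSTM decoder: generates a caption conditioned on the image feature."""
    def __init__(self, vocab_size: int, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feat: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        # Prepend the image feature as the first "token" of the sequence.
        tokens = self.embed(captions)                           # (B, T, E)
        inputs = torch.cat([image_feat.unsqueeze(1), tokens], dim=1)
        hidden, _ = self.lstm(inputs)                           # (B, T+1, H)
        return self.out(hidden)                                 # (B, T+1, V) logits


# Shape check with dummy data (hypothetical sizes).
enc = Encoder(embed_dim=256)
dec = Decoder(vocab_size=10_000, embed_dim=256, hidden_dim=512)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 10_000, (2, 15))
logits = dec(enc(images), captions)
print(logits.shape)  # torch.Size([2, 16, 10000])
```

Training typically fits the decoder with teacher forcing against reference captions; at inference time, captions are produced token by token with greedy or beam-search decoding.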
Benchmarks for image captioning, with the best-performing model where one is listed:

| Benchmark | Best model |
| --- | --- |
| AIC-ICC | |
| BanglaLekhaImageCaptions | CNN + 1D CNN |
| ChEBI-20 | GIT-Mol |
| MS COCO | ExpansionNet v2 |
| COCO Captions | VAST |
| COCO Captions test | From Captions to Visual Concepts and Back |
| Conceptual Captions | ClipCap (MLP + GPT2 tuning) |
| Flickr30k Captions test | Unified VLP |
| FlickrStyle10K | CapDec |
| foundation-multimodal-models/DetailCaps-4870 | |
| IU X-Ray | |
| Localized Narratives | |
| MS-COCO | NeuSyRE |
| MSCOCO | CapDec |
| nocaps entire | |
| nocaps in-domain | VinVL (Microsoft Cognitive Services + MSR) |
| nocaps near-domain | GIT2, Single Model |
| nocaps out-of-domain | PaLI |
| nocaps val | Prismer |
| nocaps-val-in-domain | |
| nocaps-val-near-domain | |
| nocaps-val-out-domain | |
| nocaps-val-overall | |
| nocaps-XD entire | GIT2 |
| nocaps-XD in-domain | GIT2 |
| nocaps-XD near-domain | GIT2 |
| nocaps-XD out-of-domain | GIT2 |
| Object HalBench | |
| Peir Gross | BiomedGPT |
| SCICAP | CNN+LSTM (Vision only, First sentence) |
| TextCaps 2020 | |
| VizWiz 2020 test | |
| VizWiz 2020 test-dev | |
| WHOOPS! | |
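Of the two metrics named above, BLEU is straightforward to compute with off-the-shelf tooling. The sketch below uses NLTK's corpus_bleu on made-up captions purely for illustration; a real evaluation would read reference captions from the benchmark's annotation files (e.g., COCO or nocaps), and CIDEr scores usually come from the separate COCO caption evaluation toolkit.

```python
# Hedged sketch of corpus-level BLEU-4 scoring with NLTK (example data only).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One list of reference captions per image, each tokenized into words.
references = [
    [["a", "dog", "runs", "on", "the", "beach"],
     ["a", "dog", "running", "along", "the", "shore"]],
]
# One generated caption per image.
hypotheses = [["a", "dog", "runs", "on", "the", "sand"]]

smooth = SmoothingFunction().method1  # avoids zero scores on short captions
bleu4 = corpus_bleu(references, hypotheses, smoothing_function=smooth)
print(f"BLEU-4: {bleu4:.3f}")
```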