
Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection

Jingbiao Mei, Jinghong Chen, Guangyu Yang, Weizhe Lin, Bill Byrne
Abstract

Hateful memes have become a significant concern on the Internet, necessitating robust automated detection systems. While large multimodal models (LMMs) have shown promise in hateful meme detection, they face notable challenges, including sub-optimal performance and limited out-of-domain generalization. Recent studies further reveal the limitations of both supervised fine-tuning (SFT) and in-context learning when applied to LMMs in this setting. To address these issues, we propose a robust adaptation framework for hateful meme detection that improves in-domain accuracy and cross-domain generalization while preserving the general vision-language capabilities of LMMs. Experiments on six meme classification datasets show that our approach achieves state-of-the-art performance, outperforming larger agentic systems. Moreover, our method generates higher-quality rationales for explaining hateful content than standard SFT, enhancing model interpretability.
