
Meme Classification On Hateful Memes

Metrics

Accuracy
ROC-AUC
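
Both metrics can be reproduced with standard tooling. The sketch below is illustrative only and is not part of the benchmark or leaderboard code: it assumes scikit-learn and uses made-up labels and scores (y_true, y_pred, y_score are hypothetical names) to show how Accuracy (share of correct hard predictions) and ROC-AUC (area under the ROC curve of the predicted hatefulness scores) are computed.

```python
# Illustrative sketch only (assumed names, synthetic data): how the two
# leaderboard metrics are typically computed with scikit-learn.
from sklearn.metrics import accuracy_score, roc_auc_score

# Ground-truth labels: 1 = hateful meme, 0 = not hateful.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]

# Model outputs: a probability of "hateful" per meme; hard labels for the
# Accuracy metric are obtained by thresholding the scores at 0.5.
y_score = [0.91, 0.20, 0.65, 0.40, 0.05, 0.55, 0.88, 0.10]
y_pred = [1 if p >= 0.5 else 0 for p in y_score]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")  # fraction of correct predictions
print(f"ROC-AUC:  {roc_auc_score(y_true, y_score):.3f}")  # area under the ROC curve
```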

Results

Performance results of various models on this benchmark

Comparison table
Model name                                        Accuracy   ROC-AUC
improved-fine-tuning-of-large-multimodal-1        0.821      0.911
flamingo-a-visual-language-model-for-few-shot-1   -          0.700
vilio-state-of-the-art-visio-linguistic           0.695      0.825
learning-transferable-visual-models-from          -          0.661
the-hateful-memes-challenge-detecting-hate        0.847      0.8265
vision-models-are-more-robust-and-fair-when       -          0.734
enhance-multimodal-transformer-with-external      0.732      0.845
pro-cap-leveraging-a-frozen-vision-language       0.723      0.809
visual-program-distillation-distilling-tools      -          0.892
the-hateful-memes-challenge-detecting-hate        0.695      0.754
flamingo-a-visual-language-model-for-few-shot-1   -          0.866
mapping-memes-to-words-for-multimodal-hateful     -          0.855
detecting-hate-speech-in-memes-using              0.765      0.811
improving-hateful-memes-detection-via             0.788      0.870
hate-clipper-multimodal-hateful-meme              -          0.858
improved-fine-tuning-of-large-multimodal-1        0.809      0.897
improved-fine-tuning-of-large-multimodal-1        0.791      0.884

A dash indicates that the metric is not reported for that entry.