Meme Classification On Hateful Memes
Metrics
Accuracy: the fraction of memes classified correctly (hateful vs. not hateful)
ROC-AUC: the area under the receiver operating characteristic curve, computed from the model's raw confidence scores
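
Both metrics can be reproduced from model outputs with standard tooling. Below is a minimal sketch using scikit-learn, assuming binary ground-truth labels (1 = hateful, 0 = not) and per-meme scores in [0, 1]; the sample values are illustrative only and are not taken from the benchmark.

```python
# Minimal sketch of the two benchmark metrics; uses scikit-learn.
# The labels and scores below are made-up illustrative values.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # ground-truth labels
y_score = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]   # predicted P(hateful)

# Accuracy needs hard predictions, so threshold the scores at 0.5.
y_pred = [1 if s >= 0.5 else 0 for s in y_score]
print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")

# ROC-AUC is threshold-free: it measures how well the raw scores
# rank hateful memes above benign ones.
print(f"ROC-AUC:  {roc_auc_score(y_true, y_score):.3f}")
```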
Results
Performance results of various models on this benchmark
Comparison table
| Model name | Accuracy | ROC-AUC |
|---|---|---|
| improved-fine-tuning-of-large-multimodal-1 | 0.821 | 0.911 |
| flamingo-a-visual-language-model-for-few-shot-1 | - | 0.700 |
| vilio-state-of-the-art-visio-linguistic | 0.695 | 0.825 |
| learning-transferable-visual-models-from | - | 0.661 |
| the-hateful-memes-challenge-detecting-hate | 0.847 | 0.8265 |
| vision-models-are-more-robust-and-fair-when | - | 0.734 |
| enhance-multimodal-transformer-with-external | 0.732 | 0.845 |
| pro-cap-leveraging-a-frozen-vision-language | 0.723 | 0.809 |
| visual-program-distillation-distilling-tools | - | 0.892 |
| the-hateful-memes-challenge-detecting-hate | 0.695 | 0.754 |
| flamingo-a-visual-language-model-for-few-shot-1 | - | 0.866 |
| mapping-memes-to-words-for-multimodal-hateful | - | 0.855 |
| detecting-hate-speech-in-memes-using | 0.765 | 0.811 |
| improving-hateful-memes-detection-via | 0.788 | 0.870 |
| hate-clipper-multimodal-hateful-meme | - | 0.858 |
| improved-fine-tuning-of-large-multimodal-1 | 0.809 | 0.897 |
| improved-fine-tuning-of-large-multimodal-1 | 0.791 | 0.884 |
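
To compare entries programmatically, the table can be loaded into a dataframe. Below is a small sketch assuming the table has been exported to a hypothetical leaderboard.csv with "-" marking missing Accuracy values; it uses pandas.

```python
# Load the comparison table and rank models by ROC-AUC, the one
# metric every entry reports. "leaderboard.csv" and its column
# names are assumptions for this sketch, not part of the benchmark.
import pandas as pd

df = pd.read_csv("leaderboard.csv", na_values=["-"])
ranked = df.sort_values("ROC-AUC", ascending=False)
print(ranked[["Model name", "Accuracy", "ROC-AUC"]].to_string(index=False))
```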