HyperAI

Hate Speech Detection On Hatexplain

Metrics

AUROC
Accuracy
Macro F1
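
Macro F1 averages per-class F1 scores with equal weight, so the minority classes of HateXplain's three-way labeling (hatespeech / offensive / normal) count as much as the majority class. Below is a minimal, dependency-free sketch of how Accuracy and Macro F1 are computed from label predictions; the function names and the 3-class label encoding are illustrative assumptions, not part of the benchmark's official evaluation code.

```python
# Illustrative sketch (not the benchmark's official scorer).
# Assumes integer class labels 0..2 for the three HateXplain classes.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels=(0, 1, 2)):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

AUROC is threshold-free and is computed from the predicted class probabilities rather than the hard labels, which is why a model's AUROC and Accuracy can rank differently in the table below.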

Results

Performance of various models on this benchmark. A dash ("-") indicates that the metric was not reported.

Comparison Table

| Model Name | AUROC | Accuracy | Macro F1 |
| --- | --- | --- | --- |
| hatexplain-a-benchmark-dataset-for | 0.793 | 0.629 | 0.614 |
| hatexplain-a-benchmark-dataset-for | 0.843 | 0.690 | 0.674 |
| hatexplain-a-benchmark-dataset-for | 0.805 | - | 0.629 |
| explainable-identification-of-hate-speech | - | 0.742 | - |
| explainable-identification-of-hate-speech | - | 0.751 | - |
| hatexplain-a-benchmark-dataset-for | 0.795 | 0.621 | - |
| hatexplain-a-benchmark-dataset-for | 0.851 | 0.698 | 0.687 |
| hatexplain-a-benchmark-dataset-for | 0.851 | - | 0.687 |
| hatexplain-a-benchmark-dataset-for | 0.767 | 0.595 | 0.575 |
| why-is-it-hate-speech-masked-rationale-1 | 0.853 | 0.707 | 0.693 |
| why-is-it-hate-speech-masked-rationale-1 | 0.862 | 0.704 | 0.699 |