Hate Speech Detection on HateXplain
Metrics
AUROC
Accuracy
Macro F1
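To make the three metrics concrete, below is a minimal sketch of how they can be computed with scikit-learn for HateXplain's 3-way task (hatespeech / normal / offensive). The arrays `y_true` and `y_prob` are placeholders standing in for real model outputs, and the one-vs-rest macro averaging for AUROC is one plausible reading; individual papers may average differently.

```python
# Minimal sketch: computing the leaderboard metrics with scikit-learn.
# y_true / y_prob are placeholder arrays, not real benchmark outputs.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = np.array([0, 1, 2, 1, 0])   # gold class ids for 5 example posts
y_prob = np.array([                  # predicted class probabilities
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
    [0.3, 0.5, 0.2],
    [0.6, 0.3, 0.1],
])
y_pred = y_prob.argmax(axis=1)       # hard predictions from the probabilities

accuracy = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")
# One-vs-rest AUROC averaged over the three classes; other averaging
# schemes exist, so treat this as one plausible reading of "AUROC".
auroc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

print(f"Accuracy: {accuracy:.3f}  Macro F1: {macro_f1:.3f}  AUROC: {auroc:.3f}")
```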
Results
Performance of various models on this benchmark. Rows are identified by the slug of the paper that reported the result; a single paper may contribute several model variants.
Comparison Table
| Paper | AUROC | Accuracy | Macro F1 |
|---|---|---|---|
| hatexplain-a-benchmark-dataset-for | 0.793 | 0.629 | 0.614 |
| hatexplain-a-benchmark-dataset-for | 0.843 | 0.690 | 0.674 |
| hatexplain-a-benchmark-dataset-for | 0.805 | - | 0.629 |
| explainable-identification-of-hate-speech | - | 0.742 | - |
| explainable-identification-of-hate-speech | - | 0.751 | - |
| hatexplain-a-benchmark-dataset-for | 0.795 | 0.621 | - |
| hatexplain-a-benchmark-dataset-for | 0.851 | 0.698 | 0.687 |
| hatexplain-a-benchmark-dataset-for | 0.851 | - | 0.687 |
| hatexplain-a-benchmark-dataset-for | 0.767 | 0.595 | 0.575 |
| why-is-it-hate-speech-masked-rationale-1 | 0.853 | 0.707 | 0.693 |
| why-is-it-hate-speech-masked-rationale-1 | 0.862 | 0.704 | 0.699 |
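For reproducing numbers like these, the HateXplain dataset is published on the Hugging Face Hub. Below is a minimal, hedged sketch of loading it and deriving majority-vote gold labels from the per-annotator labels it stores; the field names and label ids follow the dataset card, and recent `datasets` versions may require `trust_remote_code=True` for this script-based dataset.

```python
# Minimal sketch: loading HateXplain and deriving majority-vote labels.
# Field names and label ids (0 = hatespeech, 1 = normal, 2 = offensive)
# follow the Hugging Face dataset card.
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("hatexplain", trust_remote_code=True)

def majority_label(example):
    # Each post carries labels from three annotators; the benchmark
    # uses the majority vote as the gold label. Posts with no majority
    # (three different votes) are typically discarded in the paper's
    # setup; this sketch simply keeps Counter's first pick.
    votes = example["annotators"]["label"]
    example["label"] = Counter(votes).most_common(1)[0][0]
    return example

test_set = dataset["test"].map(majority_label)
print(test_set[0]["post_tokens"], test_set[0]["label"])
```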