Hate Speech Detection On Ethos Binary
Metrics
Classification Accuracy
F1-score
Precision
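As a quick reference for how these three metrics relate, here is a minimal sketch that computes them by hand for binary hate-speech labels (1 = hate, 0 = not hate). The label arrays are made up for illustration, not taken from the ETHOS dataset.

```python
# Toy example labels (hypothetical, not from ETHOS).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Confusion-matrix counts for the positive (hate) class.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

# Classification accuracy: fraction of all predictions that are correct.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Precision: of the examples flagged as hate, how many truly are.
precision = tp / (tp + fp)

# F1-score: harmonic mean of precision and recall.
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, precision, f1)  # → 0.75 0.75 0.75
```

Note that the papers cited below may report averaged (e.g. macro or weighted) variants of these per-class definitions.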
Results
Performance results of various models on this benchmark
Nom du modèle | Classification Accuracy | F1-score | Precision | Paper Title | Repository |
---|---|---|---|---|---|
Random Forests | 0.6504 | 0.6441 | 0.6469 | ETHOS: an Online Hate Speech Detection Dataset | |
OPT-175B (one-shot) | - | 0.713 | - | OPT: Open Pre-trained Transformer Language Models | |
BiLSTM+Attention+FT | 0.7734 | 0.768 | 0.7776 | ETHOS: an Online Hate Speech Detection Dataset | |
BERT | 0.7664 | 0.7883 | 0.7917 | ETHOS: an Online Hate Speech Detection Dataset | |
OPT-175B (zero-shot) | - | 0.667 | - | OPT: Open Pre-trained Transformer Language Models | |
BiLSTM + static BE | 0.8015 | 0.7971 | 0.8037 | Hate speech detection using static BERT embeddings | - |
Davinci (few-shot) | - | 0.354 | - | OPT: Open Pre-trained Transformer Language Models | |
SVM | 0.6643 | 0.6607 | 0.6647 | ETHOS: an Online Hate Speech Detection Dataset | |
OPT-175B (few-shot) | - | 0.759 | - | OPT: Open Pre-trained Transformer Language Models | |
Davinci (one-shot) | - | 0.616 | - | OPT: Open Pre-trained Transformer Language Models | |
Davinci (zero-shot) | - | 0.628 | - | OPT: Open Pre-trained Transformer Language Models | |
CNN+Attention+FT+GV | 0.7515 | 0.7441 | 0.7492 | ETHOS: an Online Hate Speech Detection Dataset | |