Hate Speech Detection on HateXplain
Metrics: AUROC, Accuracy, Macro F1
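As a rough reference for how these three leaderboard metrics are commonly computed, here is a minimal scikit-learn sketch. It assumes a HateXplain-style three-class setup (hate speech / offensive / normal) and uses toy labels and predicted probabilities; the exact evaluation protocol of each paper may differ.

```python
# Sketch of AUROC, Accuracy, and Macro F1 computation with scikit-learn.
# The labels and probabilities below are toy values, not benchmark data.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score

y_true = np.array([0, 2, 1, 0, 2])   # toy gold labels for 3 classes
probs = np.array([                    # toy predicted class probabilities (rows sum to 1)
    [0.7, 0.2, 0.1],
    [0.1, 0.3, 0.6],
    [0.2, 0.6, 0.2],
    [0.5, 0.3, 0.2],
    [0.2, 0.2, 0.6],
])
y_pred = probs.argmax(axis=1)         # hard predictions for accuracy / F1

# Multiclass AUROC from probabilities (one-vs-one, macro-averaged).
auroc = roc_auc_score(y_true, probs, multi_class="ovo", average="macro")
acc = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")

print(f"AUROC={auroc:.3f}  Accuracy={acc:.3f}  Macro F1={macro_f1:.3f}")
```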
Results
Performance results of various models on this benchmark.
| Model | AUROC | Accuracy | Macro F1 | Paper |
| --- | --- | --- | --- | --- |
| CNN-GRU [LIME] | 0.793 | 0.629 | 0.614 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection |
| BERT [Attn] | 0.843 | 0.690 | 0.674 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection |
| BiRNN-HateXplain [Attn] | 0.805 | - | 0.629 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection |
| XG-HSI-BiRNN | - | 0.742 | - | Explainable Identification of Hate Speech towards Islam using Graph Neural Networks |
| XG-HSI-BERT | - | 0.751 | - | Explainable Identification of Hate Speech towards Islam using Graph Neural Networks |
| BiRNN-Attn [Attn] | 0.795 | 0.621 | - | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection |
| BERT-HateXplain [Attn] | 0.851 | 0.698 | 0.687 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection |
| BERT-HateXplain [LIME] | 0.851 | - | 0.687 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection |
| BiRNN [LIME] | 0.767 | 0.595 | 0.575 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection |
| BERT-RP | 0.853 | 0.707 | 0.693 | Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection |
| BERT-MRP | 0.862 | 0.704 | 0.699 | Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection |