Hate Speech Detection on HateXplain
Metrics: AUROC, Accuracy, Macro F1
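As a rough illustration only (not the evaluation script behind this leaderboard), the sketch below computes the three reported metrics with scikit-learn on placeholder predictions for the three HateXplain classes (hate speech, offensive, normal). The one-vs-rest macro averaging for multi-class AUROC is an assumption; individual papers may average differently.

```python
# Minimal sketch: computing Accuracy, Macro F1, and AUROC with scikit-learn.
# The arrays below are illustrative placeholders, not HateXplain data.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = np.array([0, 2, 1, 0, 2])            # gold labels (3-class task)
y_prob = np.array([[0.7, 0.2, 0.1],           # per-class model probabilities
                   [0.1, 0.2, 0.7],
                   [0.2, 0.6, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.1, 0.1, 0.8]])
y_pred = y_prob.argmax(axis=1)                # hard predictions for Accuracy / F1

accuracy = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")
# Multi-class AUROC: assumed one-vs-rest, macro-averaged over the three classes.
auroc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

print(f"Accuracy: {accuracy:.3f}  Macro F1: {macro_f1:.3f}  AUROC: {auroc:.3f}")
```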
Results
Performance results of various models on this benchmark
| Model Name | AUROC | Accuracy | Macro F1 | Paper Title | Repository |
|---|---|---|---|---|---|
| CNN-GRU [LIME] | 0.793 | 0.629 | 0.614 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | - |
| BERT [Attn] | 0.843 | 0.690 | 0.674 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | - |
| BiRNN-HateXplain [Attn] | 0.805 | - | 0.629 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | - |
| XG-HSI-BiRNN | - | 0.742 | - | Explainable Identification of Hate Speech towards Islam using Graph Neural Networks | - |
| XG-HSI-BERT | - | 0.751 | - | Explainable Identification of Hate Speech towards Islam using Graph Neural Networks | - |
| BiRNN-Attn [Attn] | 0.795 | 0.621 | - | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | - |
| BERT-HateXplain [Attn] | 0.851 | 0.698 | 0.687 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | - |
| BERT-HateXplain [LIME] | 0.851 | - | 0.687 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | - |
| BiRNN [LIME] | 0.767 | 0.595 | 0.575 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | - |
| BERT-RP | 0.853 | 0.707 | 0.693 | Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection | - |
| BERT-MRP | 0.862 | 0.704 | 0.699 | Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection | - |