HyperAI
Hate Speech Detection on HateXplain

Metrics: AUROC, Accuracy, Macro F1

Results
Performance results of various models on this benchmark.
| Model | AUROC | Accuracy | Macro F1 | Paper | Repository |
|---|---|---|---|---|---|
| CNN-GRU [LIME] | 0.793 | 0.629 | 0.614 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | |
| BERT [Attn] | 0.843 | 0.69 | 0.674 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | |
| BiRNN-HateXplain [Attn] | 0.805 | - | 0.629 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | |
| XG-HSI-BiRNN | - | 0.742 | - | Explainable Identification of Hate Speech towards Islam using Graph Neural Networks | - |
| XG-HSI-BERT | - | 0.751 | - | Explainable Identification of Hate Speech towards Islam using Graph Neural Networks | - |
| BiRNN-Attn [Attn] | 0.795 | 0.621 | - | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | |
| BERT-HateXplain [Attn] | 0.851 | 0.698 | 0.687 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | |
| BERT-HateXplain [LIME] | 0.851 | - | 0.687 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | |
| BiRNN [LIME] | 0.767 | 0.595 | 0.575 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | |
| BERT-RP | 0.853 | 0.707 | 0.693 | Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection | |
| BERT-MRP | 0.862 | 0.704 | 0.699 | Why Is It Hate Speech? Masked Rationale Prediction for Explainable Hate Speech Detection | |
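For reference, the three metrics reported above can be computed with scikit-learn. The sketch below uses made-up labels and probabilities for a 3-class setup (HateXplain labels posts as hate, offensive, or normal); the numbers are illustrative, not taken from any model on this leaderboard:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Dummy ground-truth labels for 8 posts: 0 = hate, 1 = offensive, 2 = normal.
y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])

# Dummy predicted class probabilities, one row per post (rows sum to 1).
y_prob = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
    [0.3, 0.3, 0.4],
    [0.2, 0.5, 0.3],
    [0.6, 0.3, 0.1],
    [0.1, 0.1, 0.8],
    [0.4, 0.4, 0.2],
])
y_pred = y_prob.argmax(axis=1)  # hard predictions from the probabilities

acc = accuracy_score(y_true, y_pred)
# Macro F1: per-class F1 scores averaged with equal weight per class.
macro_f1 = f1_score(y_true, y_pred, average="macro")
# Multiclass AUROC: one-vs-rest AUC per class, macro-averaged.
auroc = roc_auc_score(y_true, y_prob, multi_class="ovr")

print(f"Accuracy: {acc:.3f}, Macro F1: {macro_f1:.3f}, AUROC: {auroc:.3f}")
```

Note that Accuracy and Macro F1 need only the hard predictions, while AUROC requires the class probabilities, which is why some leaderboard entries report one metric but not the others.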