
Toxic Comment Classification on Civil Comments

Metrics

AUROC (area under the receiver operating characteristic curve); higher is better.
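
AUROC measures how well a model's scores rank toxic comments above non-toxic ones, independently of any decision threshold. A minimal sketch of the computation with scikit-learn, using illustrative toy labels and scores:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]               # ground-truth toxicity labels (toy data)
y_score = [0.1, 0.4, 0.8, 0.9, 0.3, 0.6]  # model-predicted toxicity probabilities

# AUROC is the probability that a randomly chosen toxic comment
# is scored higher than a randomly chosen non-toxic one.
auroc = roc_auc_score(y_true, y_score)
print(f"AUROC: {auroc:.4f}")  # 1.0 here, since the scores rank the labels perfectly
```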

Results

AUROC of each model on this benchmark, sorted by score. A dash indicates that no AUROC is reported for that entry.

| Model Name | AUROC | Paper Title |
| --- | --- | --- |
| RoBERTa Focal Loss | 0.9818 | A benchmark for toxic comment classification on Civil Comments dataset |
| RoBERTa BCE | 0.9813 | A benchmark for toxic comment classification on Civil Comments dataset |
| DistilBERT | 0.9804 | A benchmark for toxic comment classification on Civil Comments dataset |
| HateBERT | 0.9791 | A benchmark for toxic comment classification on Civil Comments dataset |
| BERTweet | 0.979 | A benchmark for toxic comment classification on Civil Comments dataset |
| ResNet + RoBERTa finetune | 0.97 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| Unfreeze Glove ResNet 44 | 0.966 | A benchmark for toxic comment classification on Civil Comments dataset |
| Unfreeze Glove ResNet 56 | 0.9639 | A benchmark for toxic comment classification on Civil Comments dataset |
| Compact Convolutional Transformer (CCT) | 0.9526 | A benchmark for toxic comment classification on Civil Comments dataset |
| Trompt + OpenAI embedding | 0.947 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| ResNet + OpenAI embedding | 0.945 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| ResNet + RoBERTa embedding | 0.882 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| LightGBM + RoBERTa embedding | 0.865 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| PaLM 2 (few-shot, k=10) | 0.8535 | PaLM 2 Technical Report |
| PaLM 2 (zero-shot) | 0.7596 | PaLM 2 Technical Report |
| BiLSTM | – | A benchmark for toxic comment classification on Civil Comments dataset |
| BiGRU | – | A benchmark for toxic comment classification on Civil Comments dataset |
| Freeze Glove ResNet 44 | – | A benchmark for toxic comment classification on Civil Comments dataset |
| XLNet | – | A benchmark for toxic comment classification on Civil Comments dataset |
| XLM RoBERTa | – | A benchmark for toxic comment classification on Civil Comments dataset |
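
To reproduce a score in this table, a common setup is to binarize the continuous Civil Comments toxicity annotation at 0.5 and compute AUROC over the test split. A hedged sketch, assuming the dataset is available as civil_comments on the Hugging Face Hub (it ships text and per-comment toxicity fields) and using a hypothetical my_model as a stand-in for any model above:

```python
from datasets import load_dataset
from sklearn.metrics import roc_auc_score

# Load the Civil Comments test split from the Hugging Face Hub.
test = load_dataset("civil_comments", split="test")

texts = test["text"]
# Common convention: a comment counts as toxic if its toxicity score >= 0.5.
y_true = [int(t >= 0.5) for t in test["toxicity"]]

# Hypothetical scoring function: returns one toxicity probability per comment.
y_score = my_model.predict_proba(texts)

print(f"AUROC: {roc_auc_score(y_true, y_score):.4f}")
```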