HyperAI

Toxic Comment Classification on Civil Comments

Metrics

AUROC
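AUROC (area under the ROC curve) can be read as the probability that a randomly chosen toxic comment receives a higher score than a randomly chosen non-toxic one. A minimal sketch of this pair-counting formulation, using made-up labels and scores purely for illustration:

```python
def auroc(labels, scores):
    """AUROC via pair counting: fraction of (positive, negative) pairs
    where the positive example is scored higher (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical toxicity labels (1 = toxic) and model scores:
y = [1, 0, 1, 0, 0]
s = [0.9, 0.3, 0.6, 0.7, 0.1]
print(auroc(y, s))  # 5 of 6 pairs ranked correctly -> 0.8333...
```

A score of 0.5 corresponds to random ranking and 1.0 to a perfect separation of toxic from non-toxic comments, which is why the leaderboard values below cluster close to 1.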

Results

Performance results of various models on this benchmark

Comparison Table

Entries are identified by the slug of their source paper; rows without a reported score are marked "-".

| Model (paper slug) | AUROC |
| --- | --- |
| pytorch-frame-a-modular-framework-for-multi | 0.865 |
| a-benchmark-for-toxic-comment-classification | - |
| a-benchmark-for-toxic-comment-classification | 0.966 |
| a-benchmark-for-toxic-comment-classification | 0.9526 |
| a-benchmark-for-toxic-comment-classification | - |
| a-benchmark-for-toxic-comment-classification | - |
| a-benchmark-for-toxic-comment-classification | 0.979 |
| a-benchmark-for-toxic-comment-classification | - |
| pytorch-frame-a-modular-framework-for-multi | 0.882 |
| pytorch-frame-a-modular-framework-for-multi | 0.947 |
| a-benchmark-for-toxic-comment-classification | - |
| a-benchmark-for-toxic-comment-classification | 0.9804 |
| palm-2-technical-report-1 | 0.7596 |
| pytorch-frame-a-modular-framework-for-multi | 0.97 |
| a-benchmark-for-toxic-comment-classification | 0.9818 |
| a-benchmark-for-toxic-comment-classification | 0.9813 |
| palm-2-technical-report-1 | 0.8535 |
| a-benchmark-for-toxic-comment-classification | 0.9639 |
| a-benchmark-for-toxic-comment-classification | 0.9791 |
| pytorch-frame-a-modular-framework-for-multi | 0.945 |
| pytorch-frame-a-modular-framework-for-multi | 0.885 |
| a-benchmark-for-toxic-comment-classification | 0.979 |