Text Classification on LoT-insts
Metrics
Accuracy
Macro-F1
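Both metrics can be computed directly with scikit-learn. The sketch below is illustrative only: the label arrays are made-up placeholders, not data from the benchmark. Macro-F1 averages per-class F1 scores without weighting by class frequency, which is why it is reported alongside Accuracy for this long-tailed dataset.

```python
# Minimal sketch: computing Accuracy and Macro-F1 with scikit-learn.
# The label arrays below are hypothetical placeholders, not benchmark data.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["MIT", "MIT", "Stanford", "ETH", "ETH", "ETH"]       # gold labels (hypothetical)
y_pred = ["MIT", "Stanford", "Stanford", "ETH", "ETH", "MIT"]  # model predictions (hypothetical)

accuracy = accuracy_score(y_true, y_pred)             # fraction of exact matches
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1,
                                                      # so rare (long-tail) classes count equally

print(f"Accuracy: {accuracy:.4f}, Macro-F1: {macro_f1:.4f}")
```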
Results
Performance results of various models on this benchmark
Model name | Accuracy (%) | Macro-F1 (%) | Paper Title | Repository
---|---|---|---|---
Naive Bayes | 72.2 | 50.2 | Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset |
FastText | 74.93 | 44.38 | Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset |
CD-V1 | 79.97 | 59.64 | Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset |
sCool | 76.72 | 52.41 | Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset |
Character-BERT+RS | 83.73 | 65.9 | Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset |