Chinese Word Segmentation on PKU
Metrics
F1
Results
Performance results of various models on this benchmark
Model Name | F1 | Paper Title | Repository |
---|---|---|---|
BABERT-LE | 96.84 | Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling | |
Glyce + BERT | 96.70 | Glyce: Glyph-vectors for Chinese Character Representations | |
BABERT | 96.70 | Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling | |
WMSeg + ZEN | 96.53 | Improving Chinese Word Segmentation with Wordhood Memory Networks | |