Chinese Word Segmentation on MSR
Evaluation Metrics
F1
Precision
Recall
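
For word segmentation, precision, recall, and F1 are conventionally computed over word spans: a predicted word counts as correct only if its character boundaries exactly match a gold word. Below is a minimal sketch of this standard computation; it is not taken from any of the listed papers, and the function names (`to_spans`, `prf`) are illustrative.

```python
def to_spans(words):
    """Convert a segmented sentence (list of words) into character-offset spans."""
    spans, start = set(), 0
    for w in words:
        spans.add((start, start + len(w)))
        start += len(w)
    return spans

def prf(gold_words, pred_words):
    """Return (precision, recall, F1) for one gold/predicted segmentation pair."""
    gold, pred = to_spans(gold_words), to_spans(pred_words)
    correct = len(gold & pred)                      # exactly matching word spans
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: gold "研究 生命 的 起源" vs. predicted "研究生 命 的 起源"
print(prf(["研究", "生命", "的", "起源"], ["研究生", "命", "的", "起源"]))
# -> (0.5, 0.5, 0.5): only "的" and "起源" are segmented correctly
```

In benchmark reporting, the counts are typically accumulated over the whole test set before computing the ratios, rather than averaging per-sentence scores.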
Evaluation Results
Performance of each model on this benchmark:
Model | F1 | Precision | Recall | Paper Title | Repository |
---|---|---|---|---|---|
Glyce + BERT | 98.3 | 98.2 | 98.3 | Glyce: Glyph-vectors for Chinese Character Representations | |
BABERT-LE | 98.63 | - | - | Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling | |
ZEN (Random Init) | 97.89 | - | - | ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations | |
BABERT | 98.44 | - | - | Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling | |
WMSeg + ZEN | 98.40 | - | - | Improving Chinese Word Segmentation with Wordhood Memory Networks | |
ZEN (Init with Chinese BERT) | 98.35 | - | - | ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations | |