Chinese Word Segmentation on MSRA
Metrics
F1
Results
Performance results of various models on this benchmark
| Model Name | F1 | Paper Title | Repository |
|---|---|---|---|
| Pre-trained+bigram+LSTM+CRF | 97.4 | - | - |
| BABERT | 98.44 | Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling | - |
| BABERT-LE | 98.63 | Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling | - |
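For reference, segmentation F1 is conventionally computed by converting the gold and predicted segmentations into character spans and scoring span overlap. The sketch below illustrates this; the function names and the example sentence are illustrative, not part of the benchmark's official scoring code.

```python
# Minimal sketch of word-segmentation F1 (span-based, per sentence).
def to_spans(words):
    """Convert a list of words into (start, end) character spans."""
    spans, pos = set(), 0
    for w in words:
        spans.add((pos, pos + len(w)))
        pos += len(w)
    return spans

def segmentation_f1(gold_words, pred_words):
    """Return (precision, recall, F1) for one segmented sentence."""
    gold, pred = to_spans(gold_words), to_spans(pred_words)
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: gold "我 / 喜欢 / 北京" vs. predicted "我 / 喜 / 欢 / 北京"
print(segmentation_f1(["我", "喜欢", "北京"], ["我", "喜", "欢", "北京"]))
# -> (0.5, 0.666..., 0.571...)
```

In corpus-level evaluation, span counts are accumulated over all sentences before computing precision, recall, and F1, rather than averaging per-sentence scores.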