Handwritten Text Recognition On Belfort
Evaluation Metrics

- CER (%): Character Error Rate
- WER (%): Word Error Rate
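Both metrics are typically computed from the Levenshtein edit distance between the predicted transcription and the reference, at the character level for CER and at the word level for WER. The following is a minimal sketch of that computation; the function names and example strings are illustrative and not part of the benchmark or its evaluation code.

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (characters or words)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: character edit distance over reference length, in %."""
    return 100.0 * levenshtein(reference, hypothesis) / max(len(reference), 1)

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word edit distance over reference word count, in %."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return 100.0 * levenshtein(ref_words, hyp_words) / max(len(ref_words), 1)

print(cer("belfort", "belforl"))          # 1 substitution over 7 chars  -> ~14.3
print(wer("rue de belfort", "rue du belfort"))  # 1 wrong word out of 3 -> ~33.3
```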
Evaluation Results

Performance of each model on this benchmark:
Model Name | CER (%) | WER (%) | Paper Title | Repository |
---|---|---|---|---|
PyLaia (human transcriptions + random split) | 10.54 | 28.11 | Handwritten Text Recognition from Crowdsourced Annotations | - |
PyLaia (all transcriptions + agreement-based split) | 4.34 | 15.14 | Handwritten Text Recognition from Crowdsourced Annotations | - |
PyLaia (human transcriptions + agreement-based split) | 5.57 | 19.12 | Handwritten Text Recognition from Crowdsourced Annotations | - |
PyLaia (rover consensus + agreement-based split) | 4.95 | 17.08 | Handwritten Text Recognition from Crowdsourced Annotations | - |