Handwritten Text Recognition On Belfort
Metrics
CER (%)
WER (%)
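Both metrics are normalized edit distances: CER is computed over characters, WER over word tokens, each divided by the reference length. A minimal sketch (plain dynamic-programming Levenshtein distance, no external libraries; the function names are illustrative, not from the benchmark's tooling):

```python
def levenshtein(ref, hyp):
    # dynamic-programming edit distance (substitutions, insertions, deletions)
    m, n = len(ref), len(hyp)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + cost)   # substitution / match
        prev = curr
    return prev[n]

def cer(ref, hyp):
    # character error rate: edit distance over reference character count
    return levenshtein(ref, hyp) / max(len(ref), 1)

def wer(ref, hyp):
    # word error rate: same computation on whitespace-split word tokens
    return levenshtein(ref.split(), hyp.split()) / max(len(ref.split()), 1)
```

For example, `cer("abcd", "abce")` is 0.25 (one substitution over four reference characters).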
Results
Performance results of various models on this benchmark
| Model | CER (%) | WER (%) | Paper Title |
|---|---|---|---|
| PyLaia (human transcriptions + random split) | 10.54 | 28.11 | Handwritten Text Recognition from Crowdsourced Annotations |
| PyLaia (human transcriptions + agreement-based split) | 5.57 | 19.12 | Handwritten Text Recognition from Crowdsourced Annotations |
| PyLaia (rover consensus + agreement-based split) | 4.95 | 17.08 | Handwritten Text Recognition from Crowdsourced Annotations |
| PyLaia (all transcriptions + agreement-based split) | 4.34 | 15.14 | Handwritten Text Recognition from Crowdsourced Annotations |