Medical Code Prediction on MIMIC-III
Metrics
Macro-AUC
Macro-F1
Micro-AUC
Micro-F1
Precision@15
Precision@8
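The metrics above are the standard multi-label evaluation measures for automated ICD coding. As a minimal sketch of how they can be computed, the snippet below uses scikit-learn for the AUC and F1 variants and a small hand-rolled helper for Precision@k; the toy `y_true`/`y_score` arrays are hypothetical data, not from any of the models in the table.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def precision_at_k(y_true, y_score, k):
    """Mean fraction of the k highest-scored labels per sample that are truly present."""
    topk = np.argsort(y_score, axis=1)[:, -k:]       # indices of the k highest scores
    hits = np.take_along_axis(y_true, topk, axis=1)  # 1 where a top-k label is correct
    return hits.mean()

# Toy multi-label setup: 4 samples, 5 labels (hypothetical values)
y_true = np.array([[1, 0, 1, 0, 0],
                   [0, 1, 0, 0, 1],
                   [1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1]])
y_score = np.array([[0.9, 0.1, 0.8, 0.2, 0.3],
                    [0.2, 0.7, 0.1, 0.4, 0.9],
                    [0.6, 0.8, 0.3, 0.1, 0.2],
                    [0.1, 0.2, 0.3, 0.9, 0.8]])
y_pred = (y_score >= 0.5).astype(int)  # threshold scores for the F1 metrics

print("Macro-AUC:", roc_auc_score(y_true, y_score, average="macro"))
print("Micro-AUC:", roc_auc_score(y_true, y_score, average="micro"))
print("Macro-F1: ", f1_score(y_true, y_pred, average="macro"))
print("Micro-F1: ", f1_score(y_true, y_pred, average="micro"))
print("P@2:      ", precision_at_k(y_true, y_score, k=2))
```

Macro averaging computes each metric per label and then averages, so rare codes weigh as much as frequent ones (which is why Macro-F1 is so much lower than Micro-F1 in the table below); micro averaging pools all label decisions before computing the metric.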
Results
Performance results of various models on this benchmark
Comparison table
Model name | Macro-AUC | Macro-F1 | Micro-AUC | Micro-F1 | Precision@15 | Precision@8 |
---|---|---|---|---|---|---|
explainable-prediction-of-medical-codes-from | 89.7 | 8.6 | 98.5 | 52.9 | 54.8 | 69.0 |
code-synonyms-do-matter-multiple-synonyms-1 | 95.0 | 10.3 | 99.2 | 58.4 | 59.9 | 75.2 |
explainable-prediction-of-medical-codes-from | 82.2 | 3.8 | 97.1 | 41.7 | 44.5 | 58.5 |
explainable-automated-coding-of-clinical | 88.5 | 3.6 | 98.1 | 40.7 | - | 61.4 |
explainable-prediction-of-medical-codes-from | 56.1 | 1.1 | 93.7 | 27.2 | 41.1 | 54.2 |
icd-coding-from-clinical-text-using-multi | 91.0 | 8.5 | 98.6 | 55.2 | 58.4 | 73.4 |
a-label-attention-model-for-icd-coding-from | 91.9 | 9.9 | 98.8 | 57.5 | 59.1 | 73.8 |
Model 8 | 91.0 | 9.0 | 99.2 | 55.3 | 58.1 | 72.8 |
read-attend-and-code-pushing-the-limits-of | 94.8 | 12.7 | 99.2 | 58.6 | 60.1 | 75.4 |
explainable-prediction-of-medical-codes-from | - | - | - | 44.1 | - | - |
explainable-prediction-of-medical-codes-from | 80.6 | 4.2 | 96.9 | 41.9 | 44.3 | 58.1 |
explainable-prediction-of-medical-codes-from | 89.5 | 8.8 | 98.6 | 53.9 | 56.1 | 70.9 |
knowledge-injected-prompt-based-fine-tuning | - | 11.8 | - | 59.9 | 61.5 | 77.1 |
automatic-icd-coding-exploiting-discourse | 95.6 | 14.0 | 99.3 | 58.8 | 61.4 | 76.5 |
a-label-attention-model-for-icd-coding-from | 92.1 | 10.7 | 98.8 | 57.5 | 59.0 | 73.5 |
an-unsupervised-approach-to-achieve | - | 24.7 | - | 60.0 | - | - |
effective-convolutional-attention-network-for | 91.5 | 10.6 | 98.8 | 58.9 | 60.6 | 75.8 |