Multi-Label Condescension Detection on DPM
Metrics
Macro-F1
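Macro-F1 averages the per-category F1 scores with equal weight, Macro-F1 = (1/C) * sum over categories c of F1_c, so performance on rare condescension categories counts as much as on frequent ones. Below is a minimal sketch of how it can be computed for a multi-label setup with scikit-learn; the toy label matrices and the seven-category layout (matching the PCL categories of the Don't Patronize Me! dataset) are illustrative, not taken from the benchmark's official scorer.

```python
# Illustrative macro-F1 computation for multi-label classification.
# The label matrices below are hypothetical toy data.
import numpy as np
from sklearn.metrics import f1_score

# Binary indicator matrices: rows = paragraphs, columns = 7 PCL categories.
y_true = np.array([[1, 0, 1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 1, 0, 0],
                   [0, 0, 0, 1, 0, 0, 1]])
y_pred = np.array([[1, 0, 0, 0, 0, 0, 0],
                   [0, 1, 0, 0, 1, 0, 0],
                   [0, 0, 0, 1, 0, 1, 1]])

# average="macro" computes F1 per category, then takes the unweighted mean;
# zero_division=0 scores categories with no positive predictions as 0.
macro_f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
print(f"Macro-F1: {macro_f1:.4f}")
```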
Results
Macro-F1 scores of systems evaluated on this benchmark (multi-label categorization of patronizing and condescending language from SemEval-2022 Task 4, on the Don't Patronize Me! dataset).
Comparison Table
| Model Name | Macro-F1 (%) |
|---|---|
| pali-nlp-at-semeval-2022-task-4 | 43.28 |
| aliedalat-at-semeval-2022-task-4-patronizing | 31.6 |
| beike-nlp-at-semeval-2022-task-4-prompt-based-1 | 44.4 |
| dh-fbk-at-semeval-2022-task-4-leveraging | 37.35 |
| semeval-2022-task-4-patronizing-and | 10.4 |