Multi-Label Text Classification on Reuters 1
Metrics
Micro-F1
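Micro-F1 pools true positives, false positives, and false negatives across all labels before computing a single F1 score, so frequent labels weigh more than rare ones. A minimal sketch of the metric (generic illustration on binary label-indicator rows, not tied to any model in the table below; `micro_f1` is a hypothetical helper name):

```python
from typing import List

def micro_f1(y_true: List[List[int]], y_pred: List[List[int]]) -> float:
    """Micro-averaged F1 over binary label-indicator rows.

    Counts of TP/FP/FN are pooled across every (sample, label) pair,
    then a single F1 is computed from the pooled counts.
    """
    tp = fp = fn = 0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p in zip(t_row, p_row):
            if t == 1 and p == 1:
                tp += 1          # label present and predicted
            elif t == 0 and p == 1:
                fp += 1          # label absent but predicted
            elif t == 1 and p == 0:
                fn += 1          # label present but missed
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Two samples, three labels each: pooled TP=2, FP=1, FN=1 -> F1 = 4/6
print(micro_f1([[1, 0, 1], [0, 1, 0]],
               [[1, 0, 0], [0, 1, 1]]))  # → 0.6666666666666666
```

The same value is available as `sklearn.metrics.f1_score(y_true, y_pred, average="micro")` when the labels are given as indicator matrices.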
Results
Performance results of various models on this benchmark
Comparison Table
| Model Name | Micro-F1 |
|---|---|
| balancing-methods-for-multi-label-text | 90.62 |
| balancing-methods-for-multi-label-text | 90.70 |
| vector-of-locally-aggregated-word-embeddings | 89.3 |
| co-attention-network-with-label-embedding-for | 89.9 |
| balancing-methods-for-multi-label-text | 90.74 |
| magnet-multi-label-text-classification-using | 89.9 |