Text Classification on DBpedia
Metrics
Error
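The Error metric reported here is, as is customary for this benchmark, the percentage of misclassified test examples, i.e. 100 minus accuracy. The snippet below is a minimal, hypothetical sketch of that computation; the function name `error_rate` and the toy labels are illustrative and not taken from any leaderboard implementation.

```python
def error_rate(y_true, y_pred):
    """Return the classification error as a percentage (100 * misclassified / total)."""
    assert len(y_true) == len(y_pred), "label lists must have the same length"
    mistakes = sum(t != p for t, p in zip(y_true, y_pred))
    return 100.0 * mistakes / len(y_true)

if __name__ == "__main__":
    # Toy example (assumed data): 1 mistake out of 5 predictions -> 20.0% error.
    print(error_rate([0, 1, 2, 3, 4], [0, 1, 2, 3, 0]))
```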
Results
Performance results of various models on this benchmark
Comparison table
Model name | Error (%) |
---|---|
universal-language-model-fine-tuning-for-text | 0.80 |
bag-of-tricks-for-efficient-text | 1.4 |
character-level-convolutional-networks-for | 1.55 |
abstractive-text-classification-using | 2.77 |
very-deep-convolutional-networks-for-text | 1.29 |
sampling-bias-in-deep-active-classification | 0.8 |
explicit-interaction-model-towards-text | 1 |
xlnet-generalized-autoregressive-pretraining | 0.62 |
bert-pre-training-of-deep-bidirectional | 0.64 |
deep-pyramid-convolutional-neural-networks | 0.88 |
revisiting-lstm-networks-for-semi-supervised-1 | 0.7 |
disconnected-recurrent-neural-networks-for | 0.81 |
unsupervised-data-augmentation-1 | 0.68 |
on-tree-based-neural-sentence-modeling | 1.2 |
how-to-fine-tune-bert-for-text-classification | 0.68 |
compositional-coding-capsule-network-with-k | 1.28 |
unsupervised-data-augmentation-1 | 1.09 |
supervised-and-semi-supervised-text | 0.84 |
learning-context-sensitive-convolutional | 1.07 |
baseline-needs-more-love-on-simple-word | 1.43 |
joint-embedding-of-words-and-labels-for-text | 0.98 |