Sentiment Analysis on Yelp Fine-grained
Metrics
Error
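The Error metric is the classification error rate on the test set, i.e. the percentage of misclassified reviews (100 − accuracy); lower is better. A minimal sketch of how it could be computed, assuming predictions and gold labels are available as integer lists:

```python
from typing import Sequence

def error_rate(predictions: Sequence[int], labels: Sequence[int]) -> float:
    """Classification error in percent: share of misclassified examples * 100."""
    assert len(predictions) == len(labels), "predictions and labels must align"
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return 100.0 * wrong / len(labels)

# Example: 3 of 4 reviews classified correctly -> 25.0% error
print(error_rate([0, 1, 2, 4], [0, 1, 2, 3]))  # 25.0
```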
Results
Performance of various models on this benchmark (lower error is better).
Comparison table
Model name | Error (%) |
---|---|
universal-language-model-fine-tuning-for-text | 29.98 |
how-to-fine-tune-bert-for-text-classification | 29.42 |
learning-to-remember-more-with-less | 34.40 |
how-to-fine-tune-bert-for-text-classification | 28.62 |
xlnet-generalized-autoregressive-pretraining | 27.05 |
squeezed-very-deep-convolutional-neural | 46.80 |
enhancing-sentence-embedding-with-generalized | 33.45 |
compositional-coding-capsule-network-with-k | 34.15 |
character-level-convolutional-networks-for | 37.95 |
baseline-needs-more-love-on-simple-word | 36.21 |
disconnected-recurrent-neural-networks-for | 30.85 |
unsupervised-data-augmentation-1 | 32.08 |
unsupervised-data-augmentation-1 | 29.32 |
joint-embedding-of-words-and-labels-for-text | 35.91 |
supervised-and-semi-supervised-text | 32.39 |
bag-of-tricks-for-efficient-text | 36.10 |
deep-pyramid-convolutional-neural-networks | 30.58 |
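The benchmark corresponds to the five-class Yelp Review Full task (star ratings 1–5). The sketch below shows how an entry's error could be reproduced, assuming the Hugging Face `datasets` and `transformers` libraries; the model checkpoint name is a placeholder, not one of the models listed above.

```python
from datasets import load_dataset
from transformers import pipeline

# Load the 5-class Yelp Review Full test split (label 0-4 = 1-5 stars).
test = load_dataset("yelp_review_full", split="test")

# Placeholder checkpoint: substitute any 5-class Yelp sentiment classifier.
clf = pipeline("text-classification", model="path/to/your-yelp-5-checkpoint")

wrong = 0
for example in test:
    pred = clf(example["text"], truncation=True)[0]["label"]
    # Assumes the classifier emits labels of the form "LABEL_0" ... "LABEL_4".
    if int(pred.split("_")[-1]) != example["label"]:
        wrong += 1

print(f"Error: {100.0 * wrong / len(test):.2f}%")
```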