Sentiment Analysis on SST-2 Binary
Metrics
Accuracy
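The benchmark ranks models by accuracy, the fraction of test examples whose predicted label matches the gold label. A minimal sketch of this metric, assuming SST-2's binary convention (0 = negative, 1 = positive); the function name `accuracy` is illustrative, not part of any specific library:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Example: 4 of 5 predictions match the gold labels.
gold = [1, 0, 1, 1, 0]
pred = [1, 0, 0, 1, 0]
print(accuracy(gold, pred))  # → 0.8
```

Scores in the table below are this quantity expressed as a percentage over the SST-2 test set.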
Results
Performance results of the various models on this benchmark
Comparison table
| Model Name | Accuracy |
|---|---|
| on-the-role-of-text-preprocessing-in-neural | 91.2 |
| spanbert-improving-pre-training-by | 94.8 |
| recursive-deep-models-for-semantic | 82.9 |
| pay-attention-to-mlps | 94.8 |
| a-c-lstm-neural-network-for-text | 87.8 |
| adversarial-self-attention-for-language | 94.1 |
| emo2vec-learning-generalized-emotion | 81.2 |
| adversarial-self-attention-for-language | 96.3 |
| distilling-task-specific-knowledge-from-bert | 90.7 |
| smart-robust-and-efficient-fine-tuning-for | 93 |
| universal-sentence-encoder | 87.21 |
| message-passing-attention-networks-for | 87.75 |
| convolutional-neural-networks-for-sentence | 88.1 |
| exploring-the-limits-of-transfer-learning | 97.5 |
| how-to-train-bert-with-an-academic-budget | 93.0 |
| informer-transformer-likes-informed-attention | 94.04 |
| exploring-the-limits-of-transfer-learning | 97.4 |
| exploring-the-limits-of-transfer-learning | 96.3 |
| improved-semantic-representations-from-tree | 86.3 |
| text-classification-improved-by-integrating | 89.5 |
| electra-pre-training-text-encoders-as-1 | 96.9 |
| baseline-needs-more-love-on-simple-word | 84.3 |
| 190600095 | 86.95 |
| training-complex-models-with-multi-task-weak | 96.2 |
| a-helping-hand-transfer-learning-for-deep | 86.99 |
| a-la-carte-embedding-cheap-but-effective | 91.7 |
| exploring-joint-neural-model-for-sentence | 54.72 |
| q8bert-quantized-8bit-bert | 94.7 |
| deberta-decoding-enhanced-bert-with | 96.5 |
| entailment-as-few-shot-learner | 96.9 |
| squeezebert-what-can-computer-vision-teach | 91.4 |
| convolutional-neural-networks-with-recurrent | 90.0 |
| investigating-capsule-networks-with-dynamic | 86.8 |
| dual-contrastive-learning-text-classification | 94.91 |
| charformer-fast-character-transformers-via | 91.6 |
| neural-semantic-encoders | 89.7 |
| exploring-the-limits-of-transfer-learning | 91.8 |
| improved-semantic-representations-from-tree | 88.0 |
| multi-task-deep-neural-networks-for-natural | 95.6 |
| smart-robust-and-efficient-fine-tuning-for | 93.6 |
| big-bird-transformers-for-longer-sequences | 94.6 |
| q-bert-hessian-based-ultra-low-precision | 94.8 |
| clear-contrastive-learning-for-sentence | 94.5 |
| fnet-mixing-tokens-with-fourier-transforms | 94 |
| cell-aware-stacked-lstms-for-modeling | 91.3 |
| structbert-incorporating-language-structures | 97.1 |
| gpu-kernels-for-block-sparse-weights | 93.2 |
| learned-in-translation-contextualized-word | 90.3 |
| xlnet-generalized-autoregressive-pretraining | 96.8 |
| emo2vec-learning-generalized-emotion | 82.3 |
| learning-to-encode-position-for-transformer | 96.7 |
| lm-cppf-paraphrasing-guided-data-augmentation | 93.2 |
| learning-to-generate-reviews-and-discovering | 91.8 |
| information-aggregation-via-dynamic-routing | 87.2 |
| 190910351 | 93.1 |
| task-oriented-word-embedding-for-text | 78.8 |
| recursive-deep-models-for-semantic | 85.4 |
| distilbert-a-distilled-version-of-bert | 91.3 |
| smart-robust-and-efficient-fine-tuning-for | - |
| fine-grained-sentiment-classification-using | 91.2 |
| information-aggregation-via-dynamic-routing | 87.6 |
| an-algorithm-for-routing-vectors-in-sequences | 96.0 |
| practical-text-classification-with-large-pre | 90.9 |
| exploring-the-limits-of-transfer-learning | 95.2 |
| bert-pre-training-of-deep-bidirectional | 94.9 |
| cloze-driven-pretraining-of-self-attention | 94.6 |
| harnessing-deep-neural-networks-with-logic | 89.3 |
| smart-robust-and-efficient-fine-tuning-for | - |
| 190910351 | 92.6 |
| albert-a-lite-bert-for-self-supervised | 97.1 |
| muppet-massive-multi-task-representations | 96.7 |
| ernie-enhanced-language-representation-with | 93.5 |
| xlnet-generalized-autoregressive-pretraining | 97 |
| improving-multi-task-deep-neural-networks-via | 96.5 |
| an-algorithm-for-routing-capsules-in-all | 95.6 |
| roberta-a-robustly-optimized-bert-pretraining | 96.7 |
| fine-grained-sentiment-classification-using | 93.1 |
| ask-me-anything-dynamic-memory-networks-for | 88.6 |
| muppet-massive-multi-task-representations | 97.4 |
| smart-robust-and-efficient-fine-tuning-for | 97.5 |
| llm-int8-8-bit-matrix-multiplication-for | 96.4 |
| subregweigh-effective-and-efficient | 94.84 |
| a-statistical-framework-for-low-bitwidth | 96.2 |
| nystromformer-a-nystrom-based-algorithm-for | 91.4 |
| ernie-20-a-continual-pre-training-framework | 95 |
| smart-robust-and-efficient-fine-tuning-for | - |
| pay-attention-when-required | 91.6 |
| improved-sentence-modeling-using-suffix | 91.2 |