HyperAI

Sentiment Analysis on SST-2 Binary

Metrics

Accuracy
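Accuracy on SST-2 is simply the fraction of sentences whose predicted sentiment label (0 = negative, 1 = positive) matches the gold label. A minimal sketch of the computation, using hypothetical labels (not drawn from the benchmark):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Illustrative example only: 4 of 5 hypothetical predictions are correct.
preds = [1, 0, 1, 1, 0]
gold = [1, 0, 0, 1, 0]
print(accuracy(preds, gold))  # 0.8
```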

Results

Performance results of various models on this benchmark.

Comparison Table
| Model Name | Accuracy |
| --- | --- |
| on-the-role-of-text-preprocessing-in-neural | 91.2 |
| spanbert-improving-pre-training-by | 94.8 |
| recursive-deep-models-for-semantic | 82.9 |
| pay-attention-to-mlps | 94.8 |
| a-c-lstm-neural-network-for-text | 87.8 |
| adversarial-self-attention-for-language | 94.1 |
| emo2vec-learning-generalized-emotion | 81.2 |
| adversarial-self-attention-for-language | 96.3 |
| distilling-task-specific-knowledge-from-bert | 90.7 |
| smart-robust-and-efficient-fine-tuning-for | 93 |
| universal-sentence-encoder | 87.21 |
| message-passing-attention-networks-for | 87.75 |
| convolutional-neural-networks-for-sentence | 88.1 |
| exploring-the-limits-of-transfer-learning | 97.5 |
| how-to-train-bert-with-an-academic-budget | 93.0 |
| informer-transformer-likes-informed-attention | 94.04 |
| exploring-the-limits-of-transfer-learning | 97.4 |
| exploring-the-limits-of-transfer-learning | 96.3 |
| improved-semantic-representations-from-tree | 86.3 |
| text-classification-improved-by-integrating | 89.5 |
| electra-pre-training-text-encoders-as-1 | 96.9 |
| baseline-needs-more-love-on-simple-word | 84.3 |
| 190600095 | 86.95 |
| training-complex-models-with-multi-task-weak | 96.2 |
| a-helping-hand-transfer-learning-for-deep | 86.99 |
| a-la-carte-embedding-cheap-but-effective | 91.7 |
| exploring-joint-neural-model-for-sentence | 54.72 |
| q8bert-quantized-8bit-bert | 94.7 |
| deberta-decoding-enhanced-bert-with | 96.5 |
| entailment-as-few-shot-learner | 96.9 |
| squeezebert-what-can-computer-vision-teach | 91.4 |
| convolutional-neural-networks-with-recurrent | 90.0 |
| investigating-capsule-networks-with-dynamic | 86.8 |
| dual-contrastive-learning-text-classification | 94.91 |
| charformer-fast-character-transformers-via | 91.6 |
| neural-semantic-encoders | 89.7 |
| exploring-the-limits-of-transfer-learning | 91.8 |
| improved-semantic-representations-from-tree | 88.0 |
| multi-task-deep-neural-networks-for-natural | 95.6 |
| smart-robust-and-efficient-fine-tuning-for | 93.6 |
| big-bird-transformers-for-longer-sequences | 94.6 |
| q-bert-hessian-based-ultra-low-precision | 94.8 |
| clear-contrastive-learning-for-sentence | 94.5 |
| fnet-mixing-tokens-with-fourier-transforms | 94 |
| cell-aware-stacked-lstms-for-modeling | 91.3 |
| structbert-incorporating-language-structures | 97.1 |
| gpu-kernels-for-block-sparse-weights | 93.2 |
| learned-in-translation-contextualized-word | 90.3 |
| xlnet-generalized-autoregressive-pretraining | 96.8 |
| emo2vec-learning-generalized-emotion | 82.3 |
| learning-to-encode-position-for-transformer | 96.7 |
| lm-cppf-paraphrasing-guided-data-augmentation | 93.2 |
| learning-to-generate-reviews-and-discovering | 91.8 |
| information-aggregation-via-dynamic-routing | 87.2 |
| 190910351 | 93.1 |
| task-oriented-word-embedding-for-text | 78.8 |
| recursive-deep-models-for-semantic | 85.4 |
| distilbert-a-distilled-version-of-bert | 91.3 |
| smart-robust-and-efficient-fine-tuning-for | |
| fine-grained-sentiment-classification-using | 91.2 |
| information-aggregation-via-dynamic-routing | 87.6 |
| an-algorithm-for-routing-vectors-in-sequences | 96.0 |
| practical-text-classification-with-large-pre | 90.9 |
| exploring-the-limits-of-transfer-learning | 95.2 |
| bert-pre-training-of-deep-bidirectional | 94.9 |
| cloze-driven-pretraining-of-self-attention | 94.6 |
| harnessing-deep-neural-networks-with-logic | 89.3 |
| smart-robust-and-efficient-fine-tuning-for | |
| 190910351 | 92.6 |
| albert-a-lite-bert-for-self-supervised | 97.1 |
| muppet-massive-multi-task-representations | 96.7 |
| ernie-enhanced-language-representation-with | 93.5 |
| xlnet-generalized-autoregressive-pretraining | 97 |
| improving-multi-task-deep-neural-networks-via | 96.5 |
| an-algorithm-for-routing-capsules-in-all | 95.6 |
| roberta-a-robustly-optimized-bert-pretraining | 96.7 |
| fine-grained-sentiment-classification-using | 93.1 |
| ask-me-anything-dynamic-memory-networks-for | 88.6 |
| muppet-massive-multi-task-representations | 97.4 |
| smart-robust-and-efficient-fine-tuning-for | 97.5 |
| llm-int8-8-bit-matrix-multiplication-for | 96.4 |
| subregweigh-effective-and-efficient | 94.84 |
| a-statistical-framework-for-low-bitwidth | 96.2 |
| nystromformer-a-nystrom-based-algorithm-for | 91.4 |
| ernie-20-a-continual-pre-training-framework | 95 |
| smart-robust-and-efficient-fine-tuning-for | |
| pay-attention-when-required | 91.6 |
| improved-sentence-modeling-using-suffix | 91.2 |