Sentiment Analysis on TweetEval
Metrics
- ALL (aggregate score across the seven tasks)
- Emoji
- Emotion
- Hate
- Irony
- Offensive
- Sentiment
- Stance
Results
Performance of various models on the TweetEval benchmark.
| Model Name | ALL | Emoji | Emotion | Hate | Irony | Offensive | Sentiment | Stance | Paper Title | Repository |
|---|---|---|---|---|---|---|---|---|---|---|
| FastText | 58.1 | 25.8 | 65.2 | 50.6 | 63.1 | 73.4 | 62.9 | 65.4 | TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification | |
| RoBERTa-Base | 61.3 | 30.9 | 76.1 | 46.6 | 59.7 | 79.5 | 71.3 | 68.0 | TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification | |
| SVM | 53.5 | 29.3 | 64.7 | 36.7 | 61.7 | 52.3 | 62.9 | 67.3 | TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification | |
| RoBERTa-Twitter | 61.0 | 29.3 | 72.0 | 49.9 | 65.4 | 77.1 | 69.1 | 66.7 | TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification | |
| LSTM | 56.5 | 24.7 | 66.0 | 52.6 | 62.8 | 71.7 | 58.3 | 59.4 | TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification | |
| RoB-RT | 65.2 | 31.4 | 79.5 | 52.3 | 61.7 | 80.5 | 72.6 | 69.3 | XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond | |
| BERTweet | 67.9 | 33.4 | 79.3 | - | 82.1 | 79.5 | 73.4 | 71.2 | BERTweet: A pre-trained language model for English Tweets | |
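Most TweetEval tasks are scored with macro-averaged F1 (the paper uses task-specific variants for a few tasks, e.g. macro-recall for sentiment), so the per-task columns above are unweighted averages over classes. A minimal pure-Python sketch of macro-F1, with a made-up toy example rather than real benchmark predictions:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    # Unweighted mean over classes, so rare classes count as much as common ones.
    return sum(f1s) / len(f1s)

# Toy 3-class sentiment example (hypothetical labels, not benchmark data).
gold = ["pos", "neg", "neu", "pos", "neg"]
pred = ["pos", "neg", "pos", "pos", "neu"]
print(round(macro_f1(gold, pred), 3))  # → 0.489
```

Because every class contributes equally, macro-F1 rewards models that handle minority classes well, which matters on the heavily imbalanced Twitter datasets in this benchmark.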