Subjectivity Analysis
Subjectivity Analysis on SUBJ
Metrics
Accuracy
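
The reported metric is plain sentence-level classification accuracy on SUBJ's binary subjective/objective labels. A minimal sketch of how that number is computed, assuming predictions and gold labels are available as lists (the encoding 1 = subjective, 0 = objective is an illustrative assumption, not part of the leaderboard):

```python
from typing import List

def subj_accuracy(predictions: List[int], labels: List[int]) -> float:
    """Percentage of sentences whose predicted class matches the gold label."""
    assert len(predictions) == len(labels) and labels
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Toy example with hypothetical predictions: three of four sentences correct.
print(subj_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 75.0
```
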
Results
Performance results of various models on this benchmark
| Model name | Accuracy | Paper Title |
|---|---|---|
| GRU-RNN-GLOVE | 91.85 | All-but-the-Top: Simple and Effective Postprocessing for Word Representations |
| RoBERTa-large 355M + Entailment as Few-shot Learner | 97.1 | Entailment as Few-Shot Learner |
| VLAWE | 95.0 | Vector of Locally-Aggregated Word Embeddings (VLAWE): A Novel Document-level Representation |
| BERT-Base + LSTM | 96.60 | An Empirical Evaluation of Word Embedding Models for Subjectivity Analysis Tasks |
| STM+TSED+PT+2L | 92.34 | The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning |
| Capsule-B | 93.8 | Investigating Capsule Networks with Dynamic Routing for Text Classification |
| USE | 93.90 | Universal Sentence Encoder |
| Fast Dropout | 93.60 | - |
| byte mLSTM7 | 94.7 | A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors |
| RoBERTa+DualCL | 97.34 | Dual Contrastive Learning: Text Classification via Label-Aware Data Augmentation |
| SWEM-concat | 93 | Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms |
| Byte mLSTM | 94.60 | Learning to Generate Reviews and Discovering Sentiment |
| AdaSent | 95.50 | Self-Adaptive Hierarchical Sentence Model |
| CNN+MCFA | 94.80 | Translations as Additional Contexts for Sentence Classification |
| SDAE | 90.8 | Learning Distributed Representations of Sentences from Unlabelled Data |
| BERT-Base + CLR + LSTM | 97.30 | An Empirical Evaluation of Word Embedding Models for Subjectivity Analysis Tasks |