HyperAI
Sentiment Analysis on IMDb
Metrics
Accuracy
Results
Performance results of various models on this benchmark
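The leaderboard's single metric is accuracy: the fraction of test reviews whose predicted sentiment matches the gold label, reported as a percentage. A minimal sketch of that computation (the prediction and label lists below are illustrative, not benchmark data):

```python
def accuracy_percent(predictions, labels):
    """Fraction of matching prediction/label pairs, as a percentage."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have equal length")
    correct = sum(p == g for p, g in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Toy example: 1 = positive, 0 = negative sentiment.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
gold  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(accuracy_percent(preds, gold))  # → 80.0
```

On this benchmark the same formula is applied to the IMDb test split, which is why all scores fall between 0 and 100.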
| Model name | Accuracy (%) | Paper Title |
| --- | --- | --- |
| RoBERTa-large with LlamBERT | 96.68 | LlamBERT: Large-scale low-cost data annotation in NLP |
| RoBERTa-large | 96.54 | LlamBERT: Large-scale low-cost data annotation in NLP |
| XLNet | 96.21 | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| Heinsen Routing + RoBERTa Large | 96.2 | An Algorithm for Routing Vectors in Sequences |
| RoBERTa-large 355M + Entailment as Few-shot Learner | 96.1 | Entailment as Few-Shot Learner |
| GraphStar | 96.0 | Graph Star Net for Generalized Multi-Task Learning |
| DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | 95.94 | The Document Vectors Using Cosine Similarity Revisited |
| DV-ngrams-cosine + RoBERTa.base | 95.92 | The Document Vectors Using Cosine Similarity Revisited |
| BERT large finetune UDA | 95.8 | Unsupervised Data Augmentation for Consistency Training |
| RoBERTa.base | 95.79 | The Document Vectors Using Cosine Similarity Revisited |
| BERT_large+ITPT | 95.79 | How to Fine-Tune BERT for Text Classification? |
| L MIXED | 95.68 | Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function |
| BERT_base+ITPT | 95.63 | How to Fine-Tune BERT for Text Classification? |
| BERT large | 95.49 | Unsupervised Data Augmentation for Consistency Training |
| ULMFiT | 95.4 | Universal Language Model Fine-tuning for Text Classification |
| Llama-2-70b-chat (0-shot) | 95.39 | LlamBERT: Large-scale low-cost data annotation in NLP |
| FLAN 137B (few-shot, k=2) | 95 | Finetuned Language Models Are Zero-Shot Learners |
| Block-sparse LSTM | 94.99 | GPU Kernels for Block-Sparse Weights |
| Space-XLNet | 94.88 | Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs |
| CEN-tpc | 94.52 | Contextual Explanation Networks |