HyperAI
Word Sense Disambiguation

Word Sense Disambiguation on Words in Context (WiC)
Metric
Accuracy

Results
Performance of various models on this benchmark.
| Model | Accuracy | Paper |
| --- | --- | --- |
| COSINE + Transductive Learning | 85.3 | Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach |
| PaLM 540B (fine-tuned) | 78.8 | PaLM: Scaling Language Modeling with Pathways |
| ST-MoE-32B 269B (fine-tuned) | 77.7 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| DeBERTa-Ensemble | 77.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| Vega v2 6B (fine-tuned) | 77.4 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| UL2 20B (fine-tuned) | 77.3 | UL2: Unifying Language Learning Paradigms |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 77.1 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| T5-XXL 11B | 76.9 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| DeBERTa-1.5B | 76.4 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| ST-MoE-L 4.1B (fine-tuned) | 74.0 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| SenseBERT-large 340M | 72.1 | SenseBERT: Driving Some Sense into BERT |
| SenseBERT-base 110M | 70.3 | SenseBERT: Driving Some Sense into BERT |
| PaLM 2-L (one-shot) | 66.8 | PaLM 2 Technical Report |
| BERT-large 340M | 65.5 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| FLAN-T5-Large 783M | 64.7 | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions |
| LaMini-F-T5 783M | 63.8 | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions |
| Context2vec | 59.3 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| DeConf | 58.7 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| SW2V | 58.1 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
| ELMo | 57.7 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations |
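The metric reported above is plain accuracy: WiC is a binary task (does the target word carry the same sense in both contexts?), and a model is scored by the fraction of examples it labels correctly. A minimal sketch of that computation, using hypothetical example labels rather than real WiC data:

```python
# Accuracy as used on WiC-style binary same-sense / different-sense
# judgments. The gold and predicted labels below are illustrative
# placeholders, not actual WiC annotations.

def accuracy(gold, pred):
    """Fraction of examples where the prediction matches the gold label."""
    assert len(gold) == len(pred), "label lists must be the same length"
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

# True = same sense in both contexts, False = different senses.
gold = [True, False, True, True]
pred = [True, False, False, True]
print(accuracy(gold, pred))  # 3 of 4 correct -> 0.75
```

On this scale, the table's scores mean e.g. that the top system labels about 85 of every 100 WiC sentence pairs correctly, while the original ELMo baseline is only modestly above the 50.0 a constant classifier would score on a balanced binary dataset.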