Coreference Resolution on Winograd Schema
Metrics: Accuracy
Results
Performance results of various models on this benchmark (top entries of 82 total shown).
| Model | Accuracy (%) | Paper |
|---|---|---|
| PaLM 540B (fine-tuned) | 100 | PaLM: Scaling Language Modeling with Pathways |
| Vega v2 6B (KD-based prompt transfer) | 98.6 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| UL2 20B (fine-tuned) | 98.1 | UL2: Unifying Language Learning Paradigms |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 97.3 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| ST-MoE-32B 269B (fine-tuned) | 96.6 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| DeBERTa-1.5B | 95.9 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| T5-XXL 11B (fine-tuned) | 93.8 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| ST-MoE-L 4.1B (fine-tuned) | 93.3 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| RoBERTa-WinoGrande 355M | 90.1 | WinoGrande: An Adversarial Winograd Schema Challenge at Scale |
| Flan-T5 XXL (zero-shot) | 89.82 | Scaling Instruction-Finetuned Language Models |
| PaLM 540B (5-shot) | 89.5 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 540B (0-shot) | 89.1 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 2-M (1-shot) | 88.1 | PaLM 2 Technical Report |
| PaLM 2-L (1-shot) | 86.9 | PaLM 2 Technical Report |
| FLAN 137B (prompt-tuned) | 86.5 | Finetuned Language Models Are Zero-Shot Learners |
| PaLM 540B (1-shot) | 86.3 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 2-S (1-shot) | 84.6 | PaLM 2 Technical Report |
| TTTTT 3B (fine-tuned) | 84.6 | TTTTTackling WinoGrande Schemas |
| RoBERTa-DPR 355M | 83.1 | WinoGrande: An Adversarial Winograd Schema Challenge at Scale |
| FLAN 137B (zero-shot) | 80.8 | Finetuned Language Models Are Zero-Shot Learners |
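The accuracy metric above is simply the fraction of Winograd schema instances resolved correctly, reported as a percentage. Below is a minimal sketch of how such a score might be computed, assuming the SuperGLUE WSC split from the Hugging Face `datasets` library as an illustrative source of schemas (the exact split behind this leaderboard may differ) and a hypothetical user-supplied `predict` function; neither is part of HyperAI's tooling or any paper listed here.

```python
# Minimal sketch: scoring a coreference predictor on Winograd schemas.
# Assumes the Hugging Face `datasets` library; `predict` is a hypothetical
# user-supplied callable, not an API from the leaderboard or the papers above.
from datasets import load_dataset

def wsc_accuracy(predict) -> float:
    """Return the accuracy (%) of `predict` on the SuperGLUE WSC validation split.

    Each example carries fields such as `text`, `span1_text` (candidate noun
    phrase), `span2_text` (pronoun), and a binary `label` indicating whether
    the pronoun refers to the noun phrase. `predict(example)` must return
    0 or 1, which is compared against that label.
    """
    data = load_dataset("super_glue", "wsc.fixed", split="validation")
    correct = sum(int(predict(ex) == ex["label"]) for ex in data)
    return 100.0 * correct / len(data)  # percentage, matching the table
```

As a sanity check, a constant baseline such as `wsc_accuracy(lambda ex: 0)` shows the floor of the metric; the fine-tuned models in the table above reach the high 90s.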