Question Answering On Quora Question Pairs
Metrics
Accuracy
Results
Performance results of various models on this benchmark
| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| DeBERTa (large) | 92.3% | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | - |
| XLNet (single model) | 92.3% | XLNet: Generalized Autoregressive Pretraining for Language Understanding | - |
| ALBERT | 90.5% | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | - |
| T5-11B | 90.4% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| MLM+ subs+ del-span | 90.3% | CLEAR: Contrastive Learning for Sentence Representation | - |
| RoBERTa (ensemble) | 90.2% | RoBERTa: A Robustly Optimized BERT Pretraining Approach | - |
| ELECTRA | 90.1% | ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators | - |
| ERNIE 2.0 Large | 90.1% | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | - |
| T5-Large 770M | 89.9% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| ERNIE 2.0 Base | 89.8% | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | - |
| T5-3B | 89.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| T5-Base | 89.4% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| RE2 | 89.2% | Simple and Effective Text Matching with Richer Alignment Features | - |
| DistilBERT 66M | 89.2% | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | - |
| BigBird | 88.6% | Big Bird: Transformers for Longer Sequences | - |
| T5-Small | 88.0% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| SWEM-concat | 83.03% | Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms | - |
| SqueezeBERT | 80.3% | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? | - |
| 24hBERT | 70.7% | How to Train BERT with an Academic Budget | - |
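The Accuracy column above is the proportion of question pairs a model classifies correctly as duplicate or non-duplicate. As a minimal illustrative sketch (not the evaluation setup used by any of the papers in the table), the snippet below scores a placeholder classifier on the GLUE QQP validation split; the exact-match heuristic `predict_duplicate` and the use of the `datasets` and `scikit-learn` libraries are assumptions for demonstration only.

```python
# Sketch: computing accuracy on Quora Question Pairs (QQP).
# Each example is a pair of questions labelled 1 (duplicate) or 0 (not duplicate);
# accuracy is the fraction of pairs the classifier labels correctly.
from datasets import load_dataset           # pip install datasets
from sklearn.metrics import accuracy_score  # pip install scikit-learn

# QQP is distributed as part of the GLUE benchmark.
qqp = load_dataset("glue", "qqp", split="validation")

def predict_duplicate(q1: str, q2: str) -> int:
    """Placeholder classifier; swap in any model from the table above."""
    # Trivial heuristic used only so the script runs end to end.
    return int(q1.strip().lower() == q2.strip().lower())

preds = [predict_duplicate(ex["question1"], ex["question2"]) for ex in qqp]
labels = qqp["label"]

print(f"Accuracy: {accuracy_score(labels, preds):.4f}")
```

Replacing `predict_duplicate` with an actual fine-tuned sentence-pair classifier yields the kind of figure reported in the Accuracy column.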