Linguistic Acceptability on CoLA

CoLA (the Corpus of Linguistic Acceptability) is a benchmark of English sentences drawn from the linguistics literature, each labeled as grammatically acceptable or unacceptable.
Metrics: Accuracy, MCC (Matthews Correlation Coefficient)
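CoLA's official GLUE metric is MCC, which handles the corpus's imbalanced labels better than raw accuracy. For binary judgements it is computed from the confusion matrix as

MCC = (TP·TN − FP·FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

A minimal sketch of both metrics using scikit-learn; the label vectors below are illustrative placeholders, not benchmark data:

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Illustrative placeholders: 1 = acceptable, 0 = unacceptable.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))  # fraction of correct labels
print("MCC:", matthews_corrcoef(y_true, y_pred))    # in [-1, 1]; 0 is chance level
```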
Results

Performance of various models on this benchmark. A dash marks a value not reported for that entry.
| Model Name | Accuracy | MCC | Paper Title |
| --- | --- | --- | --- |
| BERT+TDA | 88.2% | 0.726 | Can BERT eat RuCoLA? Topological Data Analysis to Explain |
| RoBERTa (ensemble) | 67.8% | - | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| T5-Base | 51.1% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| LTG-BERT-base 98M | 82.7% | - | Not all layers are equally as important: Every Layer Counts BERT |
| En-BERT + TDA | 82.1% | 0.565 | Acceptability Judgements via Examining the Topology of Attention Maps |
| RemBERT | - | 0.6 | RuCoLA: Russian Corpus of Linguistic Acceptability |
| 24hBERT | 57.1% | - | How to Train BERT with an Academic Budget |
| MLM + del-span + reorder | 64.3% | - | CLEAR: Contrastive Learning for Sentence Representation |
| ELECTRA | 68.2% | - | - |
| ERNIE 2.0 Large | 63.5% | - | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
| deberta-v3-base+tasksource | 87.15% | - | tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation |
| SqueezeBERT | 46.5% | - | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? |
| T5-XL 3B | 67.1% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| FLOATER-large | 69.0% | - | Learning to Encode Position for Transformer with Continuous Dynamical Model |
| LM-CPPF RoBERTa-base | 14.1% | - | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning |
| StructBERT RoBERTa ensemble | 69.2% | - | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| data2vec | 60.3% | - | data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language |
| ERNIE | 52.3% | - | ERNIE: Enhanced Language Representation with Informative Entities |
| Q8BERT (Zafrir et al., 2019) | 65.0% | - | Q8BERT: Quantized 8Bit BERT |
| T5-Small | 41.0% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
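Numbers like these are usually produced by fine-tuning a classifier on CoLA and scoring its predictions; official leaderboard figures come from the hidden GLUE test set, so the sketch below scores the public validation split instead. It assumes the Hugging Face datasets and transformers libraries, and "your-cola-checkpoint" is a hypothetical placeholder for a real fine-tuned model id:

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, matthews_corrcoef
from transformers import pipeline

# CoLA validation split via the GLUE loader; fields: "sentence", "label" (1 = acceptable).
val = load_dataset("glue", "cola", split="validation")

# Placeholder model id: substitute any sequence-classification model
# fine-tuned on CoLA.
clf = pipeline("text-classification", model="your-cola-checkpoint")

# Assumes the default "LABEL_0"/"LABEL_1" output names; adjust the parsing
# if the model config defines custom id2label names.
outs = clf(list(val["sentence"]), truncation=True)
preds = [int(o["label"].split("_")[-1]) for o in outs]

print("Accuracy:", accuracy_score(val["label"], preds))
print("MCC:", matthews_corrcoef(val["label"], preds))
```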