Linguistic Acceptability on CoLA
Evaluation metrics: Accuracy and MCC (Matthews correlation coefficient).
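Both metrics score binary acceptability judgements. Accuracy is the fraction of sentences labelled correctly; MCC is the headline metric for CoLA because the corpus is class-imbalanced, and MCC stays near 0 for chance-level predictors. Below is a minimal sketch of both metrics in plain Python (in practice `sklearn.metrics.matthews_corrcoef` is the standard implementation); the example labels at the end are illustrative only.

```python
# Minimal sketch: the two CoLA metrics over binary labels
# (1 = acceptable, 0 = unacceptable).
from math import sqrt

def accuracy(y_true, y_pred):
    """Fraction of sentences whose label is predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels.

    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    0 corresponds to chance-level performance.
    """
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Illustrative labels: 4 of 5 correct -> accuracy 0.8, MCC = 4/6 = 0.667
y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1]
print(accuracy(y_true, y_pred), round(mcc(y_true, y_pred), 3))
```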
Results: performance of each model on this benchmark (first page of a 43-entry leaderboard).
| Model | Accuracy | MCC | Paper Title |
| --- | --- | --- | --- |
| BERT+TDA | 88.2% | 0.726 | Can BERT eat RuCoLA? Topological Data Analysis to Explain |
| RoBERTa (ensemble) | 67.8% | - | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| T5-Base | 51.1% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| LTG-BERT-base 98M | 82.7% | - | Not all layers are equally as important: Every Layer Counts BERT |
| En-BERT + TDA | 82.1% | 0.565 | Acceptability Judgements via Examining the Topology of Attention Maps |
| RemBERT | - | 0.6 | RuCoLA: Russian Corpus of Linguistic Acceptability |
| 24hBERT | 57.1% | - | How to Train BERT with an Academic Budget |
| MLM + del-span + reorder | 64.3% | - | CLEAR: Contrastive Learning for Sentence Representation |
| ELECTRA | 68.2% | - | - |
| ERNIE 2.0 Large | 63.5% | - | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
| deberta-v3-base+tasksource | 87.15% | - | tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation |
| SqueezeBERT | 46.5% | - | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? |
| T5-XL 3B | 67.1% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| FLOATER-large | 69% | - | Learning to Encode Position for Transformer with Continuous Dynamical Model |
| LM-CPPF RoBERTa-base | 14.1% | - | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning |
| StructBERT RoBERTa ensemble | 69.2% | - | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| data2vec | 60.3% | - | data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language |
| ERNIE | 52.3% | - | ERNIE: Enhanced Language Representation with Informative Entities |
| Q8BERT (Zafrir et al., 2019) | 65.0% | - | Q8BERT: Quantized 8Bit BERT |
| T5-Small | 41.0% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
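For context, here is a minimal sketch of how a score on this leaderboard can be reproduced against the CoLA validation split, assuming the Hugging Face `datasets`, `transformers`, and `evaluate` libraries; the checkpoint name is illustrative and is not one of the leaderboard entries above.

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

# CoLA validation split: fields "sentence" and "label" (1 = acceptable).
cola = load_dataset("glue", "cola", split="validation")

# Illustrative checkpoint; substitute any CoLA-finetuned classifier.
clf = pipeline("text-classification",
               model="textattack/bert-base-uncased-CoLA")

# The pipeline emits labels like "LABEL_0"/"LABEL_1"; map back to 0/1.
preds = [int(out["label"].split("_")[-1])
         for out in clf(cola["sentence"], truncation=True)]

# The GLUE metric for the "cola" task reports Matthews correlation.
metric = evaluate.load("glue", "cola")
print(metric.compute(predictions=preds, references=cola["label"]))
```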