Linguistic Acceptability on CoLA

Evaluation Metrics

Accuracy
MCC (Matthews correlation coefficient)
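
CoLA is a binary acceptable/unacceptable classification task, and GLUE scores it with the Matthews correlation coefficient, which stays informative under the corpus's class imbalance. Below is a minimal sketch of how both leaderboard metrics can be computed from predictions, assuming scikit-learn and toy labels invented purely for illustration:

```python
# Toy example (not from this leaderboard): compare gold acceptability labels
# against model predictions using both leaderboard metrics.
from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = acceptable, 0 = unacceptable (invented labels)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # invented model predictions

print("Accuracy:", accuracy_score(y_true, y_pred))   # fraction of correct judgements
print("MCC:", matthews_corrcoef(y_true, y_pred))     # correlation-style score in [-1, 1]
```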

Evaluation Results

Results of each model on this benchmark

| Model Name | Accuracy | MCC | Paper Title | Repository |
|------------|----------|-----|-------------|------------|
| BERT+TDA | 88.2% | 0.726 | Can BERT eat RuCoLA? Topological Data Analysis to Explain | - |
| RoBERTa (ensemble) | 67.8% | - | RoBERTa: A Robustly Optimized BERT Pretraining Approach | - |
| T5-Base | 51.1% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| LTG-BERT-base 98M | 82.7 | - | Not all layers are equally as important: Every Layer Counts BERT | - |
| En-BERT + TDA | 82.1% | 0.565 | Acceptability Judgements via Examining the Topology of Attention Maps | - |
| RemBERT | - | 0.6 | RuCoLA: Russian Corpus of Linguistic Acceptability | - |
| 24hBERT | 57.1 | - | How to Train BERT with an Academic Budget | - |
| MLM+ del-span+ reorder | 64.3% | - | CLEAR: Contrastive Learning for Sentence Representation | - |
| ELECTRA | 68.2% | - | - | - |
| ERNIE 2.0 Large | 63.5% | - | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | - |
| deberta-v3-base+tasksource | 87.15% | - | tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation | - |
| SqueezeBERT | 46.5% | - | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? | - |
| T5-XL 3B | 67.1% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| FLOATER-large | 69% | - | Learning to Encode Position for Transformer with Continuous Dynamical Model | - |
| LM-CPPF RoBERTa-base | 14.1% | - | LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | - |
| StructBERTRoBERTa ensemble | 69.2% | - | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| data2vec | 60.3% | - | data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language | - |
| ERNIE | 52.3% | - | ERNIE: Enhanced Language Representation with Informative Entities | - |
| Q8BERT (Zafrir et al., 2019) | 65.0 | - | Q8BERT: Quantized 8Bit BERT | - |
| T5-Small | 41.0% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
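
Leaderboard numbers like these are typically produced by scoring a fine-tuned classifier on the GLUE CoLA validation split. The sketch below shows one way to do that under that assumption, using Hugging Face datasets and transformers; the checkpoint name `your-cola-finetuned-model` is a hypothetical placeholder, not a model from this page.

```python
# Sketch: evaluate a (hypothetical) fine-tuned acceptability classifier on the
# GLUE CoLA validation split with both leaderboard metrics.
import torch
from datasets import load_dataset
from sklearn.metrics import accuracy_score, matthews_corrcoef
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "your-cola-finetuned-model"  # placeholder; substitute a real fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

cola = load_dataset("glue", "cola", split="validation")  # columns: sentence, label, idx

preds = []
with torch.no_grad():
    for start in range(0, len(cola), 32):  # simple fixed-size batching
        batch = cola[start : start + 32]
        enc = tokenizer(batch["sentence"], padding=True, truncation=True, return_tensors="pt")
        preds.extend(model(**enc).logits.argmax(dim=-1).tolist())

print("Accuracy:", accuracy_score(cola["label"], preds))
print("MCC:", matthews_corrcoef(cola["label"], preds))
```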