Linguistic Acceptability on CoLA
Metrics
Accuracy
MCC (Matthews correlation coefficient)
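MCC is the official CoLA metric: it ranges from -1 to +1, equals 0 for chance-level prediction, and stays informative under class imbalance, unlike raw accuracy. Written in terms of the binary confusion-matrix counts (TP, TN, FP, FN), it is:

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}
```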
Results
Performance of various models on this benchmark.
Comparison table
Model name | Accuracy | MCC |
---|---|---|
can-bert-eat-rucola-topological-data-analysis | 88.2% | 0.726 |
roberta-a-robustly-optimized-bert-pretraining | 67.8% | - |
exploring-the-limits-of-transfer-learning | 51.1% | - |
not-all-layers-are-equally-as-important-every | 82.7% | - |
acceptability-judgements-via-examining-the | 82.1% | 0.565 |
rucola-russian-corpus-of-linguistic | - | 0.6 |
how-to-train-bert-with-an-academic-budget | 57.1% | - |
clear-contrastive-learning-for-sentence | 64.3% | - |
Model 9 | 68.2% | - |
ernie-20-a-continual-pre-training-framework | 63.5% | - |
texttt-tasksource-structured-dataset | 87.15% | - |
squeezebert-what-can-computer-vision-teach | 46.5% | - |
exploring-the-limits-of-transfer-learning | 67.1% | - |
learning-to-encode-position-for-transformer | 69% | - |
lm-cppf-paraphrasing-guided-data-augmentation | 14.1% | - |
structbert-incorporating-language-structures | 69.2% | - |
data2vec-a-general-framework-for-self-1 | 60.3% | - |
ernie-enhanced-language-representation-with | 52.3% | - |
q8bert-quantized-8bit-bert | 65.0% | - |
exploring-the-limits-of-transfer-learning | 41.0% | - |
xlnet-generalized-autoregressive-pretraining | 69% | - |
not-all-layers-are-equally-as-important-every | 82.6% | - |
albert-a-lite-bert-for-self-supervised | 69.1% | - |
informer-transformer-likes-informed-attention | 59.83% | - |
a-statistical-framework-for-low-bitwidth | 67.5% | - |
can-bert-eat-rucola-topological-data-analysis | 87.3% | 0.695 |
llm-int8-8-bit-matrix-multiplication-for | 68.6% | - |
bert-pre-training-of-deep-bidirectional | 60.5% | - |
distilbert-a-distilled-version-of-bert | 49.1% | - |
spanbert-improving-pre-training-by | 64.3% | - |
acceptability-judgements-via-examining-the | 88.6% | - |
big-bird-transformers-for-longer-sequences | 58.5% | - |
ernie-20-a-continual-pre-training-framework | 55.2% | - |
1909.10351 | 43.3% | - |
not-all-layers-are-equally-as-important-every | 77.6% | - |
entailment-as-few-shot-learner | 86.4% | - |
charformer-fast-character-transformers-via | 51.8% | - |
exploring-the-limits-of-transfer-learning | 61.2% | - |
not-all-layers-are-equally-as-important-every | 76.1% | - |
exploring-the-limits-of-transfer-learning | 70.8% | - |
fnet-mixing-tokens-with-fourier-transforms | 78% | - |
multi-task-deep-neural-networks-for-natural | 68.4% | - |
q-bert-hessian-based-ultra-low-precision | 65.1% | - |
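A minimal sketch of how the two columns above are typically computed, assuming scikit-learn is available; the `labels` and `predictions` arrays are illustrative placeholders (in practice they would come from running a fine-tuned model over the CoLA validation or test split), not part of any specific evaluation harness:

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Gold labels and model predictions for CoLA: 1 = acceptable, 0 = unacceptable.
# Placeholder values for illustration only.
labels      = [1, 1, 0, 1, 0, 0, 1, 1]
predictions = [1, 0, 0, 1, 0, 1, 1, 1]

# Accuracy: fraction of sentences whose acceptability label is predicted correctly.
accuracy = accuracy_score(labels, predictions)

# MCC: Matthews correlation coefficient, the official CoLA metric.
mcc = matthews_corrcoef(labels, predictions)

print(f"Accuracy: {accuracy:.1%}")  # 75.0% for the arrays above
print(f"MCC:      {mcc:.3f}")       # 0.467 for the arrays above
```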