Natural Language Inference On Lidirus
Metrics
MCC (Matthews correlation coefficient)
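For reference, MCC is computed from the binary confusion matrix as (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)), yielding 1 for perfect prediction, 0 for chance-level output, and −1 for total disagreement. The sketch below is an illustrative plain-Python implementation of that formula, not the benchmark's official evaluation script:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary 0/1 labels."""
    # Tally the four cells of the confusion matrix.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally MCC is reported as 0 when the denominator vanishes,
    # e.g. for a constant predictor such as the majority_class baseline.
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

This convention explains why the constant baselines in the table score exactly 0: a majority-class predictor leaves one row of the confusion matrix empty, so the denominator is 0.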
Results
Performance results of various models on this benchmark
Model Name | MCC | Paper Title | Repository |
---|---|---|---|
ruRoberta-large finetune | 0.339 | - | - |
ruT5-large-finetune | 0.32 | - | - |
ruT5-base-finetune | 0.267 | - | - |
ruBert-large finetune | 0.235 | - | - |
RuGPT3Large | 0.231 | - | - |
ruBert-base finetune | 0.224 | - | - |
SBERT_Large_mt_ru_finetuning | 0.218 | - | - |
SBERT_Large | 0.209 | - | - |
RuBERT plain | 0.191 | - | - |
Multilingual Bert | 0.189 | - | - |
RuBERT conversational | 0.178 | - | - |
heuristic majority | 0.147 | Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | - |
YaLM 1.0B few-shot | 0.124 | - | - |
RuGPT3XL few-shot | 0.096 | - | - |
MT5 Large | 0.061 | mT5: A massively multilingual pre-trained text-to-text transformer | - |
RuGPT3Medium | 0.01 | - | - |
majority_class | 0 | Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | - |
Random weighted | 0 | Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | - |
Golden Transformer | 0 | - | - |
RuGPT3Small | -0.013 | - | - |