HyperAI

Natural Language Inference on LiDiRus

Metrics

MCC (Matthews correlation coefficient)
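As context for reading the scores below, here is a minimal sketch of how the binary Matthews correlation coefficient is computed from a confusion matrix; this is a generic illustration, not the benchmark's own evaluation code:

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1).

    Ranges from -1 (total disagreement) through 0 (chance-level)
    to +1 (perfect prediction).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # By convention, return 0 when any marginal count is zero.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

A score of 0 in the table therefore means chance-level predictions, and negative values mean systematic disagreement with the gold labels.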

Results

Performance results of various models on this benchmark

Comparison Table
| Model Name | MCC |
| --- | --- |
| Model 1 | 0.218 |
| Model 2 | 0.209 |
| Model 3 | 0.01 |
| Model 4 | -0.013 |
| Model 5 | 0.178 |
| unreasonable-effectiveness-of-rule-based | 0 |
| Model 7 | 0.096 |
| unreasonable-effectiveness-of-rule-based | 0 |
| mt5-a-massively-multilingual-pre-trained-text | 0.061 |
| Model 10 | 0.231 |
| Model 11 | 0.224 |
| Model 12 | 0.124 |
| Model 13 | 0.189 |
| Model 14 | 0 |
| unreasonable-effectiveness-of-rule-based | 0.147 |
| Model 16 | 0.32 |
| Model 17 | 0.267 |
| Model 18 | 0.235 |
| Model 19 | 0.191 |
| Model 20 | 0.339 |
| russiansuperglue-a-russian-language | 0.626 |
| russiansuperglue-a-russian-language | 0.06 |