HyperAI

Natural Language Inference on MultiNLI Dev

Metrics

Matched
Mismatched

Results

Performance results of various models on this benchmark, reported as accuracy (%) on the matched and mismatched development sets.

Comparison Table
| Model Name | Matched | Mismatched |
| --- | --- | --- |
| prune-once-for-all-sparse-pre-trained | 78.8 | 80.4 |
| prune-once-for-all-sparse-pre-trained | 81.4 | 82.51 |
| prune-once-for-all-sparse-pre-trained | 82.71 | 83.67 |
| prune-once-for-all-sparse-pre-trained | 80.68 | 81.47 |
| prune-once-for-all-sparse-pre-trained | 80.66 | 81.14 |
| prune-once-for-all-sparse-pre-trained | 83.74 | 84.2 |
| prune-once-for-all-sparse-pre-trained | 81.45 | 82.43 |
| 190910351 | 84.5 | 84.5 |
| prune-once-for-all-sparse-pre-trained | 83.47 | 84.08 |
| prune-once-for-all-sparse-pre-trained | 81.35 | 82.03 |