Data-Free Knowledge Distillation on QNLI
Metrics
Accuracy
Results
Performance results of various models on this benchmark
| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| ProGen (T5-base) | 85.9 | ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback | - |
| Prompt2Model (T5-base) | 62.2 | Prompt2Model: Generating Deployable Models from Natural Language Instructions | - |
| ZeroGen (T5-base) | 88.5 | ZeroGen: Efficient Zero-shot Learning via Dataset Generation | - |
| GOLD (T5-base) | 91.7 | GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | - |
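The accuracy values above are the fraction of QNLI examples for which the model predicts the correct label (entailment vs. not_entailment). A minimal sketch of the metric, with hypothetical prediction and gold lists for illustration:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    if not labels:
        raise ValueError("empty label list")
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical QNLI-style predictions and gold labels
preds = ["entailment", "not_entailment", "entailment", "entailment"]
gold = ["entailment", "not_entailment", "not_entailment", "entailment"]
print(round(accuracy(preds, gold) * 100, 1))  # → 75.0
```

Leaderboard numbers are conventionally reported as this percentage on the QNLI validation or test split.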