Data-Free Knowledge Distillation on QNLI
Evaluation Metric
Accuracy
Evaluation Results
Performance of each model on this benchmark
Model | Accuracy | Paper Title | Repository
---|---|---|---
ProGen (T5-base) | 85.9 | ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback | -
Prompt2Model (T5-base) | 62.2 | Prompt2Model: Generating Deployable Models from Natural Language Instructions | -
ZeroGen (T5-base) | 88.5 | ZeroGen: Efficient Zero-shot Learning via Dataset Generation | -
GOLD (T5-base) | 91.7 | GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | -
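For reference, accuracy on this benchmark is the fraction of QNLI question–sentence pairs for which the student model predicts the correct entailment label. The sketch below is not the evaluation script of any listed paper; it only illustrates how a distilled T5-base student could be scored on the GLUE QNLI validation split in the text-to-text style, with `path/to/student-checkpoint` as a hypothetical placeholder.

```python
# Minimal sketch, assuming a T5-base student fine-tuned to emit the QNLI
# label words ("entailment" / "not_entailment") in text-to-text form.
from datasets import load_dataset
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model_name = "path/to/student-checkpoint"  # hypothetical distilled T5-base
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

qnli = load_dataset("glue", "qnli", split="validation")
label_names = ["entailment", "not_entailment"]  # GLUE QNLI label order: 0, 1

correct = 0
for ex in qnli:
    # Text-to-text prompt in the style of the original T5 GLUE recipes.
    prompt = f"qnli question: {ex['question']} sentence: {ex['sentence']}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    pred_ids = model.generate(**inputs, max_new_tokens=5)
    pred = tokenizer.decode(pred_ids[0], skip_special_tokens=True).strip()
    correct += int(pred == label_names[ex["label"]])

print(f"QNLI accuracy: {correct / len(qnli):.3f}")
```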