Data-Free Knowledge Distillation on QNLI
Evaluation Metric
Accuracy
Evaluation Results
Performance of each model on this benchmark:
Model Name | Accuracy | Paper Title | Repository |
---|---|---|---|
ProGen (T5-base) | 85.9 | ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback | - |
Prompt2Model (T5-base) | 62.2 | Prompt2Model: Generating Deployable Models from Natural Language Instructions | - |
ZeroGen (T5-base) | 88.5 | ZeroGen: Efficient Zero-shot Learning via Dataset Generation | - |
GOLD (T5-base) | 91.7 | GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | - |
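The reported metric is accuracy: the fraction of QNLI examples for which the student model's predicted label matches the gold label. Below is a minimal, illustrative sketch of that computation; the label strings and toy predictions are assumptions for demonstration, not outputs of any of the listed systems.

```python
from typing import Sequence


def accuracy(predictions: Sequence[str], references: Sequence[str]) -> float:
    """Fraction of examples where the predicted label matches the gold label."""
    assert len(predictions) == len(references), "prediction/reference length mismatch"
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)


# Toy example with QNLI-style labels ("entailment" / "not_entailment").
gold = ["entailment", "not_entailment", "entailment", "not_entailment"]
preds = ["entailment", "entailment", "entailment", "not_entailment"]
print(f"Accuracy: {accuracy(preds, gold):.1%}")  # Accuracy: 75.0%
```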