Data-Free Knowledge Distillation on QNLI
Evaluation Metric
Accuracy
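
Accuracy here is the fraction of QNLI examples whose entailment label is predicted correctly. Below is a minimal sketch of how this metric can be computed from predicted and gold labels; the `accuracy` helper and the example labels are illustrative and not taken from any of the listed papers.

```python
from typing import Sequence

def accuracy(predictions: Sequence[str], gold_labels: Sequence[str]) -> float:
    """Fraction of examples where the predicted QNLI label matches the gold label."""
    if len(predictions) != len(gold_labels):
        raise ValueError("predictions and gold labels must have the same length")
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# QNLI labels are "entailment" / "not_entailment".
preds = ["entailment", "not_entailment", "entailment"]
golds = ["entailment", "entailment", "entailment"]
print(f"Accuracy: {accuracy(preds, golds):.3f}")  # -> 0.667
```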
Evaluation Results
Performance of each model on this benchmark
| Model | Accuracy | Paper Title |
|---|---|---|
| GOLD (T5-base) | 91.7 | GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation |
| ZeroGen (T5-base) | 88.5 | ZeroGen: Efficient Zero-shot Learning via Dataset Generation |
| ProGen (T5-base) | 85.9 | ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback |
| Prompt2Model (T5-base) | 62.2 | Prompt2Model: Generating Deployable Models from Natural Language Instructions |