Data-Free Knowledge Distillation on SQuAD
Metrics
Exact Match
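Exact Match (EM) counts a prediction as correct only if, after normalization, it string-matches one of the gold answers. Below is a minimal sketch of SQuAD-style EM scoring; the normalization steps (lowercasing, stripping punctuation, articles, and extra whitespace) follow the official SQuAD evaluation convention, but the function names are illustrative.

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Lowercase, drop punctuation, remove articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list[str]) -> int:
    """Return 1 if the normalized prediction equals any normalized gold answer."""
    pred = normalize_answer(prediction)
    return int(any(pred == normalize_answer(g) for g in gold_answers))

print(exact_match("The Eiffel Tower", ["Eiffel Tower", "eiffel tower."]))  # 1
print(exact_match("Paris", ["London"]))  # 0
```

The benchmark's reported EM is this per-example score averaged over the evaluation set, expressed as a percentage.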
Results
Performance results of various models on this benchmark
| Model name | Exact Match | Paper Title | Repository |
|---|---|---|---|
| GOLD (T5-base) | 75.2 | GOLD: Generalized Knowledge Distillation via Out-of-Distribution-Guided Language Data Generation | |
| Prompt2Model (T5-base) | 74.4 | Prompt2Model: Generating Deployable Models from Natural Language Instructions | |
| ProGen (T5-base) | 68.1 | ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback | - |
| ZeroGen (T5-base) | 69.4 | ZeroGen: Efficient Zero-shot Learning via Dataset Generation | |