HyperAI

Antidistillation Sampling

Yash Savani, Asher Trockman, Zhili Feng, Avi Schwarzschild, Alexander Robey, Marc Finzi, J. Zico Kolter
Publication date: 4/18/2025
Abstract

Frontier models that generate extended reasoning traces inadvertently produce rich token sequences that can facilitate model distillation. Recognizing this vulnerability, model owners may seek sampling strategies that limit the effectiveness of distillation without compromising model performance. Antidistillation sampling provides exactly this capability. By strategically modifying a model's next-token probability distribution, antidistillation sampling poisons reasoning traces, rendering them significantly less effective for distillation while preserving the model's practical utility. For further details, see https://antidistillation.com.
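To make the core idea concrete, below is a minimal, illustrative sketch of sampling from a modified next-token distribution. This is not the paper's actual algorithm; the per-token `penalty` score is a hypothetical stand-in for whatever signal the method uses to estimate how much a token would help a student model distill from the trace, and the `strength` knob is an assumed utility-versus-resistance trade-off. See https://antidistillation.com for the real procedure.

```python
# A hedged sketch, assuming a hypothetical per-token "distillation benefit"
# score is available. The real antidistillation sampling method computes its
# adjustment differently; this only illustrates the general shape of
# "strategically modifying a model's next-token probability distribution."
import numpy as np

def adjusted_sample(logits: np.ndarray, penalty: np.ndarray,
                    strength: float = 1.0, temperature: float = 1.0,
                    rng: np.random.Generator | None = None) -> int:
    """Sample a token id from a penalized next-token distribution.

    logits   : teacher model's next-token logits, shape (vocab_size,)
    penalty  : hypothetical per-token score estimating how much each token
               would aid a distilling student, shape (vocab_size,)
    strength : how aggressively to trade utility for distillation resistance
    """
    rng = rng or np.random.default_rng()
    # Shift probability mass away from tokens that (by assumption)
    # would make the sampled trace useful for distillation.
    adjusted = (logits - strength * penalty) / temperature
    # Softmax with max-subtraction for numerical stability.
    adjusted -= adjusted.max()
    probs = np.exp(adjusted)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy example: a 5-token vocabulary where token 1 is assumed to be
# especially distillation-friendly, so it gets down-weighted.
logits = np.array([2.0, 1.5, 0.3, -0.5, -1.0])
penalty = np.array([0.0, 3.0, 0.0, 0.0, 0.0])
print(adjusted_sample(logits, penalty, strength=2.0))
```

With `strength = 0` this reduces to ordinary temperature sampling, which reflects the design goal stated in the abstract: the intervention should degrade a trace's value for distillation while leaving the model's practical utility largely intact.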