
When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance

Nicolas Boizard Hippolyte Gisserot-Boukhlef Kevin El-Haddad Céline Hudelot Pierre Colombo


Abstract

Large Language Models (LLMs) with reasoning capabilities have achieved state-of-the-art performance on a wide range of tasks. Despite this empirical success, the tasks and model scales at which reasoning becomes effective, as well as its training and inference costs, remain underexplored. In this work, we rely on a synthetic data distillation framework to conduct a large-scale supervised study. We compare Instruction Fine-Tuning (IFT) and reasoning models of varying sizes on a wide range of math-centric and general-purpose tasks, evaluating both multiple-choice and open-ended formats. Our analysis reveals that reasoning consistently improves model performance, often matching or surpassing significantly larger IFT systems. Notably, while IFT remains Pareto-optimal in training and inference costs, reasoning models become increasingly valuable as model size scales, overcoming IFT performance limits on reasoning-intensive and open-ended tasks.
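The abstract does not specify implementation details, but the core contrast it draws is between two supervision formats derived from the same teacher: plain IFT targets (question, final answer) versus distilled targets that also include a reasoning trace. The following minimal Python sketch illustrates that distinction; the `teacher_reasoning_trace` stub and all names are hypothetical placeholders, not the authors' actual pipeline.

```python
# Hypothetical sketch of the two supervision formats compared in the study:
# plain IFT targets vs. distilled reasoning-trace targets.
# The teacher call below is a stub; in practice it would be a stronger LLM.

from dataclasses import dataclass


@dataclass
class Example:
    prompt: str
    target: str


def teacher_reasoning_trace(question: str, answer: str) -> str:
    """Stub for a teacher model producing a step-by-step rationale (assumption)."""
    return f"Let's think step by step about: {question}\n...\nTherefore, the answer is {answer}."


def build_ift_example(question: str, answer: str) -> Example:
    # IFT supervision: the target is only the final answer.
    return Example(prompt=question, target=answer)


def build_reasoning_example(question: str, answer: str) -> Example:
    # Reasoning supervision: the target includes the distilled trace plus the answer.
    return Example(prompt=question, target=teacher_reasoning_trace(question, answer))


if __name__ == "__main__":
    q, a = "What is 12 * 7?", "84"
    print(build_ift_example(q, a))        # short IFT target
    print(build_reasoning_example(q, a))  # longer reasoning target (higher training/inference cost)
```

The longer reasoning targets are what make the cost comparison non-trivial: they increase token counts at both training and inference time, which is why the paper weighs accuracy gains against those costs.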
