
LLM Personas as a Substitute for Field Experiments in Method Benchmarking

Enoch Hyunwook Kang

Abstract

Field experiments (A/B tests) are often the most credible benchmark for evaluating methods in societal systems, but their high cost and latency severely limit iterative method development. Persona simulation with pretrained language models (LLMs) offers a cheap synthetic alternative, but it is unclear whether replacing humans with personas preserves the benchmark interface against which adaptive methods are optimized. We establish an if-and-only-if characterization: when (i) methods observe only the aggregate outcome (aggregate-only observation) and (ii) evaluation depends only on the submitted artifact, not on the identity or provenance of the algorithm (algorithm-blind evaluation), replacing humans with personas amounts to a mere panel change from the method's point of view, indistinguishable from a change of evaluation population (for example, from New York to Jakarta). We then move from validity to practical relevance: we define an information-theoretic discriminability of the induced aggregate channel and show that making persona benchmarking as decision-relevant as a field experiment is fundamentally a sample-size question, leading to explicit bounds on the number of independent persona evaluations needed to reliably distinguish meaningfully different methods at a chosen resolution.

One-sentence Summary

The authors show that LLM-based persona simulation validly replaces human A/B testing under aggregate-only observation and algorithm-blind evaluation, proving the swap is indistinguishable from a change of evaluation population, and they define an information-theoretic discriminability showing that with enough persona samples a synthetic benchmark becomes as decision-relevant as a field experiment for reliably distinguishing methods at a chosen resolution.

Key Contributions

  • Field experiments (A/B tests) for societal systems are credible but costly and slow, creating a bottleneck for iterative development, while LLM-based persona simulations offer a cheap alternative whose validity as a drop-in benchmark substitute remains uncertain due to potential mismatches in the evaluation interface.
  • The paper proves that persona simulations become indistinguishable from a simple population panel change (e.g., New York to Jakarta) if and only if two conditions hold: methods observe only aggregate outcomes (aggregate-only observation) and evaluation depends solely on the submitted artifact, not the algorithm's origin (algorithm-blind evaluation).
  • It introduces an information-theoretic discriminability metric for the aggregate channel, showing that achieving decision-relevant persona benchmarking equivalent to field experiments requires sufficient independent persona evaluations, with explicit sample-size bounds derived to reliably distinguish meaningfully different methods at a specified resolution.

Introduction

Field experiments are the gold standard for benchmarking methods in societal systems like marketplace design or behavioral interventions, but their high cost and slow execution severely bottleneck iterative development. Prior attempts to use LLM-based persona simulations as cheaper alternatives face critical uncertainty: it remains unclear whether swapping humans for personas preserves the benchmark's core interface that methods optimize against, especially given evidence of confounding in causal applications where prompt manipulations inadvertently alter latent scenario aspects.

The authors prove that persona simulation becomes a theoretically valid drop-in substitute for field experiments if and only if two conditions hold: (i) methods observe only aggregate outcomes (not individual responses), and (ii) evaluation depends solely on the submitted artifact, not the algorithm's identity or provenance. Crucially, they extend this identification result to practical usefulness by defining an information-based measure of discriminability for the persona-induced evaluation channel. This yields explicit sample-size bounds—showing how many independent persona evaluations are required to reliably distinguish meaningful method differences at a target resolution—turning persona quality into a quantifiable budget question.

Method

The authors leverage a formal framework to model algorithm benchmarking as an interactive learning process, where an algorithm iteratively selects method configurations and receives feedback from an evaluator. This process is structured around three core components: the configuration space, the evaluation pipeline, and the feedback-driven adaptation mechanism.

At the heart of the method is the concept of a method configuration $\theta \in \Theta$, which encapsulates all controllable degrees of freedom (such as model weights, prompts, hyperparameters, decoding rules, or data curation policies) that define a system or procedure. Deploying $\theta$ yields an artifact $w(\theta) \in \mathcal{W}$, which is the object submitted to the benchmark for evaluation. The artifact space $\mathcal{W}$ is flexible, accommodating single outputs, stochastic distributions, interaction policies, or agent rollouts, depending on the task.
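As a concrete illustration, a minimal Python sketch of a configuration and its deployment map might look like the following; the field names (prompt_template, temperature, model_name) are hypothetical choices for a prompt-tuning setting, not part of the paper's formalism.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MethodConfig:
        # A point theta in the configuration space Theta: all controllable degrees of freedom.
        prompt_template: str      # e.g. instruction clauses for prompt tuning
        temperature: float        # decoding rule
        model_name: str           # identifier of the underlying model (hypothetical field)

    def deploy(theta: MethodConfig) -> str:
        # The deployment map theta -> w(theta). Here the artifact is just a rendered prompt,
        # but the artifact space W could equally hold distributions, policies, or rollouts.
        return f"[{theta.model_name} | T={theta.temperature}] {theta.prompt_template}"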

The evaluation process is modeled as a two-stage pipeline: micro-level judgments are first elicited and then aggregated into a single feedback signal. This pipeline is fully specified by a tuple $(P, I, \Gamma, L)$, where $P$ is a distribution over evaluators (human or LLM personas), $I(\cdot \mid w, p)$ is a micro-instrument that generates individual responses from an evaluator $p$ given artifact $w$, $\Gamma$ is a deterministic aggregation function mapping $L$ micro-responses to a single observable feedback $o \in \mathcal{O}$, and $L$ is the panel size. The entire evaluation call induces a Markov kernel $Q_{P,I}(\cdot \mid w)$ over $\mathcal{O}$, which represents the distribution of the aggregate feedback for artifact $w$.
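The pipeline can be sketched as a single evaluation call; the persona list, micro_instrument callable, and mean aggregator below are placeholder assumptions standing in for $P$, $I$, and $\Gamma$, not the paper's implementation.

    import random
    import statistics
    from typing import Callable, Sequence

    def evaluate(
        artifact: str,
        personas: Sequence[dict],                          # support of the evaluator distribution P
        micro_instrument: Callable[[str, dict], float],    # I(. | w, p): one micro-judgment
        panel_size: int,                                   # L
        aggregator: Callable[[Sequence[float]], float] = statistics.mean,  # Gamma
    ) -> float:
        # One benchmark call: draw an L-person panel i.i.d. from P, elicit micro-responses
        # via I, and return only the aggregate o ~ Q_{P,I}(. | w).
        panel = random.choices(personas, k=panel_size)
        micro = [micro_instrument(artifact, p) for p in panel]
        return aggregator(micro)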

The algorithm operates as an adaptive learner in a repeated “submit-observe” loop. At each round $t$, it selects a configuration $\theta_t$ (or equivalently, artifact $w_t$) based on a decision kernel $\pi_t(\cdot \mid H_{t-1}, S)$, where $H_{t-1}$ is the observable history of past submissions and feedback, and $S$ represents any side information available before benchmarking begins. The feedback $o_t$ received at round $t$ is drawn from $Q_{P,I}(\cdot \mid w_t)$, and the algorithm updates its strategy accordingly.
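A minimal sketch of this loop, assuming a hypothetical propose_config helper in place of the decision kernel $\pi_t$ and the evaluate and deploy sketches above:

    def run_benchmark_loop(propose_config, deploy, evaluate_aggregate, n_rounds: int, side_info=None):
        # Adaptive loop: at round t the algorithm sees only (w_1, o_1, ..., w_{t-1}, o_{t-1})
        # plus side information S, submits w_t, and receives aggregate feedback o_t.
        history = []                                       # observable history H_{t-1}
        for _ in range(n_rounds):
            theta_t = propose_config(history, side_info)   # draw from pi_t(. | H_{t-1}, S)
            w_t = deploy(theta_t)                          # artifact submitted to the benchmark
            o_t = evaluate_aggregate(w_t)                  # o_t ~ Q_{P,I}(. | w_t)
            history.append((w_t, o_t))                     # only (artifact, aggregate) is stored
        return history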

Two benchmark hygiene conditions are critical to ensure the integrity of this interface. The first, Aggregate-only observation (AO), mandates that the algorithm observes only the aggregate feedback $o_t$ and not any micro-level details such as panel identities or raw votes. The second, Algorithm-blind evaluation (AB), requires that the feedback distribution depends solely on the submitted artifact $w_t$ and not on the identity or provenance of the algorithm that produced it. Together, these conditions ensure that the evaluation behaves as a well-defined oracle channel, enabling the method to treat the benchmark as a stable environment.
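One way to read these conditions is as an interface contract; the sketch below is illustrative rather than the paper's code: the benchmark exposes only evaluate(artifact), so raw panel data cannot leak out (AO) and algorithm identity cannot leak in (AB).

    class OracleBenchmark:
        # AO: evaluate() returns only the aggregate o, never panel identities or raw votes.
        # AB: the signature takes the artifact alone, so feedback cannot depend on which
        # algorithm produced it or on any provenance metadata.

        def __init__(self, evaluate_aggregate):
            self._evaluate_aggregate = evaluate_aggregate   # wraps the (P, I, Gamma, L) pipeline

        def evaluate(self, artifact: str) -> float:
            return float(self._evaluate_aggregate(artifact))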

Under these conditions, swapping human evaluators for LLM personas is equivalent to a “just panel change” (JPC) from the method’s perspective: the interaction structure remains unchanged, and the only difference is in the induced artifact-to-feedback kernel $Q(\cdot \mid w)$. This equivalence is formalized through transcript laws that factorize into submission kernels and artifact-dependent feedback kernels, preserving the method’s information structure regardless of the evaluator type.
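Concretely, the transcript law can be written as follows (notation reconstructed from the definitions above, so the exact statement may differ from the paper's):

    $\mathbb{P}(w_{1:T}, o_{1:T} \mid S) \;=\; \prod_{t=1}^{T} \pi_t(w_t \mid H_{t-1}, S)\, Q_{P,I}(o_t \mid w_t), \qquad H_{t-1} = (w_1, o_1, \ldots, w_{t-1}, o_{t-1}).$

Replacing $Q_{\mathrm{hum}}$ with $Q_{\mathrm{pers}}$ changes only the second factor in each term, leaving the submission kernels, and hence the method's information structure, untouched.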

To assess the usefulness of such a benchmark, beyond its validity, the authors introduce the concept of discriminability $\kappa_Q$, defined as the infimum of Kullback-Leibler divergence between feedback distributions of artifacts that differ by at least a resolution threshold $r$ under a metric $d_{\mathcal{W}}$. Under a homoscedastic Gaussian assumption, this reduces to the worst-case pairwise signal-to-noise ratio (SNR), which is empirically estimable from repeated evaluations. The sample complexity for reliable pairwise comparisons scales inversely with $\kappa_Q$, requiring approximately $L \geq \frac{2}{\kappa_Q} \log \frac{1}{\delta}$ independent evaluations to achieve a misranking probability of at most $\delta$.
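A small numeric sketch of this bound under the stated Gaussian assumption; the mean gap and noise level below are made-up example values, not results from the paper.

    import math

    def kappa_gaussian(mean_gap: float, sigma: float) -> float:
        # Worst-case discriminability under the homoscedastic Gaussian model:
        # the KL divergence between two Gaussians with mean gap `mean_gap` and common
        # standard deviation `sigma` is (mean_gap^2) / (2 * sigma^2).
        return mean_gap ** 2 / (2.0 * sigma ** 2)

    def required_panel_size(kappa_q: float, delta: float) -> int:
        # Smallest integer L satisfying L >= (2 / kappa_Q) * log(1 / delta).
        return math.ceil((2.0 / kappa_q) * math.log(1.0 / delta))

    # Example: a mean feedback gap of 0.05 on a 0-1 scale with per-evaluation noise 0.25,
    # targeting misranking probability delta = 0.05.
    kappa = kappa_gaussian(mean_gap=0.05, sigma=0.25)      # = 0.02
    print(required_panel_size(kappa, delta=0.05))          # -> 300 independent persona evaluations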

The choice of $d_{\mathcal{W}}$ and $r$ is method-specific and should reflect the developer’s degrees of freedom and minimal meaningful iteration unit. For example, in prompt tuning, $d_{\mathcal{W}}$ may be Levenshtein distance over instruction clauses, and $r = 1$ corresponds to a single atomic edit. This operationalization allows practitioners to estimate $\kappa_Q$ from pilot runs and derive the required dataset size for stable method comparison.
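Assuming the same Gaussian model, $\kappa_Q$ can be estimated from a pilot in which each candidate artifact (all at least $r$ apart under $d_{\mathcal{W}}$, e.g. prompt variants one atomic edit apart) receives several repeated aggregate evaluations; pilot_scores and estimate_kappa below are hypothetical names for this workflow.

    import itertools
    import statistics

    def estimate_kappa(pilot_scores: dict[str, list[float]]) -> float:
        # Worst-case pairwise SNR over the piloted artifacts: min (mu_i - mu_j)^2 / (2 * sigma^2),
        # with sigma^2 pooled across artifacts. Requires >= 2 repeated evaluations per artifact
        # and assumes every pair of piloted artifacts differs by at least r under d_W.
        means = {w: statistics.mean(scores) for w, scores in pilot_scores.items()}
        pooled_var = statistics.mean(
            statistics.variance(scores) for scores in pilot_scores.values()
        )
        gaps = [
            (means[a] - means[b]) ** 2
            for a, b in itertools.combinations(pilot_scores, 2)
        ]
        return min(gaps) / (2.0 * pooled_var)

The resulting estimate can then be plugged into the bound above to decide how many persona evaluations a stable comparison at resolution $r$ would require.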

In summary, the framework provides a rigorous, modular structure for modeling adaptive benchmarking, grounded in information-theoretic principles and practical design guidelines. It enables systematic analysis of when persona-based evaluation is a valid and useful substitute for human judgment, while also quantifying the data requirements for reliable method optimization.

Experiment

  • Compared the human benchmark (human evaluators queried through a micro-instrument) with the persona benchmark (LLM judges conditioned on persona profiles)
  • Showed that both setups expose the same observable interface to the method, differing only in the induced artifact-to-feedback kernel (Q_hum vs. Q_pers)
  • Confirmed that, from the algorithm's perspective, aggregate feedback is handled identically whether it originates from human or persona panels
