LLM Personas as a Substitute for Field Experiments in Method Benchmarking
Enoch Hyunwook Kang
Abstract
Field experiments (A/B tests) are often the most credible benchmark for methods in societal systems, but their cost and latency create a major bottleneck for iterative method development. LLM-based persona simulation offers a cheap synthetic alternative, yet it is unclear whether replacing humans with personas preserves the benchmark interface that adaptive methods optimize against. We prove an if-and-only-if characterization: when (i) methods observe only the aggregate outcome (aggregate-only observation) and (ii) evaluation depends only on the submitted artifact and not on the algorithm's identity or provenance (algorithm-blind evaluation), swapping humans for personas is a “just panel change” from the method's point of view, indistinguishable from changing the evaluation population (e.g., New York to Jakarta). Furthermore, we move from validity to usefulness: we define an information-theoretic discriminability of the induced aggregate channel and show that making persona benchmarking as decision-relevant as a field experiment is fundamentally a sample-size question, yielding explicit bounds on the number of independent persona evaluations required to reliably distinguish meaningfully different methods at a chosen resolution.
One-sentence Summary
The authors prove that, under aggregate-only observation and algorithm-blind evaluation, replacing human A/B-test participants with LLM personas is indistinguishable from changing the evaluation population, and they introduce an information-theoretic discriminability measure showing that, with enough independent persona evaluations, synthetic benchmarks can be as decision-relevant as field experiments for distinguishing methods at a desired resolution.
Key Contributions
- Field experiments (A/B tests) for societal systems are credible but costly and slow, creating a bottleneck for iterative development, while LLM-based persona simulations offer a cheap alternative whose validity as a drop-in benchmark substitute remains uncertain due to potential mismatches in the evaluation interface.
- The paper proves that persona simulations become indistinguishable from a simple population panel change (e.g., New York to Jakarta) if and only if two conditions hold: methods observe only aggregate outcomes (aggregate-only observation) and evaluation depends solely on the submitted artifact, not the algorithm's origin (algorithm-blind evaluation).
- It introduces an information-theoretic discriminability metric for the aggregate channel, showing that achieving decision-relevant persona benchmarking equivalent to field experiments requires sufficient independent persona evaluations, with explicit sample-size bounds derived to reliably distinguish meaningfully different methods at a specified resolution.
Introduction
Field experiments are the gold standard for benchmarking methods in societal systems like marketplace design or behavioral interventions, but their high cost and slow execution severely bottleneck iterative development. Prior attempts to use LLM-based persona simulations as cheaper alternatives face critical uncertainty: it remains unclear whether swapping humans for personas preserves the benchmark's core interface that methods optimize against, especially given evidence of confounding in causal applications where prompt manipulations inadvertently alter latent scenario aspects.
The authors prove that persona simulation becomes a theoretically valid drop-in substitute for field experiments if and only if two conditions hold: (i) methods observe only aggregate outcomes (not individual responses), and (ii) evaluation depends solely on the submitted artifact, not the algorithm's identity or provenance. Crucially, they extend this identification result to practical usefulness by defining an information-based measure of discriminability for the persona-induced evaluation channel. This yields explicit sample-size bounds—showing how many independent persona evaluations are required to reliably distinguish meaningful method differences at a target resolution—turning persona quality into a quantifiable budget question.
Method
The authors leverage a formal framework to model algorithm benchmarking as an interactive learning process, where an algorithm iteratively selects method configurations and receives feedback from an evaluator. This process is structured around three core components: the configuration space, the evaluation pipeline, and the feedback-driven adaptation mechanism.
At the heart of the method is the concept of a method configuration θ∈Θ, which encapsulates all controllable degrees of freedom—such as model weights, prompts, hyperparameters, decoding rules, or data curation policies—that define a system or procedure. Deploying θ yields an artifact w(θ)∈W, which is the object submitted to the benchmark for evaluation. The artifact space W is flexible, accommodating single outputs, stochastic distributions, interaction policies, or agent rollouts, depending on the task.
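As a minimal sketch of these two objects (hypothetical names, not code from the paper), a configuration and its deployment into an artifact might look like the following, assuming a prompt-tuning setting:

```python
# Sketch only: θ ∈ Θ as a frozen dataclass and w(θ) ∈ W as a rendered prompt.
from dataclasses import dataclass

@dataclass(frozen=True)
class MethodConfig:
    """θ: all controllable degrees of freedom (here, a prompt plus decoding settings)."""
    system_prompt: str
    temperature: float
    max_tokens: int

def deploy(theta: MethodConfig) -> str:
    """w(θ): here the artifact is a rendered prompt, but W could equally hold
    output distributions, interaction policies, or agent rollouts."""
    return (
        f"{theta.system_prompt}\n"
        f"[decoding: temperature={theta.temperature}, max_tokens={theta.max_tokens}]"
    )
```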
The evaluation process is modeled as a two-stage pipeline: micro-level judgments are first elicited and then aggregated into a single feedback signal. This pipeline is fully specified by a tuple (P, I, Γ, L), where P is a distribution over evaluators (human or LLM personas), I(⋅∣w,p) is a micro-instrument that generates individual responses from an evaluator p given artifact w, Γ is a deterministic aggregation function mapping L micro-responses to a single observable feedback o∈O, and L is the panel size. The entire evaluation call induces a Markov kernel Q_{P,I}(⋅∣w) over O, which represents the distribution of the aggregate feedback for artifact w.
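A minimal sketch of this pipeline, assuming scalar micro-responses and a mean aggregator (both assumptions, as are the names evaluate, sample_persona, and micro_instrument):

```python
# Sketch only: one benchmark call draws a panel of L personas from P, elicits a
# micro-response per persona via I, and aggregates with Γ into a single o.
import random
from typing import Callable, Sequence

Persona = dict   # p ~ P: a persona profile (illustrative representation)
Artifact = str   # w ∈ W

def evaluate(
    w: Artifact,
    sample_persona: Callable[[], Persona],                    # draws p ~ P
    micro_instrument: Callable[[Artifact, Persona], float],   # I(⋅∣w, p)
    aggregate: Callable[[Sequence[float]], float],            # Γ
    panel_size: int,                                          # L
) -> float:
    """Returns one draw o ~ Q_{P,I}(⋅∣w); panel identities and raw votes stay internal."""
    panel = [sample_persona() for _ in range(panel_size)]
    micro_responses = [micro_instrument(w, p) for p in panel]
    return aggregate(micro_responses)

# Toy wiring (the instruments below are placeholders, not the paper's):
o = evaluate(
    w="Draft announcement v2",
    sample_persona=lambda: {"age": random.randint(18, 70)},
    micro_instrument=lambda w, p: random.gauss(0.01 * len(w), 1.0),
    aggregate=lambda xs: sum(xs) / len(xs),
    panel_size=50,
)
```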
The algorithm operates as an adaptive learner in a repeated “submit-observe” loop. At each round t, it selects a configuration θ_t (or equivalently, artifact w_t) based on a decision kernel π_t(⋅∣H_{t−1},S), where H_{t−1} is the observable history of past submissions and feedback, and S represents any side information available before benchmarking begins. The feedback o_t received at round t is drawn from Q_{P,I}(⋅∣w_t), and the algorithm updates its strategy accordingly.
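The loop itself can be sketched as below, reusing the hypothetical evaluate call above; select_config stands in for the decision kernel π_t and is an assumption, not the paper's algorithm:

```python
# Sketch only: the repeated submit-observe loop under aggregate-only observation.
def run_benchmark_loop(select_config, deploy, evaluate_artifact, rounds, side_info=None):
    """select_config: (H_{t-1}, S) -> θ_t;  deploy: θ_t -> w_t;
    evaluate_artifact: w_t -> o_t ~ Q_{P,I}(⋅∣w_t)."""
    history = []                                     # H_{t-1}: past (w_s, o_s) pairs only
    for t in range(rounds):
        theta_t = select_config(history, side_info)  # π_t(⋅∣H_{t-1}, S)
        w_t = deploy(theta_t)                        # artifact submitted this round
        o_t = evaluate_artifact(w_t)                 # aggregate feedback, nothing more
        history.append((w_t, o_t))
    return history
```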
Two benchmark hygiene conditions are critical to ensure the integrity of this interface. The first, Aggregate-only observation (AO), mandates that the algorithm observes only the aggregate feedback o_t and not any micro-level details such as panel identities or raw votes. The second, Algorithm-blind evaluation (AB), requires that the feedback distribution depends solely on the submitted artifact w_t and not on the identity or provenance of the algorithm that produced it. Together, these conditions ensure that the evaluation behaves as a well-defined oracle channel, enabling the method to treat the benchmark as a stable environment.
Under these conditions, swapping human evaluators for LLM personas is equivalent to a “just panel change” (JPC) from the method’s perspective: the interaction structure remains unchanged, and the only difference is in the induced artifact-to-feedback kernel Q(⋅∣w). This equivalence is formalized through transcript laws that factorize into submission kernels and artifact-dependent feedback kernels, preserving the method’s information structure regardless of the evaluator type.
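Written out under the notation of this section (a sketch consistent with the definitions above, not the paper's verbatim statement), the factorization is:

```latex
% Transcript-law factorization under AO and AB; H_T = (w_1, o_1, \dots, w_T, o_T)
% is the observable transcript and S the pre-benchmark side information.
\Pr(H_T \mid S)
  \;=\; \prod_{t=1}^{T}
        \underbrace{\pi_t(w_t \mid H_{t-1}, S)}_{\text{submission kernel}}
        \,\underbrace{Q(o_t \mid w_t)}_{\text{feedback kernel}}
% Swapping human evaluators for personas replaces only Q_{\mathrm{hum}}(\cdot\mid w)
% with Q_{\mathrm{pers}}(\cdot\mid w); the factorization, and hence the method's
% information structure, is unchanged (a "just panel change").
```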
To assess the usefulness of such a benchmark—beyond its validity—the authors introduce the concept of discriminability κ_Q, defined as the infimum of Kullback-Leibler divergence between feedback distributions of artifacts that differ by at least a resolution threshold r under a metric d_W. Under a homoscedastic Gaussian assumption, this reduces to the worst-case pairwise signal-to-noise ratio (SNR), which is empirically estimable from repeated evaluations. The sample complexity for reliable pairwise comparisons scales inversely with κ_Q, requiring approximately L ≥ (2/κ_Q) log(1/δ) independent evaluations to achieve a misranking probability of at most δ.
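A sketch of how κ_Q and the resulting evaluation budget could be computed under the homoscedastic Gaussian reading, in which the KL divergence between two artifacts' feedback distributions is (μ_i − μ_j)² / (2σ²); all helper names and the toy numbers below are assumptions:

```python
# Sketch only: worst-case pairwise KL over artifact pairs at resolution ≥ r,
# and the corresponding budget L ≥ (2/κ_Q) log(1/δ) for misranking probability ≤ δ.
import math
from itertools import combinations

def discriminability(means, sigma, dists, r):
    """means[i]: estimated μ_i for artifact w_i; dists[(i, j)]: d_W(w_i, w_j);
    sigma: common feedback std under the homoscedastic Gaussian assumption."""
    kls = [
        (means[i] - means[j]) ** 2 / (2 * sigma ** 2)
        for i, j in combinations(range(len(means)), 2)
        if dists[(i, j)] >= r
    ]
    return min(kls)

def required_evaluations(kappa, delta):
    """Independent persona evaluations needed for misranking probability at most δ."""
    return math.ceil((2 / kappa) * math.log(1 / delta))

# Toy usage: three candidate prompts with pairwise clause-edit distances.
means = [0.62, 0.55, 0.70]
dists = {(0, 1): 2, (0, 2): 1, (1, 2): 3}
kappa = discriminability(means, sigma=0.15, dists=dists, r=1)
print(kappa, required_evaluations(kappa, delta=0.05))
```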
The choice of d_W and r is method-specific and should reflect the developer’s degrees of freedom and minimal meaningful iteration unit. For example, in prompt tuning, d_W may be Levenshtein distance over instruction clauses, and r=1 corresponds to a single atomic edit. This operationalization allows practitioners to estimate κ_Q from pilot runs and derive the required dataset size for stable method comparison.
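A pilot-run sketch of this recipe (hypothetical helpers; the clause segmentation and the evaluation call are assumed to exist elsewhere): estimate per-prompt means and a pooled within-prompt noise level from repeated evaluations, with a clause-level Levenshtein distance serving as d_W at r = 1.

```python
# Sketch only: pilot estimation of the inputs to κ_Q for prompt tuning.
import math
import statistics

def clause_edit_distance(a_clauses, b_clauses):
    """Levenshtein distance over lists of instruction clauses (d_W; r = 1 is one atomic edit)."""
    m, n = len(a_clauses), len(b_clauses)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a_clauses[i - 1] == b_clauses[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete a clause
                           dp[i][j - 1] + 1,        # insert a clause
                           dp[i - 1][j - 1] + cost) # keep or substitute a clause
    return dp[m][n]

def pilot_estimates(prompts, evaluate_artifact, repeats=30):
    """Per-prompt sample means and a pooled within-prompt standard deviation."""
    samples = {w: [evaluate_artifact(w) for _ in range(repeats)] for w in prompts}
    means = [statistics.mean(obs) for obs in samples.values()]
    sigma = math.sqrt(statistics.mean(statistics.pvariance(obs) for obs in samples.values()))
    return means, sigma
```

The resulting means and σ can then be fed into a κ_Q estimate such as the discriminability sketch above to size the persona panel before a full comparison run.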
In summary, the framework provides a rigorous, modular structure for modeling adaptive benchmarking, grounded in information-theoretic principles and practical design guidelines. It enables systematic analysis of when persona-based evaluation is a valid and useful substitute for human judgment, while also quantifying the data requirements for reliable method optimization.
Experiment
- Compared a human benchmark (human evaluators queried through the micro-instrument) with a persona benchmark (LLM judges conditioned on persona profiles) under the same evaluation pipeline
- Showed that both setups induce artifact-to-feedback kernels of the same form (Q_hum and Q_pers), so the method faces an identical observable interface in either case
- Confirmed that the algorithm processes the aggregate feedback in the same way regardless of whether it comes from humans or personas, consistent with the just-panel-change equivalence