How can we use the novel capabilities of large language models (LLMs) in empirical research? And how can we do so while accounting for their limitations, which are themselves only poorly understood? We develop an econometric framework to answer these questions, one that distinguishes between two types of empirical tasks. Using LLMs for prediction problems (including hypothesis generation) is valid under one condition: there is no “leakage” between the LLM’s training dataset and the researcher’s sample. Leakage can be ruled out by using open-source LLMs with documented training data and published weights. Using LLM outputs for estimation problems, that is, to automate the measurement of some economic concept (expressed either in text or elicited from human subjects), requires the researcher to collect at least some validation data: without such data, the errors in the LLM’s automated measurements cannot be assessed and accounted for. As long as these steps are taken, LLM outputs can be used in empirical research with the familiar econometric guarantees we desire. In two illustrative applications, to finance and to political economy, we find that these requirements are stringent; when they are violated, the limitations of LLMs produce unreliable empirical estimates. Our results suggest that the excitement around the empirical uses of LLMs is warranted: they allow researchers to make effective use of even small amounts of language data, for both prediction and estimation, but only with these safeguards in place.
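
To make concrete why validation data matters for estimation, here is a minimal Python sketch of the problem the abstract describes. An LLM labels a large corpus for some binary economic concept; treating those labels as truth biases the estimated prevalence, while a small human-validated subsample lets the researcher estimate and subtract the LLM’s average error. The corpus size, error rates, and the particular debiased estimator (in the spirit of prediction-powered inference) are our illustrative assumptions, not the paper’s exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: we want the population share of documents expressing
# some economic concept (true labels y), but only LLM labels yhat are cheap.
N = 10_000        # full corpus, LLM-labeled
n_val = 500       # small human-validated subsample
p_true = 0.30     # true prevalence (unknown in practice)

y = rng.binomial(1, p_true, N)          # ground truth (mostly unobserved)

# Assume the LLM mislabels with asymmetric, unknown error rates.
flip1, flip0 = 0.10, 0.20               # P(yhat=0 | y=1), P(yhat=1 | y=0)
yhat = np.where(y == 1,
                rng.binomial(1, 1 - flip1, N),
                rng.binomial(1, flip0, N))

# Naive estimate: treat LLM labels as truth -> systematically biased.
naive = yhat.mean()

# Correction: on a random validated subsample, estimate the LLM's average
# error and subtract it. Without any validated labels, this bias is
# unidentifiable -- the abstract's point about estimation problems.
val = rng.choice(N, n_val, replace=False)
debiased = yhat.mean() - (yhat[val] - y[val]).mean()

print(f"truth {p_true:.3f} | naive {naive:.3f} | debiased {debiased:.3f}")
```

With these assumed error rates the naive estimate lands near 0.41 rather than 0.30; the debiased estimate recovers the truth up to sampling noise in the 500 validated labels. The same logic extends to regression coefficients built from LLM-generated variables.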
