Key Takeaways
  • The COVID-19 pandemic reinforced the value of surveys to gather data quickly to inform policymaking.
  • Surveys, though, are replete with biases that researchers try to address to ensure that their results reflect reality. However, one form of bias—nonresponse bias—is often dismissed or marginalized.
  • This work reveals the significance of nonresponse bias and offers methodological improvements in survey design to account for its effects and, thus, to develop surveys that come closer to the truth.

The problem was that real-time data were almost nonexistent; it would be months before reliable statistics became available. Still, policymakers had to act. So, rather than make policy in the dark, officials reached for surveys, the only real-time source that could cast at least some light on dimly understood phenomena. Economists and other researchers filled the data gap by conducting all manner of surveys about household and business activity. (See Figure 1 for trends over time.)

How accurate, though, are such surveys? What biases lurk in the contact lists from which researchers draw individuals, and among the respondents themselves? Even if researchers have a contact list that represents the population, what about all the people who never respond? What if the respondents are unrepresentative of real activity or otherwise skew the results? These questions apply to all surveys, not just COVID-related studies, and it is the last one—the impact of nonresponse bias—that motivates “Selection in Surveys,” a working paper that offers new methodological insights into survey design and provides a roadmap for researchers to get closer to the truth.

“Survey says!”

Why does this matter? Readers are certainly familiar with the population survey conducted every 10 years by the US Census Bureau, which determines, among other things, congressional apportionment among the states. They are likely less aware that the Census Bureau also conducts more than 100 annual surveys of households and businesses.1 One of those surveys, the Household Pulse Survey, was developed in response to COVID-19 to collect and disseminate data “in near real-time to inform federal and state response and recovery planning.”2 Another, the American Community Survey, is the country’s primary source for detailed population and housing data. Together, these surveys informed the distribution of more than $675 billion in funds during fiscal year 2015, according to a 2017 Census Bureau analysis.3

With so much money on the line, and with programs across the nation dependent on the quality of survey data, it is important to get surveys right—or as right as possible. Researchers typically take care to draw a representative sample of individuals to invite to a survey. Far less attention, however, goes to who among those invited actually chooses to participate. This makes nonresponse bias an overlooked danger: researchers often assume it does not exist or, if it does, that it can be corrected by reweighting responses to bring them more in line with the population.


However, these assumptions and conventional practices raise several questions. Does nonresponse bias affect the conclusions drawn from survey data? If so, what causes such biases to occur? Are these effects caused by observed or unobserved differences between participants and nonparticipants? Further, can surveys be designed differently to facilitate detection and correction of these differences?

To answer these and other questions, the authors employ the Norway in Corona Times (NCT) survey, conducted by Norway’s national statistical agency to study the immediate labor market consequences of the COVID-19 lockdown that began in March 2020. The survey has three features that make it attractive for analyzing survey participation and nonresponse bias: a random sample drawn from the entire adult population; randomly assigned financial incentives for participation; and survey responses that were merged with administrative data from government agencies. Together, these features allow the authors to quantify selective participation in the survey, the magnitude of nonresponse bias, and the performance of methods intended to correct for it.

Regarding the third feature of the NCT, the authors use the linked survey-administrative data to examine nonresponse bias. They draw three broad conclusions: 

  1. In the administrative data, the labor market outcomes of those who participated in the NCT survey differ substantially from the outcomes of those who did not. Had these outcomes been responses to survey questions (as they often are), the survey would have exhibited large nonresponse bias, and correcting for differences in a rich set of observables would have done little to reduce it. (A stylized sketch of this comparison appears after this list.)
  2. Using the randomized incentives to conduct the same comparison within each incentive group in the NCT survey, the authors find that trying to mitigate nonresponse bias by increasing incentives to participate can backfire: even though participation rates increase with incentives, nonresponse bias does too. 
  3. There are also large differences between incentive groups in their responses to NCT survey questions, which persist after adjusting for observables, consistent with the finding in the administrative data that differences between participants and nonparticipants are primarily due to unobservable factors.
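
To make the comparisons behind the first two findings concrete, the following is a minimal sketch (not the authors’ code) of how nonresponse bias can be quantified when every invited person can be linked to an administrative outcome. The column names and the synthetic data are purely illustrative stand-ins for the linked Norwegian records, which cannot be reproduced here.

```python
import numpy as np
import pandas as pd

def nonresponse_bias(group: pd.DataFrame, outcome: str = "admin_outcome") -> pd.Series:
    """Bias = mean outcome among participants minus mean outcome among all
    invitees; computable here only because the outcome comes from linked
    administrative records rather than from the survey itself."""
    all_mean = group[outcome].mean()
    part_mean = group.loc[group["participated"], outcome].mean()
    return pd.Series({"participant_mean": part_mean,
                      "invitee_mean": all_mean,
                      "bias": part_mean - all_mean})

# Illustrative stand-in for the linked survey-administrative data: one row per
# invited person, with a randomized incentive arm, a participation indicator,
# and an administrative labor market outcome (here, an employment dummy).
rng = np.random.default_rng(0)
n = 10_000
incentive = rng.choice([0, 100, 300], size=n)   # hypothetical incentive arms
employed = rng.binomial(1, 0.7, size=n)
# Participation is assumed, for illustration, to rise with both the incentive
# and employment status, which is what generates nonresponse bias below.
p_participate = 0.15 + 0.001 * incentive + 0.10 * employed
participated = rng.binomial(1, p_participate, size=n).astype(bool)

df = pd.DataFrame({"incentive_arm": incentive,
                   "participated": participated,
                   "admin_outcome": employed})

print(nonresponse_bias(df))                                # finding 1: overall bias
print(df.groupby("incentive_arm").apply(nonresponse_bias))  # finding 2: bias by arm
```

Because the outcome comes from administrative records, the bias is directly observable for this variable even among nonparticipants; for ordinary survey questions it is not, which is why the randomized incentive arms in the second comparison are so valuable as a diagnostic.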

The authors’ analysis of existing survey techniques, along with a theoretical analysis based on a novel model, yields four contributions, summarized below. Readers inclined toward a deep discussion of the authors’ innovative methodology will find plenty to engage them in the full paper. For the purposes of this Brief, though, let’s sketch a simple picture of their ideas at work. Imagine that you conduct a survey with a sample that is representative of the whole population, and that you randomly offer different levels of a financial incentive to participate, say $0, $5, or $10. At each incentive level, some people will not respond because the incentive is not high enough for them. Raising the incentive draws in people who would otherwise not have participated. Depending on whether these marginal respondents make the pool of respondents more or less similar to the population, higher incentives may either reduce or increase nonresponse bias.
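
To see how this can cut either way, consider the following minimal simulation, which is purely illustrative and not drawn from the paper. Each person is assumed to respond only if the offered incentive exceeds a personal reservation price, and that price is assumed to be higher for higher earners, so the people whom money pulls into the sample differ systematically on the outcome being measured.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Quantity the survey is trying to measure (say, weekly earnings in dollars).
outcome = rng.lognormal(mean=6.5, sigma=0.5, size=n)
true_mean = outcome.mean()

# Two assumed response channels (illustrative only, not taken from the paper):
#  - "intrinsic" responders: a random 20% who answer regardless of payment,
#    and are therefore representative of the population;
#  - everyone else answers only if the offered incentive exceeds a personal
#    reservation price, assumed higher for higher earners.
intrinsic = rng.random(n) < 0.20
reservation_price = 5 + outcome / 100 + rng.normal(0, 2, size=n)

for incentive in [0, 5, 10]:
    responds = intrinsic | (incentive > reservation_price)
    rate = responds.mean()
    bias = outcome[responds].mean() - true_mean
    print(f"incentive=${incentive:>2}: response rate {rate:5.1%}, "
          f"bias in estimated mean earnings {bias:+8.2f}")
```

Under these assumptions the response rate and the bias rise together, the backfire pattern in the authors’ second finding; reverse the assumed correlation between earnings and the reservation price, and higher incentives would instead shrink the bias.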

Further, there is another layer of nonresponse bias that researchers rarely, if ever, consider: nonrespondents who are never aware of the survey in the first place. They may, for example, never see the email invitation or answer the phone. If this group of nonrespondents is large enough, a key part of the population may be missing from the data.

The authors’ model is unique in that it handles both forms of nonresponse—people who decline to participate and people who are unaware of the survey—by exploiting randomization in financial incentives and by modeling the likely responses of unaware nonrespondents. It is useful to note that randomizing financial incentives does not necessarily make a survey more expensive to administer; randomization simply assigns the incentive budget that would have been spent anyway at random. Here, then, are this work’s main contributions:

  • This research shows that financial incentives can not only increase participation rates but also be used to test and correct for nonresponse bias arising from unobserved differences between participants and nonparticipants. The key is randomization in the assignment of incentives.
  • What matters for nonresponse bias is not participation rates, but who participates. Counter to common guidance on survey design, nonresponse bias may well increase with participation rates.
  • The authors’ findings in the NCT survey illustrate the type of situation in which researchers should consider methods that correct for nonresponse bias due to selection on unobservable characteristics. The authors implement several such methods and show that some widely used reweighting methods, intended to correct for selection on observables, can actually exacerbate nonresponse bias by amplifying unobservable differences. (A minimal sketch of such reweighting follows this list.)
  • The final contribution is methodological and points the way forward for researchers to design better surveys that test and account for unobservables, and to develop models that account for unobserved heterogeneity in all its forms.
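
For readers unfamiliar with the reweighting methods mentioned in the third bullet, the following is a minimal sketch of standard inverse-probability weighting on observables, with made-up variable names and simulated data. It reweights respondents so that their observable mix matches the full invited sample, but by construction it cannot repair selection on unobservables, which is the component the authors show can remain, or even grow, after such adjustments.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_adjusted_mean(df: pd.DataFrame, outcome: str, covariates: list) -> float:
    """Reweight respondents by the inverse of their estimated participation
    probability, where that probability is modeled from observables only."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["participated"])
    p_hat = model.predict_proba(df[covariates])[:, 1]
    resp = df["participated"].to_numpy()
    return float(np.average(df.loc[resp, outcome], weights=1.0 / p_hat[resp]))

# Simulated stand-in data: participation depends on an observable (age) and on
# an unobserved trait that also drives the outcome; the analyst sees only age.
rng = np.random.default_rng(2)
n = 20_000
age = rng.integers(18, 80, size=n)
unobserved = rng.normal(size=n)                  # never available to the analyst
outcome = 400 + 5 * age + 100 * unobserved + rng.normal(0, 50, size=n)
p = 1 / (1 + np.exp(-(-1.0 + 0.02 * age + 0.8 * unobserved)))
df = pd.DataFrame({"age": age,
                   "participated": rng.random(n) < p,
                   "outcome": outcome})

print("true mean:                  ", round(outcome.mean(), 1))
print("unadjusted respondent mean: ", round(df.loc[df["participated"], "outcome"].mean(), 1))
print("IPW on observables (age):   ", round(ipw_adjusted_mean(df, "outcome", ["age"]), 1))
```

In this simulation the reweighted estimate corrects the age-driven part of the selection but remains biased because of the unobserved trait; the point of the authors’ design is to use randomized incentives to detect and correct that remaining, unobservable component.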

CLOSING TAKEAWAY
The authors’ model accounts for both forms of nonresponse—those who decline to participate and those who are unaware of the survey—by exploiting randomized financial incentives and by modeling the likely responses of unaware nonrespondents.

Conclusion

Surveys are ubiquitous in academic research and central to much policymaking, including decisions about how to disburse limited public resources. This research focuses attention on an often-marginalized issue—nonresponse bias—and shows that researchers who dismiss this factor do so at the expense of a more truthful survey.

The authors utilize the Norway in Corona Times survey, which randomly assigned participation incentives, to show how incentives are useful in detecting nonresponse bias in both linked administrative outcomes and in the survey itself. Importantly, the authors find that both sets of outcomes reveal large nonresponse bias, even after correcting for observable differences between participants and nonparticipants.

This work also offers methodological improvements that allow for unobservable differences between participants and nonparticipants. Its model incorporates these and other enhancements, improving on existing approaches by capturing bias that arises from both active nonresponse (declining to participate) and passive nonresponse (never seeing the survey invitation). The authors’ model hews closer to the data and, thus, offers results closer to the truth.


1 List of Surveys: census.gov/programs-surveys/surveyhelp/list-of-surveys.html.

2 Measuring Household Experiences During the Coronavirus Pandemic: census.gov/data/experimental-data-products/household-pulse-survey.html.

3 Uses of Census Bureau Data in Federal Funds Distribution: census.gov/library/working-papers/2017/decennial/census-data-federal-funds.html.
