Summary analysis of the latest research from UChicago scholars, complementing the BFI Working Paper series that draws from more than 200 economists on campus.
How do Americans respond to receiving an unexpected financial windfall or, in economic parlance, an idiosyncratic and exogenous change in household wealth and unearned income? For example, do they work less? And how much of the windfall do they spend? The answers to these and other questions matter as policymakers consider the income and wealth effects of policies ranging from taxation to a universal basic income (UBI).
Researchers have long struggled to find variation in wealth or unearned income that is both as good as random and specific to an individual, as opposed to economy-wide. Such variation is necessary to isolate the effects of changes in wealth or unearned income, holding fixed other determinants of behavior such as preferences and prices. The authors address this challenge by analyzing a wide range of individual and household responses to lottery winnings between 1999 and 2016, and then exploring the economic and policy implications.
Their primary findings are three-fold:
- First, the authors find significant and sizable wealth and income effects. On average, an extra dollar of unearned income in a given period reduces pre-tax labor earnings by about 50 cents, decreases total labor taxes by 10 cents, and increases consumption by 60 cents. These effects differ across the income distribution, with households in higher quartiles reducing their earnings by a larger amount.
- Next, the authors develop and apply a rich life-cycle model in which heterogeneous households face nonlinear taxes and make earnings choices both in terms of how many people work (extensive margin) and how much a given number of people work, on average (intensive margin). By mapping their model to their estimated earnings responses, the authors obtain informative bounds on the impacts of two policy reforms: the introduction of a UBI and an increase in top marginal tax rates.
- Finally, this work analyzes how additional wealth and unearned income affect a wide range of behavior, including geographic mobility and neighborhood choice, retirement decisions and labor market exit, family formation and dissolution, entry into entrepreneurship, and job-to-job mobility.
As an example of this work’s insight into policymaking, the authors’ comprehensive and novel set of analyses demonstrates that the introduction of a UBI would have a large effect on earnings and tax rates. Even abstracting from any disincentive effects of the higher taxes needed to finance a UBI, each dollar of UBI would reduce total earnings by at least 52 cents and require tax rates roughly 10 percent higher than they would have been in the absence of any behavioral earnings responses. For example, given average household earnings of roughly $50,000, a UBI of $12,000 a year would reduce average household earnings by more than $6,000 and require an earnings surcharge of approximately 27 percent on all households, of which 2.5 percentage points is due to the behavioral response.
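The arithmetic in that example can be checked with the round numbers quoted in the text. This is a back-of-envelope sketch, not the authors' full model calculation; in particular, the paper's 2.5-percentage-point behavioral component comes from the richer life-cycle model, so this simple version lands close to, but not exactly on, that figure.

```python
# Back-of-envelope check of the UBI example, using the summary's round
# numbers (not the authors' full life-cycle model).
avg_earnings = 50_000     # average household earnings, $/year
ubi = 12_000              # UBI, $/year
mpe = 0.52                # earnings fall at least 52 cents per $1 of UBI

earnings_drop = mpe * ubi                   # behavioral earnings response
new_earnings = avg_earnings - earnings_drop

rate_no_response = ubi / avg_earnings       # surcharge if earnings did not respond
rate_with_response = ubi / new_earnings     # surcharge once earnings shrink

print(f"earnings drop:       ${earnings_drop:,.0f}")     # > $6,000
print(f"rate, no response:   {rate_no_response:.1%}")    # 24.0%
print(f"rate, with response: {rate_with_response:.1%}")  # ~27%
```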
Another example of this work’s application reveals the effect of a financial windfall on people’s decision to move. Winning the lottery leads to an immediate, one-off increase in the annual moving rate of approximately 25 percent. Lower-income households, younger households, and renters are the groups most responsive to a change in wealth in terms of geographic mobility. One striking finding is that households do not systematically move to neighborhoods typically measured as higher quality (using local-area opportunity indices, poverty rates, and educational attainment). This is true even for parents with young kids. This finding indicates that pure unconditional cash transfers do not lead households to systematically move to higher-quality locations, suggesting that non-financial barriers must play a big role.
Researchers have long investigated the effects of business cycles on households, with findings ranging from little effect on social welfare (or welfare costs) to more significant effects, including with variation across households. However, according to this new paper, focusing on shocks related to business cycle fluctuations masks a key point: all idiosyncratic shocks matter, and those unrelated to business cycles matter a great deal. These idiosyncratic shocks can take the form of, for example, the death of a prime wage earner, a sudden job layoff unrelated to a recession, or a disruption like the recent pandemic.
To the point, Constantinides estimates that the benefits of eliminating idiosyncratic shocks to consumption unrelated to the business cycle are 47.3% of the utility of a member of a household. More concretely, the welfare gain is equivalent to that of increasing a consumer’s consumption path by 47.3%, state by state, date by date. By contrast, the benefits of eliminating idiosyncratic shocks to consumption related to the business cycle are 3.4% of utility, and the benefits of eliminating aggregate shocks are 7.7% of utility.
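To see what a "consumption-equivalent" welfare number means, here is a minimal sketch under illustrative assumptions (CRRA utility with an arbitrary risk-aversion parameter, and a crude negatively skewed shock), not Constantinides's calibration. It computes the proportional increase in the consumption path that would compensate a household for bearing an idiosyncratic shock with the same mean consumption.

```python
# Illustrative consumption-equivalent welfare cost of an idiosyncratic
# shock; all parameters here are hypothetical, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
gamma = 3.0                                  # illustrative CRRA risk aversion

def crra(c, g=gamma):
    return c ** (1 - g) / (1 - g)

c_safe = 1.0                                 # consumption path with the shock removed
# Negatively skewed idiosyncratic shock: a 10% chance of halving consumption.
shock = np.where(rng.random(1_000_000) < 0.1, 0.5, 1.0)
c_risky = c_safe * shock / shock.mean()      # rescaled so mean consumption is equal

# Welfare cost as an equivalent proportional change in the consumption path:
# the lambda solving E[u((1 + lambda) * c_risky)] = u(c_safe).
lam = (crra(c_safe) / np.mean(crra(c_risky))) ** (1 / (1 - gamma)) - 1
print(f"consumption-equivalent welfare cost: {lam:.1%}")
```

Even this toy shock, which leaves average consumption unchanged, produces a welfare cost of several percent of lifetime consumption; the paper's 47.3% figure reflects a calibrated model with far richer shock processes.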
Broadly described, Constantinides derives these estimates by:
- distinguishing between idiosyncratic shocks related to the business cycle and shocks unrelated to the business cycle,
- recognizing that idiosyncratic shocks are highly negatively skewed,
- calibrating welfare benefits via a model using household-level consumption data from the Consumer Expenditure Survey,
- explicitly targeting moments of household consumption,
- assuming that households are responsive to, and incorporate, relevant information from the market.
These new estimates on the effect of idiosyncratic shocks are substantially higher than earlier estimates and should give policymakers pause. Constantinides argues that policymakers should focus on how they can insure households against idiosyncratic shocks unrelated to the business cycle. This is not to say that policies which address aggregate consumption, that is, enacting monetary and fiscal policy in reaction to a recession, do not matter; of course, they do, and this work finds that such policies likely matter more than previously understood. What this work finds, though, is that the welfare benefits of eliminating idiosyncratic shocks unrelated to the business cycle are much higher—the Coronavirus Aid, Relief, and Economic Security (CARES) Act being a case in point.
By way of example, see the accompanying figure for estimates of the impact on household financial viability following passage of the CARES Act, which was signed into law in March 2020 to address the economic shock of the COVID-19 pandemic. The figure shows that many US households, especially at lower income levels, would have lost financial viability relatively quickly without the relief provided by the CARES Act.
For the investor hoping to insure her investments against possible risks, the list of hazards is nearly limitless. She might worry about risks stemming from climate change, political instability, health care crises like pandemics, wild swings in GDP growth, and a host of others. To hedge against such shocks, an investor might tailor her portfolio by making investments that, in effect, insure against specific risks. For example, an investor who is worried about climate risks will look for investments that increase in value when climate risks materialize.
One natural way to buy insurance against specific risks is to use derivative markets. For example, an investor worried about inflation can buy so-called “inflation swaps” that specifically target inflation. For many risks, however, there are no derivative markets that investors can directly access. For example, there isn’t a clear market where one can insure against climate risks.
If derivative markets are not available, investors can still try hedging the risks by building portfolios that provide similar insurance out of assets that are actually tradable (like equities). There are two fundamental obstacles in doing so:
First, building a portfolio of equities that insures against a particular risk, and only that particular risk, requires taking a stand on what other risks are important to investors. This allows the investor to focus on only the risk they are interested in hedging.
Second, it requires the assets that one wants to use to build the portfolio to actually be substantially exposed to those risks. As an example, one can easily build a portfolio that hedges climate risks if one can identify assets that are highly exposed to it (e.g., green companies that do well when the climate deteriorates). In other cases, however, this is more difficult; for example, one may want to insure against fluctuation in aggregate consumption, but most stocks are only weakly related to this risk, so the hedging portfolio will have poor hedging properties.
New research by Stefano Giglio, Dacheng Xiu, and Dake Zhang, which builds on earlier work, offers a methodology that aims to address both issues by exploiting the benefits of dimensionality. They show that even if the true risk factors that drive asset prices are not known, statistical techniques (principal component analysis) can be used to extract, from a large panel of returns, a set of factors that help isolate the risk of interest (e.g., climate risk) from all other risk factors.
In addition, and most importantly, the methodology also addresses the issue of weak exposure of the assets to the factor of interest. The idea is simple: identify – using statistical methods – among the universe of assets those assets that are most exposed to the risk of interest. For example, in the case of aggregate consumption, the methodology will identify those stocks that have historically exhibited high co-movement with consumption. The hedging portfolio will then use only those, more informative, assets. All other stocks are discarded.
More generally, the authors argue that the strength or weakness of a risk factor, that is, whether many assets or only a few are exposed to that risk, should not be viewed as a property of the factor itself; rather, it should be viewed as a property of the set of test assets used in the estimation. As another example, a liquidity factor may be weak in a cross-section of portfolios sorted by, say, size and value, but may be strong in a cross-section of assets sorted by characteristics that capture exposure to liquidity. Their methodology, called “supervised PCA,” or SPCA, exploits this insight and builds a hedging portfolio for any risk factor, appropriately accounting for other risk factors investors might care about, regardless of the strength of the factor.
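The selection-then-PCA logic can be sketched on synthetic data. The following is a schematic illustration of the idea, not the authors' estimator: the correlation-based screening rule, the number of assets kept, and the number of components retained are all arbitrary choices made for the example.

```python
# Schematic "supervise, then PCA" hedging sketch on synthetic returns.
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 200
f = rng.standard_normal(T)                  # latent factor we want to hedge
others = rng.standard_normal((T, 3))        # other pervasive risk factors

beta_f = np.zeros(N)
beta_f[:20] = rng.normal(1.0, 0.2, 20)      # f is "weak": only 20 assets load on it
beta_o = rng.normal(0.0, 1.0, (N, 3))
R = np.outer(f, beta_f) + others @ beta_o.T + 0.5 * rng.standard_normal((T, N))

g = f + 0.3 * rng.standard_normal(T)        # noisy observable proxy for the factor

# 1) Supervision step: keep only the assets most correlated with the proxy.
corr = np.array([np.corrcoef(g, R[:, i])[0, 1] for i in range(N)])
keep = np.argsort(-np.abs(corr))[:30]

# 2) PCA on the selected assets only.
Rs = R[:, keep] - R[:, keep].mean(axis=0)
U, S, Vt = np.linalg.svd(Rs, full_matrices=False)
pcs = U[:, :4] * S[:4]                      # leading principal components

# 3) Project the proxy on the components and map back to portfolio weights.
coef, *_ = np.linalg.lstsq(pcs, g - g.mean(), rcond=None)
w = np.zeros(N)
w[keep] = Vt[:4].T @ coef                   # weights live only on selected assets
hedge = R @ w                               # return of the mimicking portfolio

print("correlation of hedge portfolio with latent factor:",
      round(float(np.corrcoef(hedge, f)[0, 1]), 2))
```

The point of the selection step is visible in the construction: only 20 of 200 assets carry the factor, so a plain PCA on all assets would dilute it, while screening first concentrates the estimation on the informative assets.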
SPCA is not the endgame in the effort to understand how to build hedging portfolios, according to the authors. However, this work shows that systematically addressing the issue of weak factors in empirical asset pricing is an important step forward and opens the door to the study of factors that, while important to investors—like our hypothetical investor from above—may not be as pervasive as they fear.
Gross Domestic Product, GDP, is the most widely used measure of economic activity and one that is very attractive for governments to manipulate. Although the incentive to overstate economic growth is shared by governments of all kinds, the checks and balances present in strong democracies plausibly help to prevent this behavior. In contrast, these checks and balances are largely absent from autocracies. The execution of the civil servants in charge of the 1937 population census of the USSR due to its unsatisfactory findings serves as an extreme example, but a more recent instance involves Chinese premier Li Keqiang’s alleged admission of the unreliability of the country’s official GDP estimates.
To detect and measure the manipulation of economic statistics in non-democracies, Martinez uses data on night-time lights (NTL) captured by satellites from outer space. Importantly, NTL correlate positively with real economic activity but are largely immune to manipulation. Martinez employs data for 184 countries to examine whether the elasticity of GDP with respect to NTL systematically differs between democracies and autocracies, based on the Freedom in the World index produced by Freedom House. These data are combined with a measure of average night-time luminosity at the country-year level using granular data from the Defense Meteorological Satellite Program’s Operational Line-scan System (DMSP-OLS) for the period 1992-2013, along with GDP data from the World Bank.
Martinez finds that the same amount of growth in NTL translates into higher reported GDP growth in autocracies than in democracies. His main estimates suggest that autocracies overstate yearly GDP growth by approximately 35% (for example, a true growth rate of 2% is reported as 2.7%). The autocracy gradient in the NTL elasticity of GDP is not driven by differences in a large number of country characteristics, including various measures of economic structure or level of development. Moreover, this gradient in the elasticity is larger when the incentive to exaggerate economic growth is stronger or when the constraints on such exaggeration are weaker. This strongly suggests that the overstatement of GDP growth in autocracies is the underlying mechanism.
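The identification idea can be illustrated on synthetic data. This sketch is not Martinez's dataset or specification; the sample size, elasticity, and noise levels are invented, and autocratic observations are simply constructed to report true growth scaled up by 35%. The interaction regression then recovers the larger NTL elasticity for autocracies.

```python
# Illustrative interaction regression: reported GDP growth on NTL growth,
# with autocracies overstating true growth by 35% (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 4000                                     # country-years
ntl_growth = rng.normal(0.03, 0.05, n)       # night-time-lights growth
autocracy = rng.random(n) < 0.5              # half the sample is autocratic
true_growth = 0.3 * ntl_growth + rng.normal(0, 0.01, n)
reported = np.where(autocracy, 1.35 * true_growth, true_growth)

X = np.column_stack([np.ones(n), ntl_growth, autocracy * ntl_growth])
b = np.linalg.lstsq(X, reported, rcond=None)[0]
print(f"baseline NTL elasticity:  {b[1]:.2f}")         # ~0.30
print(f"autocracy gradient ratio: {b[2] / b[1]:.2f}")  # ~0.35, the overstatement
```

Reading the output back into the text's example: an autocracy whose lights imply 2% true growth reports 2% × 1.35 = 2.7%.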
These results constitute new evidence on the disciplining role of democratic institutions for the functioning of government. These findings also provide a warning for academics, policy-makers and other consumers of official economic statistics, as well as an incentive for the development and systematic use of alternative measures of economic activity.
As of 2020, more than 38 million people were displaced across borders, most fleeing war or chronic insecurity in their origin countries, often for long durations. These forcibly displaced people, or FDP, are acutely vulnerable, facing tenuous legal status, political exclusion, poverty, poor access to services, and outright hostility, which can be exacerbated when they differ in identity from host communities.
Despite the magnitude of this challenge, few practicable policy responses exist. Fewer than 2% of all FDP have accessed any of the three “durable solutions”—resettlement in the Global North, naturalization in host countries, or repatriation to origin countries—in recent years, while efforts within the Global South are politically contentious. Since 2000, the number of resettled FDP has never exceeded 0.61% of the global displaced stock. Similarly, since 85% of FDP reside in developing countries with weak institutional capacity, naturalization in host states is complicated. Finally, though refugee return is widely regarded as the preferred solution, protracted conflicts in origin countries often render repatriation infeasible.
A number of recent policies have employed cash transfers to ease reintegration for FDP, but there is little causal evidence of their effectiveness to date. This article advances understanding of refugee return by leveraging granular microdata on repatriation and violence, in tandem with a large cash grant scheme implemented by the United Nations High Commissioner for Refugees (UNHCR) in 2016. The program targeted Afghan returnees from Pakistan and temporarily doubled the cash assistance offered to voluntary repatriates. Using a novel combination of observational and survey-based measures, the authors find the following, among other results:
- Refugee return is associated with an overall reduction, as well as a composition shift, in insurgent violence. The authors note that the cash transfer that induced repatriation may have stimulated local economic activity in areas where returnees settled.
- Social capital and preexisting kinship ties moderate the potential for refugee repatriation to spark local conflicts. Recent work has shed light on optimal settlement strategies when refugees aim to rebuild their lives in host countries, and this research clarifies how a similar intervention could be used to evaluate when, where, and with whom returning refugees should be located.
- Local institutions for conflict mediation may play a critical role in preempting conflicts before they emerge or resolving disputes after they do. The authors anticipate that local support for conflict resolution could also be tied to preexisting risk factors including customary land tenure, livestock grazing patterns, vulnerability of irrigation networks, and heterogeneous ethnic settlement patterns.
As the authors stress, and as their full paper describes, the impacts of refugee repatriation are nuanced, as are the ethical considerations relevant to programmatic interventions aimed at facilitating return. Active conflict further complicates matters. If repatriation assistance is employed to appease asylum countries eager to reduce their refugee-hosting burden, it risks inadvertently incentivizing coercive tactics and degrading the voluntariness of repatriation. Crafting sound policies requires considering the illicit, armed actors that may benefit from the return of vulnerable populations, the quality of institutions available to manage tensions around mass repatriation, and the ethical obligations of host countries.
Health insurance contracts account for 13% of US gross domestic product, and impose many different administrative burdens on physicians, payers, and patients. The authors measure one key administrative burden—billing insurance—and ask whether it distorts physicians’ behavior and harms patients.
Doctors and insurers often have trouble determining what care a patient’s insurance covers, and at what prices, until after the physician provides treatment. This ambiguity leads to costly billing and bargaining processes after care is provided, what the authors call the costs of incomplete payments (CIP). They estimate these costs across insurers and states and show that CIP have a major impact on Medicaid patients’ access to medical care.
Employing a unique dataset, the authors show that payment frictions are particularly large in the context of Medicaid, a key part of the US social safety net but one that rarely provides the same quality of care as other insurance. In particular, Medicaid patients often have trouble finding physicians willing to treat them.
The authors find that 25% of Medicaid claims have payment denied for at least one service upon doctors’ initial claim submission. Denials are less frequent for Medicare (7.3%) and commercial insurers (4.8%).
How do these denials affect physician revenues? The authors’ CIP measure incorporates two components: forgone revenues, which are directly measured in the remittance data, and the estimated billing costs that providers accumulate during back-and-forth negotiations with payers. Bottom line: the authors estimate that CIP average 17.4% of the contractual value of a typical visit in Medicaid, 5% in Medicare, and 2.8% in commercial insurance. The authors stress that these are significant losses, especially considering the relatively low reimbursement rates offered by Medicaid.
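Those percentages translate directly into what a physician keeps per dollar of contracted care. This worked example simply applies the CIP rates quoted above to a hypothetical $100 visit.

```python
# Implied net revenue per $100 of contracted care, using the CIP
# percentages quoted above (forgone revenue plus billing costs).
cip = {"Medicaid": 0.174, "Medicare": 0.050, "Commercial": 0.028}

for payer, rate in cip.items():
    print(f"{payer:<10} $100 visit -> about ${100 * (1 - rate):.2f} kept")
# Medicaid -> $82.60; Medicare -> $95.00; Commercial -> $97.20
```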
Further, the authors reveal that CIP dissuades doctors from taking Medicaid patients in the first place. A ten percentage point increase in CIP is analogous to a tax increase of ten percentage points. By examining physicians who move across states, the authors then estimate that an implicit tax increase of this magnitude reduces physicians’ probability of accepting Medicaid patients by 1 percentage point. This effect is even larger across states within a physician group. Each standard deviation increase in CIP reduces Medicaid acceptance by 2 percentage points.
This work reveals the importance of well-functioning business operations in the provision of healthcare. The key insight, that difficulty with payment collection compounds the effect of low payment rates to deter physicians from treating publicly insured patients, should give policymakers pause.
From 2000 to 2012, official development assistance (ODA) to conflict-affected states grew more than 10% per year and totaled over $450 billion, including $120 billion to Afghanistan and $80 billion to Iraq from the United States alone. Donor nations expect foreign aid to improve stability in fragile states, in addition to furthering development, but the effectiveness of such aid is far from certain.
One prevailing challenge for aid assistance is known as donor fragmentation, wherein a multiplicity of donors shares overlapping responsibilities within a common geographical area. Donor fragmentation is widely perceived to undermine the effectiveness of aid, and thereby limit the quality of institutions, through a number of channels: coordination challenges, program redundancies, selection of inferior projects due to competition among donors, and lax donor scrutiny, among others.
That said, the presence of multiple foreign donors can foster exemplary norms of professional conduct when aid provisions are maintained at relatively moderate rates and competition is not pronounced. Under these and other conditions, good conduct by donors is more likely to prevail and donor proliferation may actually strengthen institutions.
Until now, these issues have been subject to little empirical scrutiny. In this work, the authors use granular data from Afghanistan to offer the first micro-level analysis of aid fragmentation and its effects. The authors’ results suggest that aid strengthens the quality of state institutions in the absence of fragmentation (that is, in the presence of a single donor). These benefits vanish, though, as the donor landscape becomes fragmented. Surprisingly, however, the evidence also suggests that donor fragmentation can positively affect institutions at moderate levels of aid. The authors’ micro-level evidence therefore suggests that the direction of fragmentation’s total effect depends on the volume of aid provision: too much provision through too much fragmentation induces instability.
Given the paucity of theoretical and empirical research on this topic, the authors hope that this work inspires further academic research. With more nuanced theory development and broader geographical analyses, additional new insights can be generated to guide decisionmakers at various levels of aid provision.
Why did the Black-White wage gap drop so much during the 1960s and the 1970s, and why has the convergence stagnated since then? This new working paper builds on existing research to offer a pathbreaking task-based model that incorporates notions of both taste-based and statistical discrimination to shed light on the evolution of the racial wage gap in the United States over the last 60 years.
Their task-based model allows the authors to analyze how the changing demands for certain tasks interact with notions of discrimination and racial skill gaps in driving trends in wages across racial groups. At the heart of the model is that different occupations require a different mixture of tasks (Abstract, Routine, Manual, Contact), which in turn demand certain market skills and degrees of interaction among workers and customers. Consequently, the relative intensity of taste-based versus statistical discrimination varies across occupations depending on the exact mix of tasks required in each occupation.
The authors use their estimated framework to structurally decompose the change in racial wage gaps since 1960 into the parts due to declining taste-based discrimination, a narrowing of racial skill gaps, declining statistical discrimination, and changing market returns to occupational tasks. Their key finding is that the Black-White wage gap would have shrunk by about 7 percentage points by 2018 if the wage premia to task requirements had been held at their 1980 levels, all else equal.
Why did this stagnation in the closing of the wage gap occur? The authors posit two offsetting forces:
- On the one hand, a narrowing of racial skill gaps and declining discrimination between 1980 and 2018 caused the racial wage gap to narrow by 6 percentage points during this period, all else equal.
- On the other hand, the changing returns to tasks since 1980 (particularly the increasing return to Abstract tasks) widened the racial wage gap by about 6.5 percentage points during the same period. A rise in the return to Abstract tasks disadvantages Blacks because they are underrepresented in these tasks due to racial skill gaps and discrimination. Moreover, to the extent that discrimination associated with Abstract tasks is important, the rising return to Abstract tasks will even favor Whites relative to Blacks with the same underlying levels of skills.
- Bottom line: Race-specific barriers have continued to decline in the US economy post 1980, but the rising relative return to Abstract tasks has favored Whites. As a result, Black progress stemming from narrowing racial skill gaps and/or declining discrimination did not translate into Black-White wage convergence during this period.
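The two offsetting forces above can be summed in back-of-envelope form, using the percentage-point estimates quoted in the bullets.

```python
# Net effect of the two offsetting forces on the Black-White wage gap,
# 1980-2018, in percentage points (estimates quoted in the text).
skill_and_discrimination = -6.0   # narrowing skill gaps, declining discrimination
task_returns = +6.5               # rising return to Abstract tasks

net_change = skill_and_discrimination + task_returns
print(f"net change in the Black-White wage gap since 1980: {net_change:+.1f} pp")
# -> +0.5 pp: roughly zero, i.e., the observed stagnation in convergence
```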
The authors stress that racial gaps in skills are endogenous, meaning that taste-based discrimination could be responsible for Black-White differences in measures of cognitive test scores. Such caveats should be kept in mind when segmenting current racial wage gaps into parts due to taste-based discrimination and parts due to differences in market skills. Regardless of the reason for the racial skill gaps associated with a given task, the existence of such gaps implies that changes in task returns can have meaningful effects on the evolution of racial wage gaps, even when discrimination and the skill gaps remain constant over time.
The growth of sustainable investing is one of the most dramatic trends in the investment industry over the past decade, with sustainable strategies comprising one-third of current professionally managed US assets. Environmental concerns take the lead among sustainable investors; for example, 88% of the clients of BlackRock, the world’s largest asset manager, rank environment as “the priority most in focus.” Further, based on past performance, asset managers often market sustainable investment products as offering superior risk-adjusted returns; however, this work reveals that investors should be wary of such claims.
The authors employ a novel model which predicts that “green” assets have lower expected returns than “brown” assets, due to investors’ tastes for green assets; yet green assets can have higher realized returns when agents’ tastes shift unexpectedly in the green direction. This wedge between expected and realized returns is central to the paper. The authors explain that green tastes can shift in two ways:
- First, investors’ preference for green assets can increase, directly driving up green asset prices.
- Second, consumers’ demands for green products can strengthen, for example, due to environmental regulations, driving up green firms’ profits and, thus, their stock prices. Similarly, investors’ preference for brown assets or consumers’ demand for brown products can decrease, again making green stocks outperform.
Bottom line: green stocks typically outperform brown when climate concerns increase. Equilibrium expected returns of stocks that are better hedges against adverse climate shocks include a negative hedging premium if the representative investor is averse to such shocks. Empirical attempts to confirm a climate risk premium, however, must confront the large unanticipated positive component of green stock returns during the last decade. Without accounting for those unexpectedly high returns on stocks that appear to be relatively good climate hedges, one could be led astray. That is, one could infer that stocks providing better climate hedging have higher expected returns, not lower, as theory predicts.
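The expected-versus-realized wedge can be made concrete with a stylized simulation. This is not the authors' model: the monthly return levels and the size of the unexpected taste drift are invented, and the drift is imposed directly rather than derived from equilibrium. The point is only that a sample dominated by unexpected greening can show green stocks beating brown even though their expected returns are lower.

```python
# Stylized wedge between expected and realized returns for green vs.
# brown stocks under an unexpected drift in green tastes (toy numbers).
import numpy as np

rng = np.random.default_rng(0)
T = 120                                  # ten years of monthly returns
mu_green, mu_brown = 0.004, 0.006        # expected returns: green < brown
taste_shift = 0.005                      # unexpected monthly drift toward green

r_green = mu_green + taste_shift + 0.02 * rng.standard_normal(T)
r_brown = mu_brown - taste_shift + 0.02 * rng.standard_normal(T)

print(f"expected: green {mu_green:.2%} < brown {mu_brown:.2%} per month")
print(f"realized: green {r_green.mean():.2%} vs brown {r_brown.mean():.2%} per month")
```

An econometrician who takes the realized sample at face value would estimate the sign of the green-minus-brown premium incorrectly, which is exactly the caution the paragraph above raises.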
People experiencing homelessness are among the most deprived individuals in the United States, yet they are neglected in official poverty statistics and the extreme poverty literature and largely omitted from household surveys. Those wishing to learn about the economic circumstances of this population must turn to a handful of studies that are either localized, outdated, self-reported, or some combination of the three.
In this unprecedented project, the authors draw on underused data sources and employ novel methods to address these shortcomings, assessing the permanence or transience of low material well-being among those who experience homelessness, the coverage of the safety net, and the implications of the current omission of this population from official statistics. Among other findings, the authors reveal the following:
- Nationally, only a small share of sheltered homeless adults in 2011-2018, about 9.1 percent, changed states in the year before their interview. While this is higher than one-year interstate mobility for the housed population, it is still lower than one might expect given the rhetoric on this subject. Further, longer-term measures of mobility since birth indicate only small differences between the homeless and comparison groups, suggesting that the link between mobility and homelessness is not as strong as suggested in public discourse.
- There are much higher rates of physical limitations relative to the housed population and moderately higher or similar rates of physical limitations relative to the poor comparison group.
- There is a stark disparity in the share reporting a cognitive limitation. Nearly one-quarter of the sheltered homeless ages 18-64 report difficulty remembering or making decisions, a rate that is approximately twice that of the poor comparison group and 5.5 times that of the housed population in this age range. Cognitive limitations appear to be a significant factor distinguishing the sheltered homeless from the rest of the poor.
- Homelessness appears to be a symptom of long-term low material well-being. In other words, people experiencing homelessness appear to be having not just a year of deprivation and challenge, but a decade (at least).
- About 53 percent of the sheltered homeless had formal labor market earnings in the year they were observed as homeless, and the authors find that 40.4 percent of the unsheltered population had at least some formal employment in the year they were observed as homeless. This finding contrasts with stereotypes of people experiencing homelessness as too lazy to work or incapable of doing so.
- Most people experiencing homelessness are reached by some form of social safety net program, primarily SNAP and Medicaid, with at least 88 percent of the sheltered and 78 percent of the unsheltered receiving at least one benefit.
- Finally, there is a higher rate of receipt for nearly all benefits among the sheltered relative to the unsheltered homeless. Among other explanations, the authors suggest the influence of family structure, as many safety net programs are more readily available to families (who are more likely to be in shelters) than single adults.
This project is ongoing, as the authors plan to continue their examination of their novel data sources to explore several other topics related to homelessness, including transitions in and out of homelessness, migration and geographic dispersion, and mortality.
If physical distancing reduces interpersonal transmission of the COVID-19 virus, it follows that government policies mandating physical distancing should slow the spread of COVID-19. Further, local non-compliance with such shelter-in-place orders would create public health risks and could cause regional spread. Given this, it is important that policymakers understand which local factors affect compliance with public health directives.
Recent research highlights several factors that influence compliance, including partisanship, political polarization, poverty and economic dislocation, and differences in risk perception, all of which influence physical distancing in the absence of government mandates. This new research highlights the role of science skepticism and attitudes regarding topics of scientific consensus in shaping patterns of physical distancing.
To examine the role of science skepticism, the authors leverage the most granular, representative measure of such skepticism available in the United States: beliefs about the anthropogenic (human) causes of global warming. They combine this county-level science skepticism measure with location trace data on the movement of around 40 million mobile devices, as well as data on state-level shelter-in-place policies, to find the following:
- Science skepticism is likely an important determinant of local compliance with government shelter-in-place policies, even after accounting for the role of partisanship, population density, education, and income, among other factors.
- Shelter-in-place policies increase the proportion of devices that stay at home by 2 p.p. (p-value < 0.001) more in counties with low levels of science skepticism compared to counties with high levels of skepticism. This corresponds to an 8% increase in devices that stayed at home, compared to the February average of 25%.
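The conversion from the percentage-point effect to the 8% relative figure in the bullet above follows from the February baseline; a quick check using the figures quoted in the brief:

```python
# Differential effect of shelter-in-place policies in low- vs. high-skepticism
# counties, using the figures quoted in the brief.
baseline_share = 0.25   # February average share of devices staying at home
effect_pp = 0.02        # differential effect: 2 percentage points

relative_increase = effect_pp / baseline_share
print(f"{relative_increase:.0%}")  # → 8%, the relative increase over baseline
```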
The authors also benchmark their measure of science skepticism against other measures of belief in science available at the state-level to show that their measure captures a more general notion of skepticism toward topics of scientific consensus.
In the United States, the Social Security Disability Insurance and Supplemental Security Income programs together provide access to health insurance and $200 billion annually in cash benefits to nearly 13 million Americans, primarily as assistance for people who cannot work because of severe health conditions. Some have attributed the expansion of US disability programs at least in part to non-health factors like stagnating wages, prompting widespread concern that providing benefits to individuals without severe health conditions dilutes the programs’ value.
This issue raises an important question: What is the overall insurance value of US disability programs, including value from insuring non-health risk? To address this question, the authors quantify the extent to which these programs insure different risks by comparing disability recipients and non-recipients along a wide variety of health and non-health dimensions, including consumption, adverse events like job loss, and resources available to cope with adverse events, as well as other comparisons.
The authors’ approach allows them to go “beyond health” when determining the value of such programs. While health is likely a strong indicator of the value of receiving disability benefits, it is not a perfect indicator because individuals face major non-health risks as well, including job loss, productivity shocks, and changes in family structure. To the extent that a particular risk is not completely insured by other means, disability insurance potentially insures or exacerbates that risk, depending on whether people exposed to it are more or less likely to receive disability benefits.
The authors perform a series of measurements and find that less-severe disability recipients are on average much worse off than less-severe non-recipients, and by many non-health measures are even worse off than more-severe recipients. For example, they find that prior to receiving disability benefits, less-severe recipients are 40% more likely to have experienced a mass layoff than more-severe recipients, 19% more likely to have experienced a foreclosure, and 23% more likely to have experienced an eviction.
Further, the authors show that the value of disability benefits exceeds that of cost-equivalent tax cuts by 64%, creating a surplus worth $8,700 of government revenue per recipient per year. Moreover, they find that the high value of US disability programs is in part because of, not despite, mismatches with respect to health. They estimate that benefits to less-severe recipients create a value (insurance benefit less distortion cost) over cost-equivalent tax cuts of $7,700 per recipient per year, about three-fourths that of benefits to more-severe recipients ($9,900).
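The "about three-fourths" comparison can be verified from the quoted per-recipient values; a quick check using figures from the brief:

```python
# Value over cost-equivalent tax cuts, per recipient per year (from the brief)
value_less_severe = 7_700   # benefits to less-severe recipients
value_more_severe = 9_900   # benefits to more-severe recipients

ratio = value_less_severe / value_more_severe
print(f"{ratio:.2f}")  # → 0.78, i.e., about three-fourths
```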
Bottom line: Benefits to less-severe recipients do not decrease the value of US disability programs; rather, they increase it considerably, accounting for about half of the total value.
The authors draw an important conclusion from their work: no program exists in a vacuum. Instead, a program’s effects reflect the diversity of risks in the economy, how well insured those risks are by other programs and institutions, and how its tags and screens select on those risks.
In this case, US disability programs insure risks well beyond health, and this “incidental” role is central to their overall value. Other programs might also provide similar returns.
Since the 1970s, stagnating average earnings and rising earnings inequality in US labor markets have spurred academic research and fueled policy debates. This issue has only intensified in recent decades as attention has focused on the plight of male workers in industries and regions facing economic decline. Despite this interest, existing research has provided little insight into trends in lifetime earnings, offering only point-in-time analysis of annual incomes.
In a first-of-its-kind study, this paper addresses this gap by constructing measures of lifetime earnings for millions of individuals using a 57-year-long panel (1957–2013) from US Social Security Administration (SSA) records. The authors’ lifetime earnings measure is based on 31 potential working years between ages 25 and 55, which allows them to construct lifetime earnings statistics for 27 year-of-birth cohorts. The oldest cohort turned age 25 in 1957, and the youngest one turned age 55 in 2013, the last year of their sample.
The authors examine how lifetime earnings of the median male worker changed from the first cohort (1957) to the last (1983). [They also examine changes in women’s roles in the labor market over this period. See related Research Brief.] Their analysis reveals the following key fact: The lifetime earnings of the median male worker declined by 10% from the 1967 cohort to the 1983 cohort. Perhaps more strikingly, more than three-quarters of the distribution of men experienced no rise in their lifetime earnings across these cohorts. Accounting for rising employer-provided health and pension benefits partly mitigates these findings but does not alter the substantive conclusions.
How are these changes reflected in wage/salary earnings? When nominal earnings are deflated by the personal consumption expenditure (PCE) deflator, the annualized value of median lifetime wage/salary earnings for male workers declined by $4,400 per year from the 1967 cohort to the 1983 cohort, or $136,400 over the 31-year working period. (When the authors adjust for inflation using the consumer price index, the decline in median male lifetime earnings is nearly twice as large.)
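The lifetime figure follows directly from the annualized decline; a quick check using the values in the text:

```python
annual_decline = 4_400   # PCE-deflated decline per year, median male worker
working_years = 31       # potential working years between ages 25 and 55

lifetime_decline = annual_decline * working_years
print(lifetime_decline)  # → 136400, matching the brief's lifetime figure
```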
For policymakers, these findings are sobering and important. For example, the authors show that newer cohorts of workers were already different from older ones by age 25. Once in the labor market, the earnings distribution for these newer cohorts evolved similarly to those of older cohorts. Further, the authors’ findings suggest that the sources of the dramatic changes in the US earnings distribution over the last 50 years may be found in the experiences of newer cohorts during their youth (and possibly earlier). To illustrate, see Figure 2, which reveals that the decline in median earnings at age 25 continued until 1993, after which time there was a brief resurgence followed by another period of decline. In 2009, median earnings for 25-year-old males were at their lowest point since 1958.
While research has offered insights into the economic costs of civil conflict, the effect on investment decisions is little understood. Do producers forgo profitable investment opportunities when faced with the uncertainties surrounding civil conflict? If so, such missed investment could restrict economic growth and further exacerbate cycles of violence.
The authors address this research gap by examining the effect of civil conflict on investment by Colombian farmers using granular credit data from the country’s largest agricultural bank, Banco Agrario de Colombia (BAC). BAC is the only source of formal credit in many rural areas, and the authors’ dataset includes the universe of the bank’s business loans to small producers between 2009 and 2019 (2.9 million), corresponding to 1.7 million different applicants, which is equivalent to 64% of the country’s agricultural producers. These data also have unique features pertaining to timing, applicant status, and loan outcomes.
The authors examine variation in conflict arising from the 2016 demobilization agreement signed by the Colombian government and FARC, the Marxist guerrilla group fighting against the government in a civil conflict that ravaged the Colombian countryside for over 50 years, with an estimated death toll exceeding 200,000 victims. The authors calculate total FARC activity per municipality between 1996 and 2008, the most violent years in the conflict, and then rank those municipalities according to conflict exposure. This allows them to compare credit outcomes based on FARC exposure.
Their findings include the following:
- The end of the conflict leads to a sizable increase in credit to small farmers in municipalities with high FARC exposure, about 19 million Colombian pesos ($14,500) in total monthly credit disbursements per 10,000 inhabitants, equivalent to a 17% increase over the sample average. This increase is driven by higher loan applications, without any meaningful change in supply-side factors, including approval rates and interest rates.
- The increase in the demand for credit in FARC municipalities is disproportionately driven by new clients with lower wealth and longer-term investments (i.e., higher loan maturity). Importantly, there is no change in the average credit score of loan applicants, nor in delinquency rates for new or outstanding loans over various time horizons.
- There are significant heterogeneous effects across time and space: the authors find no evidence of an increase in credit demand during the interim negotiation period, despite a substantial de-escalation of the conflict. This suggests that armed group presence and uncertainty about renewed violence affect investment more than contemporaneous intensity. Moreover, the increase in credit demand is concentrated in municipalities close to markets.
Taken together, these findings provide key insight into the effect of civil conflict on investment decisions. While this research does not capture the macroeconomic impact of the peace agreement, it does provide evidence suggestive of a broadly positive economic impact. First, the fact that farmers are demanding more credit and paying back their loans suggests that these are profitable investments. Also, in-person audits of project sites indicate that farmers are generally using the funding for the declared purpose. Finally, the documented increase in nighttime luminosity in FARC municipalities following the peace agreement is consistent with a broad expansion of local economic activity, which arguably contributes to higher returns to investment and greater demand for credit.
At least theoretically, citizens can combat corruption among elected officials by voting out the perpetrators and electing other candidates. Despite this option, corruption persists. Research has suggested that citizens lack the information necessary to vote out bad actors. Still other research shows that even with adequate information, voters do not respond as expected. What explains this phenomenon?
This research sheds new light on this question by analyzing responses to the 2010 Kabul Bank crisis, one of the largest banking failures in the world, which revealed corrupt links between high-ranking Afghanistan public officials and the largest Afghan private lender. Within days, the scandal triggered widespread bank runs and the largest government bailout in the country’s history. The scandal unfolded three weeks before the 2010 parliamentary election and, in a bit of providential coincidence, the scandal also occurred midway through the collection of a nationwide survey, which included questions about corruption in government, voter preferences, and the efficacy of government institutions.
The timing of the survey, along with a fixed sampling that was randomized within districts, allowed the authors to adopt a novel quasi-experimental approach when analyzing the results. The authors reveal the following key findings:
- Overall, while individuals interviewed after the scandal broke were no more or less likely to think that corruption in government was a serious problem, the informational shock did cause a statistically and substantively significant decrease in citizens’ intention to vote in the parliamentary election scheduled two weeks later.
- However, the authors also find that in areas with low political efficacy, that is, where citizens are skeptical of their ability to influence political reform, news of the scandal did not affect these individuals’ assessment of corruption being a serious problem in government, but the news did make them less likely to intend to vote in the parliamentary election several weeks later.
- In contrast, in areas with relatively high levels of self-reported political efficacy, the authors find a mobilizing effect from information about corruption on voter turnout: In this case, the unfolding bank scandal had a sizeable, positive, and highly statistically significant effect on respondents’ intention to vote.
While the authors are careful not to lend a causal interpretation to their observed heterogeneous effects, their findings do suggest that political efficacy likely plays an important role in shaping how voters mobilize in the wake of an unexpected corruption scandal. Whatever explains the ebb and flow of political efficacy across and within countries, this work suggests that citizens will react differently to information about corruption depending on their sense of political efficacy.
In the decade following the financial crisis of 2008, investment funds in corporate bond markets became prominent market players and generated concerns of financial fragility. Figure 1 demonstrates the dramatic growth of their assets under management relative to the size of the corporate bond market since the 2008-2009 crisis. Increased bank regulation has pushed some activities from banks to non-bank intermediaries, heightening fears among regulators. As recently as 2019, Mark Carney, the governor of the Bank of England, warned that investment funds that hold illiquid assets but allow investors to withdraw their money whenever they like were “built on a lie” and could pose a big risk to the financial sector. Despite these concerns, however, the last decade did not feature major stress events to test the resilience of corporate-bond investment funds. Hence, there is a dearth of systematic evidence on their resilience in large stress events.
The authors address this gap by analyzing recent events around the COVID-19 crisis, which provide an opportunity to inspect the resilience of these important non-bank financial intermediaries in a major stress event and the unprecedented policy actions that followed it. The COVID-19 crisis unfolded quickly around the world in early 2020. A public health emergency was declared on January 31, with reports of confirmed infections intensifying in March, and on March 13 the United States declared a national emergency. Financial markets tumbled as these events took place, with corporate bond markets in particular experiencing severe stress amid major liquidity problems.
The Federal Reserve responded aggressively with a March 23 announcement of the Primary Market Corporate Credit Facility (PMCCF) and Secondary Market Corporate Credit Facility (SMCCF), which were designed to purchase $300 billion of investment-grade corporate bonds. On April 9, the Fed announced the expansion of these programs to a total of $850 billion and an extension of coverage to some high-yield bonds. These facilities were unprecedented in the history of the Fed. As such, their announcements had a major impact on corporate-bond markets. Spreads for both investment-grade and high-yield rated corporate bonds, which almost tripled relative to their pre-pandemic level by March 23, reversed after the two policy announcements.
This recent episode allowed the authors to empirically investigate two important and related questions: How fragile were these corporate bond funds, and how effective were the Fed’s actions in contributing to a resolution? Using daily data on flows into and out of mutual funds in corporate bond markets during the crisis allowed the authors to shed light on the determinants of flows across different funds, and thus to better understand the sources of fragility and what actions mitigated that instability. In summary, they highlight three main sources of fragility: asset illiquidity, vulnerability to fire sales, and sector exposure.
The authors then show that the Fed bond purchase program helped to mitigate fragility by providing a liquidity backstop for their bond holdings. In turn, the Fed bond purchase program had spillover effects, stimulating primary market bond issuance by firms whose outstanding bonds were held by the impacted funds, and stabilizing peer funds whose bond holdings overlapped with those of the impacted funds. This analysis uncovers a novel transmission channel of unconventional monetary policy via non-bank financial institutions, which carries important policy lessons for how the Fed bond purchases transmit to the real economy.
The authors caution that massive Fed intervention in the market is unlikely to become the norm; accordingly, some of the structural fragilities in the way investment funds operate in illiquid markets must be addressed more directly.
The COVID-19 pandemic forced a dramatic rush to work from home (WFH) in early 2020. Even if only a fraction of this global shift became permanent, it would have implications for urban design, infrastructure development, and reallocation of investment from inner cities to residential areas. Of course, it would also have significant implications for how businesses organize and manage their workforces.
There is significant debate about the effectiveness of WFH, including how much further we can improve implementation, and the extent to which firms will continue the practice. Initial experiences led to optimism, but many firms are starting to question the sustainability of extensive WFH. One of the most important questions in this context is how WFH affects productivity.
This paper provides an analysis of the effects of the switch to WFH in a large Asian IT services company that abruptly switched all employees to WFH in March 2020. This study has several novel features, including a rich dataset for a sample of more than 10,000 employees for 17 months before and during WFH. The data include information on productivity, hours worked and how that time was allocated, and the employee’s contacts with colleagues inside and outside the firm. In addition, it includes an estimate of the employee’s commute time when they had worked at the office, and how many children (if any) they have at home.
The key variables are based on relatively objective measures of work time and employee output, collected from the firm’s workforce analytics systems. The company has a highly developed process for setting goals and tracking progress, culminating in a primary output measure for each employee. The data also include information on hours worked, the authors’ primary input measure. Productivity is measured as output divided by hours worked. Most prior studies of WFH were based on survey data, so this is an unusual opportunity to study employee performance using the measures that the firm employs.
These data also include (for a subset of employees) time allocation for various activities, including meetings, collaboration, and time focused on performing work without distractions. It also includes information on networking activities (contacts) with colleagues inside and outside the firm, as well as various employee characteristics.
Of note, most employees at this company are highly skilled professionals in an IT company where nearly all are college educated. The jobs involve significant cognitive work, developing new software or hardware applications or solutions, collaborating with teams of professionals, working with clients, and engaging in innovation and continuous improvement. These job characteristics may present significant challenges to effective WFH. By contrast, previous studies of WFH productivity either used self-reported measures of productivity or focused on occupations where workers have relatively simple and repetitive tasks, often follow scripts, and work independently, such as call center workers.
Finally, the data allowed them to compare outcomes for the same employee before and during WFH. The authors find the following:
- Employees significantly increased total hours worked, by about 30%, during WFH. Much of this increase came from working outside of normal office hours.
- Despite the disruption due to the pandemic and shift to WFH, there was no significant change in measured output (the primary evaluation metric for each employee). In other words, employees continued to meet their goals, which were not changed after the switch to WFH.
- Given their results on work time and output, the authors estimate that productivity declined considerably, about 20%. These results are consistent with employees becoming less productive during WFH and working longer hours to compensate.
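The arithmetic linking these bullets is a simple ratio: with measured output roughly flat and hours up about 30%, output per hour must fall by roughly a fifth. A back-of-the-envelope check using the figures in the text:

```python
# Rough consistency check of the WFH productivity decline (figures from the
# brief; the authors' own estimate is "about 20%").
hours_increase = 0.30    # total hours worked rose about 30% under WFH
output_change = 0.0      # measured output was essentially unchanged

productivity_ratio = (1 + output_change) / (1 + hours_increase)
decline = 1 - productivity_ratio
print(f"{decline:.0%}")  # → 23%, consistent with the "about 20%" estimate
```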
Why did productivity decline? The authors find that employees spent more time engaged in various types of formal and informal meetings during WFH, especially video conferences. Likewise, they spent substantially less time working without interruption. They also spent less time networking (both within the firm and with clients), and less time receiving coaching or 1:1 meetings with supervisors. These findings suggest that increased coordination costs during WFH at least partially explain the drop in productivity.
The authors also found that women’s productivity was more negatively affected by WFH than men’s. However, this gender difference was not due to the presence of children in the home; rather, the likely culprit is other demands placed on women in the domestic setting. Separately, employees with children at home increased their working hours significantly more than those without children at home, and experienced a larger decline in productivity.
Among other considerations, these and other findings suggest that communication, coordination, and collaboration are hampered under WFH, and employers should not underestimate the importance of networking and uninterrupted work time for employee productivity.
Understanding how wartime casualties influence public support for withdrawal and which mechanisms underlie this relationship remains an important challenge, especially in the context of conflicts fought through military coalitions. In these coalitions, the political costs of losses can induce free-riding, where some coalition partners limit the combat operations of their troops—under-providing security in areas of operation—to avoid political backlash at home.
The authors study these and other dynamics in a highly relevant context—the ongoing military campaign in Afghanistan—where North Atlantic Treaty Organization (NATO) affiliated forces have conducted operations since 2001. The authors employ granular, nationally representative individual-level public opinion survey data collected across eight major troop-sending NATO countries from 2007-2011, including the United States, United Kingdom, and other key troop-contributing coalition partners. These surveys cover a critical phase of NATO operations in Afghanistan, including the troop surge.
The authors identify combat events involving casualties of a troop-sending nation around the interview date specific to each respondent and specific to the nationality of the respondent. Using a series of quasi-experimental designs, the authors provide novel and compelling causal evidence linking battlefield losses to public demand for withdrawal in troop-sending countries and demonstrate the role of media coverage in shaping civilian attitudes toward the war. Specifically, they show that country-specific casualty events are associated with a significant worsening of public support for continued engagement in the conflict.
To assess this finding, the authors take advantage of the otherwise exogenous timing of prominent events that crowd out coverage of troop fatalities. In other words, if other news events—in this case, major sporting matches—exert news pressure such that war coverage is likewise diminished, would this alter public opinion about the war in meaningful ways? The answer is yes. The authors find compelling evidence that the elasticity of conflict coverage with respect to own-country casualties diminishes significantly when sporting events introduce news pressure. They also find that public support for the war is unaffected by own-country casualties when news coverage has been crowded out by sporting matches.
Bottom line: the authors provide credibly causal evidence that public demands for withdrawal increase with war-related casualties and demonstrate that media coverage is likely a central driver of changes in sentiment. These results are important and relevant in understanding the economics of conflict and the policy implications of battlefield dynamics. When democratic countries participate in a foreign military intervention, public support for the war is a key constraint, to which multilateral military interventions may be particularly sensitive.
Governments around the world have deployed numerous policy instruments to control the spread of COVID-19, with some instruments, such as large-scale lockdowns, causing significant economic harm. These costs have been especially pronounced in developing countries, where economic slowdowns associated with COVID-19 policies combined with weak social safety nets were expected to push 71 to 100 million people into extreme poverty in 2020.
Domestic travel bans are a particularly severe and relatively common restriction. Though motivated in part by simulation exercises that model them as effective methods for reducing the spread of disease, travel bans also impose substantial and inequitable economic costs, which make them difficult to sustain indefinitely. As a result, these policy instruments necessarily involve two decisions: (i) whether to restrict freedom of movement and (ii) for how long to do so.
To examine these decisions, the authors focus on domestic travel bans implemented by developing countries, which frequently have large populations of migrant workers. A United Nations report examining data from 70 countries, covering more than 70% of the global population, found that more than 763 million people were living within their home country but outside their region of birth in 2005. In addition, the rural-to-urban migration most affected by COVID-19 mobility restrictions is more common in developing countries than in the developed world, and the presence of a large population that may respond to economic shocks by moving has motivated many developing countries to utilize travel bans to prevent the spread of disease.
For this work, the authors estimate the impact of travel ban duration on the spread of COVID-19 by simulating disease transmission using a standard model that mimics a real-world scenario facing many developing countries, in which migrants leaving an urban hotspot spread infections to a rural destination. The results from this modeling exercise generate their key hypothesis: that the impact of travel bans is nonlinear in duration.
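The nonlinearity mechanism can be illustrated with a minimal SIR-style sketch. This is not the authors' model: the parameters (`beta`, `gamma`, `migrant_share`) and the single-city setup are illustrative assumptions. The point is that the number of active infections migrants export when a ban lifts is hump-shaped in ban duration: small early (the urban epidemic has not yet grown), largest at intermediate durations (the ban lifts near peak prevalence), and small again late (the epidemic has burned out).

```python
# Minimal single-city SIR sketch of the travel-ban nonlinearity (illustrative
# parameters, not the authors' calibration).
def exported_infections(ban_days, beta=0.25, gamma=0.1, migrant_share=0.05):
    """Run daily SIR dynamics in the urban hotspot for the ban's duration,
    then return the share of infectious migrants leaving when it lifts."""
    S, I, R = 0.999, 0.001, 0.0   # seeded urban outbreak (population shares)
    for _ in range(ban_days):
        new_infections = beta * S * I
        new_recoveries = gamma * I
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
    return migrant_share * I      # active infections carried to rural areas

short, medium, long_ban = (exported_infections(d) for d in (10, 60, 250))
# Intermediate-length bans export the most active infections.
assert medium > short and medium > long_ban
```

With these illustrative parameters, a 10-day ban lifts before the urban epidemic has grown, a 250-day ban lifts after it has largely burned out, and a 60-day ban lifts near peak prevalence, exporting the most infections to rural destinations.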
To test this hypothesis empirically, they examine a natural experiment in Mumbai, India—the country’s financial capital and initial COVID-19 epicenter—which relaxed travel bans after varying durations. On March 25, the country imposed a nationwide lockdown, maintaining a ban on domestic travel out of the city and causing immense suffering as the economy rapidly contracted and unemployment rose, especially among migrant workers, who do not have access to the social safety net in India. Under intense pressure, the government allowed the first wave of migrants to return to homes outside Mumbai’s state of Maharashtra on May 8. Phase 2 migrants, returning to districts in the Mumbai Metropolitan Area, were allowed to leave on June 5, and Phase 3 migrants, departing to all other destinations, were able to leave on August 20. Finally, the authors used cross-country data to examine travel bans in Indonesia, India, South Africa, the Philippines, China, and Kenya. Together, these countries comprise roughly 40% of the global population.
The authors’ model and empirical results are in agreement about domestic travel bans: relatively short and relatively long restrictions can successfully limit the spread of COVID-19; however, intermediate length bans—once lifted—can significantly increase COVID-19 growth rates, cumulative infections, and deaths. The full effect of travel bans can therefore only be quantified after they are lifted. More broadly, these results underscore that quantifying the unintended consequences of COVID-19 restrictions, including both disease and economic costs, is critical for policy decisions.
Why do individuals join armed groups? Research has pointed to several causes, including profit motives for gang members, economic incentives for those involved in civil conflicts, and nonmaterial motives such as intrinsic motivations that can be fueled, for example, by the desire for revenge, say, when a family member is killed by another group.
Economists have recognized the importance of nonmaterial motives for civil conflict. However, outside of self-reported narratives, there is no empirical evidence in economics on the importance of intrinsic motivation for armed group recruitment. This paper attempts to settle this debate and demonstrate how nonmaterial motives form by providing evidence on the formation, and effects, of intrinsic preferences to join armed groups in the eastern Democratic Republic of the Congo (DRC), where about 120 nonstate armed groups, some of them considered foreign, operate, and where numerous local militias have formed to oppose them.
The authors assembled a yearly panel dataset on the occupational choices and household histories of 1,537 households from 239 municipalities, and on the violence perpetrated by armed actors against those households, dating back to 1990. They measured exposure to attacks on households and participation in armed groups using household histories; because of the specific context of the study, and approaches designed to minimize concerns of misreporting, participation histories could be reconstructed. The authors’ main analysis exploits variation in exposure to foreign armed group attacks across and within households over time.
Employing a many-layered methodology to, among other things, isolate the causal effect of an attack by foreign armed groups, the authors find that if a household has been attacked by a foreign armed group, the probability that an individual in that household participates in a Congolese militia is 2.55 pp (2.36 times) larger in each subsequent year. This effect is so large that it drives the effect of attacks by any armed group on participation in any armed group.
To assess the conditions of external validity of this result, the authors examine heterogeneous effects during years in which state forces are present in, or absent from, the villages in which individuals participate in armed groups. They find that the baseline estimate is entirely driven by years in which state forces are absent. Using plausibly exogenous variation in the presence of state forces, they conclude that attacks on household members forge preferences for joining militias, but that those preferences are expressed through actual joining only in years in which state forces are absent and unable to repress them.
The authors find that the main effect is consistent with the formation of preferences arising from parochial altruism towards family members, and rule out leading alternative causal channels that could explain their baseline estimate. The effect of victimization on participation is so large that it would take a prohibitive increase in income outside armed groups to undo it—a permanent 18.2-fold increase in yearly per capita income.
In sum, this paper provides evidence for the forging of rebels by showing that violent popular movements arise from the interaction of intrinsic motivation to take up arms and state weakness. The results suggest that violations perpetrated by foreign armed groups generate among the relatives of the victims a desire, and possibly a moral conviction, to fight back. This work thus provides first-of-its-kind evidence that rebels are forged through the forging of preferences, and shows that nonmaterial motives can explain a high-stakes conflict and a high-stakes development outcome.
Assortative mating, or who marries whom, fundamentally shapes our society, as it determines the joint attributes of married couples. Recent descriptive studies raise the question of why college graduates are so likely to marry someone from their own institution or field of study. Explanations include pure selection, whereby individuals match on traits correlated with their choice of college field or institution, and causation, whereby the choice of college education causally affects whether and whom one marries, operating through a number of channels, including search frictions or preferences for spousal education.
Sorting out these explanations is central both to gauge the socio-economic consequences of college education and to understand how education policy and college admission criteria may influence outcomes in the marriage market. Furthermore, evidence that individuals match with the same education types primarily because of search frictions as opposed to preferences would suggest that marriage markets are much more local than typically modeled or described by economists. This research analyzes these explanations and, by doing so, examines the role of colleges as marriage markets.
The context of the authors’ study is Norway’s postsecondary education system. The centralized admission process and rich nationwide data allow them to observe not only people’s choice of college education (institution and field) and workplace, but also whether and whom they marry (or cohabit with), and to credibly study the effects of college enrollment. The authors find the following:
- The type of postsecondary education is empirically important in explaining whom but not whether one marries.
- Enrolling in a particular institution makes individuals much more likely to marry someone from that institution. These effects are especially large if the individuals overlapped in college, are sizable even for those who studied a different field, and are not driven by geography.
- Enrolling in a particular field increases the chances of marrying someone within the field but only insofar as the individuals attended the same institution. Enrolling in a field makes it no more likely to marry someone from other institutions with the same field.
- The effects of enrollment on educational homogamy (or marriage between people from similar backgrounds) and assortativity vary systematically across fields and institutions, and tend to be larger in more selective and higher-paying fields and institutions.
- Only a small part of the effect of enrollment on educational homogamy can be attributed to matches within the same workplace.
- Lastly, the effects on the probability of marrying someone within their institution and field vary systematically with cohort-to-cohort variation in sex ratios within institutions and fields. This finding is at odds with the assumption in canonical matching models of large and frictionless marriage markets.
Taken together, these findings suggest that colleges are effectively local marriage markets, mattering greatly for whom one marries, not because of the predetermined traits of the students who are admitted but as a direct result of attending a particular institution at a given time.
COVID-19 triggered a mass social experiment in working from home (WFH). Americans, for example, supplied roughly half of paid work hours from home between April and December 2020, as compared to 5 percent before the pandemic. Will this phenomenon continue after the pandemic ends?
To answer this question and to gauge other post-pandemic effects, the authors employed multiple waves of data from an original cross-sectional survey that they have fielded about once a month since May 2020 and that includes 27,500 responses from working-age Americans. Their findings include the following:
- Employers plan for workers to supply 20.5 percent of full workdays from home after the pandemic ends. Roughly speaking, WFH is feasible for half of employees, and the typical plan for that half involves two workdays per week at home. Business leaders often mention concerns around workplace culture, motivation, and innovation as important reasons to bring workers onsite three or more days per week, while acknowledging net WFH benefits for one or two days per week.
- Most workers welcome the option to work remotely one or more days per week: in the authors’ data, respondents are willing to accept pay cuts of 8 percent, on average, for the option to work from home two or three days per week after the pandemic. WFH desires are pervasive across groups defined by age, education, gender, earnings, and family circumstances. The actual incidence of WFH rises steeply with education and earnings.
- The extent of WFH in the post-pandemic economy will be four times its pre-pandemic level, but only two-fifths of its average level during the pandemic. This implies a partial reversal of the massive COVID-induced surge in WFH. The reversal mostly involves adjustments on the intensive margin, whereby many people who worked from home five days per week during the pandemic will shift to two or three days per week after it ends.
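The magnitudes in these bullets are mutually consistent; a quick check using the article’s round figures (5 percent of workdays at home pre-pandemic, roughly 50 percent during, 20.5 percent planned post-pandemic):

```python
# Back-of-the-envelope check of the WFH shares quoted above.
# All inputs are the article's own round numbers, not new estimates.
pre_pandemic = 0.05      # share of paid workdays supplied from home before COVID-19
during_pandemic = 0.50   # approximate share, April-December 2020
post_pandemic = 0.205    # employer-planned share after the pandemic

ratio_to_pre = post_pandemic / pre_pandemic        # "four times its pre-pandemic level"
ratio_to_during = post_pandemic / during_pandemic  # "two-fifths of its pandemic level"

print(f"post/pre    = {ratio_to_pre:.1f}")    # 4.1
print(f"post/during = {ratio_to_during:.2f}")  # 0.41
```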
These shifts in work patterns will have important consequences. For example, high-income workers, especially, will enjoy large benefits from greater remote work. Also, spending in major city centers will fall by 5-10 percent or more relative to pre-pandemic levels. Finally, the authors’ data on employer plans and the relative productivity of WFH imply a 6 percent productivity boost in the post-pandemic economy due to re-optimized working arrangements. Less than one-fifth of this productivity gain will show up in conventional productivity measures, because they do not capture gains from less commuting.
Public works programs are often used to address the social challenges of unemployment, underemployment, and poverty by offering temporary employment for the creation of public goods, such as roads or infrastructure. Such workfare programs have theoretical advantages over cash-transfer programs, including provision to more disadvantaged recipients, who self-select through their willingness to work, as well as potential long-run benefits that accrue via work experience.
To assess the practical effects of these theoretical promises, the authors study labor-intensive public works programs in Sub-Saharan Africa that were adopted in response to such shocks as economic downturns, climatic shocks, or episodes of violent conflict, and that offer public employment as a stabilization instrument. In doing so, the authors make two important contributions: They analyze both the contemporaneous and post-program impacts of a randomized public works program on participants’ employment, earnings, and behaviors; and they leverage machine learning techniques to study the heterogeneity of program impacts, which is key to assessing whether departing from self-targeting would improve program effectiveness.
This second contribution is key because it suggests that improvements in self-targeting or targeting are first-order program design questions. Given the estimated distribution of individual program impacts, the authors show that a lower offered wage (and the subsequent change in self-targeting) was unlikely to improve program performance. In contrast, a range of practical targeting mechanisms perform as well as the machine learning benchmark, leading to stronger impacts during the program without reductions in post-program impacts.
The authors examine a program implemented by the government of Côte d’Ivoire in the aftermath of the 2010/2011 post-electoral crisis. Funded by an emergency loan from the World Bank, the program’s stated objective was to improve access to temporary employment opportunities among low-skilled, young (18-30) men and women in urban or semi-urban areas who were unemployed or underemployed, and to develop their skills through work experience and complementary training. Participants were paid the statutory minimum daily wage.
All young men and women in the required age range residing in one of 16 urban localities in Côte d’Ivoire were eligible to apply to the program. Because the number of applicants outstripped the available slots in each locality, access was allocated by public lottery, allowing for a robust causal evaluation of the program’s impacts. In addition, randomized subsets of participants were also offered benefits such as entrepreneurship and job-search training. Surveys of the treatment and control groups occurred at baseline, during the program (4 to 5 months after it started), and 12 to 15 months after the program ended.
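The lottery design lends itself to a simple sketch. The following minimal simulation (synthetic applicants and a hypothetical earnings effect of 15 currency units; none of these numbers come from the study) shows how random assignment yields an intention-to-treat estimate as a simple difference in means between winners and losers:

```python
import random

random.seed(0)

# Synthetic applicant pool. Outcome levels and the treatment effect are
# invented for illustration; they are not the Côte d'Ivoire program's numbers.
n_applicants, n_slots = 1000, 400
applicants = range(n_applicants)

# Public lottery: a fair random draw of n_slots winners.
winners = set(random.sample(range(n_applicants), n_slots))

def monthly_earnings(treated):
    base = 50 + random.gauss(0, 10)       # counterfactual earnings
    return base + (15 if treated else 0)  # hypothetical program premium

outcomes = {i: monthly_earnings(i in winners) for i in applicants}

treat_mean = sum(outcomes[i] for i in winners) / n_slots
control_mean = sum(outcomes[i] for i in applicants
                   if i not in winners) / (n_applicants - n_slots)
itt = treat_mean - control_mean  # intention-to-treat estimate
print(f"ITT estimate: {itt:.1f} (true effect: 15.0)")
```

Because the lottery is random, winners and losers are comparable on average, so the difference in means is an unbiased estimate of the offer's effect.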
The authors’ findings include the following:
- Impacts on employment are limited to shifts in the composition of employment towards the public works wage jobs during the program, with no lasting post-program impacts on the likelihood or composition of employment.
- Public works increase earnings during the program, but post-program impacts on earnings are limited.
- Savings and psychological well-being improve both during and (to a lesser extent) post-program. However, the authors find no long-lasting effects on work habits and behaviors, despite improvements during the program.
Finally, impacts on earnings remain substantially below program costs even under improved targeting. All things considered, should public works programs be deprioritized in favor of welfare programs with more efficient targeting procedures and lower implementation costs? Not necessarily. The authors stress that their analysis does not account for all possible benefits of the program, both for the beneficiaries themselves and for non-beneficiaries. For example, they observe lasting effects on psychological well-being and savings among beneficiaries that are not included in the cost-benefit ratios; they acknowledge the likelihood of other positive externalities, such as a reduction in crime or illegal activity due to an incapacitation effect; and they do not quantify the societal value of the upgraded infrastructure.
What drives big moves in national stock markets? The benchmark view in economics and finance holds that stock price changes reflect rational responses to news about discount rates and corporate earnings, which suggests that big daily moves are accompanied by readily identifiable developments that affect discount rates and anticipated profitability. Another view, first introduced by Keynes in 1936, suggests that investors price stocks based not on their opinions about fundamental values but on their opinions about what others think about stock values.
In either case, though, these forces are described in contemporaneous news accounts, according to the authors, who employ such accounts to distill information about what triggers big moves in national stock markets. The authors examine next-day newspaper accounts of large daily jumps in 16 national stock markets to assess their proximate cause, clarity as to cause, and the geographic source of the market-moving news. Their sample of 6,200 market jumps yields several findings:
- Policy news, mainly that associated with monetary policy and government spending, triggers a greater share of upward than downward jumps in all countries.
- The policy share of upward jumps is inversely related to stock market performance in the preceding three months. This pattern strengthens in the postwar period.
- Market volatility is much lower after jumps triggered by monetary policy news than after other jumps, unconditionally and conditional on past volatility and other controls.
- Greater clarity as to jump reason also foreshadows lower volatility. Clarity in this sense has trended upwards over the past century.
- Finally, and excluding US jumps, leading newspapers attribute one-third of jumps in their own national stock markets to developments that originate in or relate to the United States. The US role in this regard dwarfs that of Europe and China.
Regarding their final finding, the authors note that from 1980 to 2020, 32 percent of all jumps in non-US stock markets were triggered by news emanating from or about the United States. This assessment reflects the reportage in leading own-country newspapers about their national stock markets. Also, jumps in other countries attributed to China-related developments were rare before the mid 1990s but have become much more frequent in recent years.
Armed actors that move into a new territory have two broad choices: pillage and plunder to extract wealth, or enforce property rights and markets and, thus, extract wealth via various forms of taxation and fees. This paper examines why armed actors restrain their power to arbitrarily expropriate wealth.
To address this question, the authors analyzed the incentives to refrain from violence and arbitrary theft for an armed group in eastern Democratic Republic of the Congo (DRC), the Forces Démocratiques de Libération du Rwanda (FDLR). The FDLR is a foreign armed group created from former Rwandan armed forces and militia members who perpetrated the 1994 Rwandan genocide. Known as one of the most brutal of the 122 armed groups in eastern DRC today, the FDLR often engaged in violence, sexual violence, torture, and pillage. Yet, despite this tendency to use violence arbitrarily, by 2009 the FDLR had created state functions in eastern DRC, collecting taxes and protecting the villages they taxed. They created markets that they taxed, blocked villages to impose transit fees, and raised poll and mining taxes. Arbitrary violence was kept low.
In March 2009, a military operation of 30,000 Congolese and UN soldiers dismantled the FDLR's control and drove them from the villages, but was unable to defeat them permanently. FDLR forces regrouped in a nearby forest where Congolese security presence was limited. Suddenly unable to tax the villages they had formerly controlled, the FDLR launched sporadic violent attacks to expropriate wealth from villagers.
Why did the FDLR originally use its power to perform state functions instead of engaging in arbitrary expropriation? In addition to possibly caring for those under their control, the authors posit that the FDLR had secured a property right over revenues from theft over a long horizon, leading them to tax villages instead of arbitrarily expropriating them, which would destroy the growth that taxation depends on. They took a long-run view, in other words, and determined that there was more to gain from protection and extraction.
Indeed, employing an event-study and difference-in-differences framework, this is precisely what the authors find: the ability to permanently steal disciplines the use of violence by armed actors and incentivizes state functions. The finding is captured in the words of an armed actor informant: “The bandit is only your friend if he gets something out of it.”
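As a sketch of the difference-in-differences logic (with entirely invented group means, not the paper's data), the estimator compares the change in outcomes for treated villages to the change for control villages:

```python
# Two-by-two difference-in-differences on synthetic data.
# Groups: villages where the armed group lost the ability to tax ("treated")
# vs. others. All values are made up to illustrate the estimator.
mean_outcome = {
    # (group, period): average arbitrary-violence incidents per village
    ("treated", "pre"): 1.0,
    ("treated", "post"): 4.0,
    ("control", "pre"): 1.2,
    ("control", "post"): 1.7,
}

def did(means):
    """Difference-in-differences: treated change minus control change."""
    treated_change = means[("treated", "post")] - means[("treated", "pre")]
    control_change = means[("control", "post")] - means[("control", "pre")]
    return treated_change - control_change

effect = did(mean_outcome)
print(f"DiD estimate: {effect:+.1f} incidents per village")  # +2.5
```

Subtracting the control group's change nets out common time trends, so the remaining difference is attributable to the treatment, under the usual parallel-trends assumption.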
This work offers a new insight into the economic logic of violence, namely the disciplining effect of the time horizon of stealing, and provides an explanation for the creation, or collapse, of state functions. The mechanism also offers a new account of how classic policies against crime can backfire: while some existing research shows that crackdowns can drive criminal activity to other locations, this work reveals how crackdowns can lead crime to switch to a socially costlier activity in the same location, and shows that armed actors’ stealing horizon protects civilians.
One of the notable trends in the US manufacturing sector in recent decades has been a pronounced increase in concentration and markups, with one key exception—the consumer-packaged goods (CPG) industry. Dominant national brands of the past half century have actually experienced falling sales and decreasing market shares at the hands of smaller CPG firms.
In 2018, 16,000 smaller CPG manufacturers accounted for 19% of all US CPG sales, an increase of 2 percentage points ($2 billion) over the previous year. That same year, the 16 largest CPG manufacturers accounted for 31% of CPG sales, down from 33% five years earlier. This rapid growth of smaller brands represents a striking, structural break in the historically high and persistent concentration of CPG categories and the dominance by large, national brands.
What accounts for this shift? Industry experts routinely point to a demand-side explanation, identifying the generation of Millennials—consumers born after 1980—as the leading cause of this decline in the sales of established brands, often citing surveys that reveal a preference for smaller brands among younger consumers. However, this theory lacks a mechanism for understanding why Millennials might form intrinsically different tastes from older generations.
This new research proposes an alternative idea. While placing their hypothesis within the context of existing consumption capital theory and maintaining the neoclassical assumption of stable tastes, the authors posit that generational differences in behavior reflect heterogeneity in the accumulation of consumption and brand capital. Older generations of consumers had already accumulated decades of consumption capital with established, national brands by the time that new craft and artisanal CPG products started to enter. In contrast, the younger Millennial generation of consumers often had access to both craft and established national brands as they started to form their shopping habits.
The authors look to the US beer industry to conduct an empirical test. They study the take-home segment of the US beer industry, one of the leading examples of an industry disrupted by the sudden emergence of craft brands, whose sales grew from $10 billion to $29.3 billion between 2010 and 2019. Surveys indeed find a striking generational share gap: half (50%) of older Millennials (25-34) drink craft beer, compared with 36% of US consumers overall. As with other CPGs, Millennials may value the perception of higher quality for craft beer.
The authors manually assembled a novel database from various industry sources that tracks the history of all the craft beer brands sold in the US, which allowed them to exploit the geographic differences in the timing and speed of diffusion of new craft beer brewers and local availability of craft beer. They also employ a national database containing the 2004-2018 purchase activity for a nationally representative shopping panel of over 100,000 U.S. households.
Among other findings, the authors show that 85.3% of the generational share gap is explained by consumption capital. Therefore, while Millennials buy craft beer at higher rates than older consumers, the differences in intrinsic preferences cannot account for the disruption to the market structure of established beer brands. Instead, generational differences in craft beer demand are mostly an artifact of generational differences in the historic availability of brands during early adulthood. Put another way, it is not so much that Millennial beer drinkers have different tastes than, say, Baby Boomers, it is more that Millennials were exposed to craft beers when they entered adulthood.
Importantly for the beer industry, this work suggests sustained growth in craft beer share, reaching almost 30% of the market by 2030, reflecting the changing composition of beer consumers as older generations die and a new generation of adults—Generation Z—enters the market and forms its beer preferences.
Economists and policymakers have long embraced the idea that high uncertainty induces households to spend less and firms to reduce investment and employment. However, recent research has shown that the empirical evidence on these channels is at best “suggestive” and that more work is needed to establish this causal link clearly.
This paper addresses this gap by employing randomized control trials in a new large cross-country survey of European households to induce exogenous variation in the macroeconomic uncertainty perceived by households, and to then study the causal effects of the resulting change in uncertainty on their spending relative to that of untreated households. This work is based on a new, population-representative survey of households in Europe implemented by the European Central Bank (ECB). The authors’ survey spans the six largest euro area countries and thousands of households.
The authors find that higher uncertainty leads to sharply reduced spending by households on both non-durables and services in subsequent months as well as on some durable and luxury goods and services. In short, the authors provide direct causal evidence that economists and policymakers can stop hedging their claims about the effect of high uncertainty on household and business spending decisions: Higher uncertainty makes households spend less on average.
Importantly, the authors find that this effect is economically large over a period of several months. In contrast, they find little effect of the first moment of expectations on household spending. A central challenge in the uncertainty literature has been separately identifying the effects of expectations about first and second moments, since most large uncertainty events are also associated with significant deteriorations in the expected economic outlook. The authors’ results suggest that, at least when it comes to households, it is uncertainty that is driving declines in spending rather than concerns about the expected path of the economy.
These uncertainty-driven declines mainly involve discretionary spending, such as health and personal care products and services, entertainment, holidays, and luxury goods. Spending is most affected by uncertainty for individuals working in riskier sectors, as well as for households whose investment portfolios are most exposed to risky financial assets. The authors also find that when individuals face higher uncertainty, they report being less likely to allocate new financial investments to mutual funds or cryptocurrencies. On the other hand, they show that (exogenously induced) uncertainty does not influence household attitudes toward investing in real estate.
The views expressed in this paper are those of the authors and do not necessarily reflect the views of the European Central Bank or any other institution with which the authors are affiliated.
In recent decades, researchers in economics and finance have increasingly adopted experimental and quasi-experimental methods to study the effects of large-scale economic and financial shocks. These methods compare a group of firms or households directly exposed to a given shock to an unexposed control group, allowing researchers to estimate whether the shock caused differences in outcomes between the treated and control groups.
A shortcoming of these quasi-experimental methods is that they typically do not measure the total effect of a shock. Most studies estimate only the effect of direct treatment, which captures part of the total effect; the remainder is driven by spillovers from directly exposed firms and households to others. Firms and households do not experience business or financial shocks in a vacuum, in other words, but in relation to other firms and households that may not have directly experienced the shock.
These spillovers operate through what economists call general equilibrium channels, including price and wage changes, agglomeration forces, and input-output networks. For instance, researchers interested in the effects of fiscal stimulus might compare firms that receive fiscal support to firms that do not. If stimulus causes directly exposed firms to increase hiring, wages in local labor markets might rise, which affects all firms in the region.
Estimating spillovers is key for researchers because it helps them understand which general equilibrium channels need to be included in economic models, and whether micro data estimates are informative about higher levels of aggregation. For example, consider the economic shocks and the policy responses of the Great Recession or the current pandemic. In such cases, many firms and households are simultaneously affected, so general equilibrium forces are likely large and operate through many different channels.
Huber’s contribution in this paper is threefold:
- First, he outlines how researchers can estimate spillovers operating among firms and households that are connected in some way, for example firms in the same region, sector, or network.
- Second, he highlights three issues that can introduce mechanical bias into spillover estimates: multiple types of spillovers, measurement error, and nonlinear effects. Or to put it simply: spillovers are complicated. For instance, spillover estimates are biased when researchers do not account for the fact that spillovers may operate simultaneously across multiple groups, such as when a shock to firms generates spillovers both onto firms in the same region and same sector.
- Third, Huber proposes practical solutions to these estimation challenges, such as instrumental variables, testing for heterogeneous effects, and flexible functional forms.
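Huber's first point above, estimating spillovers among connected units, can be sketched with a minimal example, assuming spillovers operate through a region's treated share (synthetic data; the effect sizes and the simple two-step estimator are illustrative choices, not Huber's own specification):

```python
import random
from statistics import mean

random.seed(1)

# Synthetic firms grouped into regions. True direct effect of treatment = 2.0;
# true within-region spillover = 1.0 per unit of treated share. All values are
# invented to illustrate the estimation idea.
n_regions, firms_per_region = 200, 20
gaps, untreated = [], []  # untreated holds (region treated share, outcome)
for _ in range(n_regions):
    d = [random.random() < 0.5 for _ in range(firms_per_region)]
    share = sum(d) / firms_per_region
    y = [2.0 * di + 1.0 * share + random.gauss(0, 0.3) for di in d]
    treated_y = [yi for di, yi in zip(d, y) if di]
    control_y = [yi for di, yi in zip(d, y) if not di]
    if treated_y and control_y:
        # Direct effect: within-region treated-control gap (the region's
        # treated share is common to both groups, so it differences out).
        gaps.append(mean(treated_y) - mean(control_y))
    untreated += [(share, yi) for yi in control_y]

direct = mean(gaps)

# Spillover: among untreated firms only, the slope of the outcome on the
# region's treated share (a simple bivariate OLS coefficient).
s_bar = mean(s for s, _ in untreated)
y_bar = mean(y for _, y in untreated)
spill = (sum((s - s_bar) * (y - y_bar) for s, y in untreated)
         / sum((s - s_bar) ** 2 for s, _ in untreated))

print(f"direct effect ~ {direct:.2f} (true 2.0)")
print(f"spillover     ~ {spill:.2f} (true 1.0)")
```

The sketch also hints at Huber's warnings: if spillovers ran through sectors as well as regions, or if the treated share were measured with error, the simple slope above would be biased.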
Building models that closely approximate reality is important for researchers as they try to determine the effects, in this case, of economic shocks and the policies prescribed to address them. By estimating spillovers directly, researchers can contribute to the development of realistic general equilibrium models and, thus, improve their understanding of the connection between micro data and aggregate outcomes. While seemingly abstract, these improvements in models can make important contributions to our understanding of how the economy works.
Ever since Gary Becker’s path-breaking 1957 work on discrimination, when he introduced the profession to a simple framework for racial bias and its effect on the outcomes of white and Black individuals, economists have built a variety of theoretical models that try to explain the existence of discrimination. In recent years, some researchers have taken a more empirical view of the matter and parsed rich administrative data to find evidence for discrimination in different settings. It is sometimes unclear, however, how this recent empirical literature relates to the classic theoretical framework of Becker and others.
In this new work, Peter Hull of UChicago’s Kenneth C. Griffin Department of Economics offers a reconciliation of these two literatures, developing a framework for interpreting modern empirical tests of decision-making in terms of racial bias. In doing so, Hull shows how such tests can detect different forms of bias, from canonical taste-based discrimination to inaccurate beliefs or stereotypes, and offers a new approach to distinguishing between the two.
Imagine a judge who must decide which defendants to release on bail before trial, with defendants assigned effectively at random to different judges. A recent empirical literature uses such variation to compare the criminal misconduct outcomes of white and Black defendants whom a judge is just indifferent about releasing. Inspired by the theory of Gary Becker, racial disparities “at the margin” of treatment may indicate “taste-based discrimination,” in which judges hold Black defendants to a different standard than observably similar white defendants. But more recent theory suggests other explanations, such as the judge acting on biased beliefs about a defendant’s potential for criminal misconduct, or on racial stereotypes.
It is theoretically possible, in other words, that a judge with different “marginal outcomes” for white and Black defendants harbors no racial animus but makes systematic decision-making mistakes that favor white defendants. In practice, judges may base their decisions on inaccurate predictions of a defendant’s misconduct risk after reviewing the defendant’s background, prior criminal behavior, and other factors. Are these “bad guesses” necessarily evidence of racial bias?
Hull finds that the answer to that question is “No.” Differences in decision-making at the margin can rule out the possibility that a judge is basing decisions on accurate predictions of misconduct risk in a risk-neutral way. But this does not mean the judge is engaged in canonical taste-based discrimination. Instead, this finding from the “marginal outcome tests” in the recent empirical literature could be attributed to a judge’s biased beliefs or, more prosaically, to systematic mistakes in predicting whether individual defendants of different races will commit pretrial crimes.
Hull then offers a new test to disentangle taste-based discrimination from mistaken judgment. This test relies not on the outcomes of white and Black defendants just at the margin of a judge’s decision, but on how these marginal outcomes change as a judge becomes more or less lenient. Concretely, imagine that our judge has some internal prediction of pretrial misconduct that she uses to rank white and Black individuals by her desire to release them before trial. If a defendant’s predicted misconduct falls below some potentially race-specific threshold, the defendant is released before trial, while defendants with high misconduct predictions are detained. Currently, researchers look at the outcomes of individuals at these thresholds to determine whether or not a judge is racially biased.
Hull’s insight is to also consider how the misconduct outcomes change as that threshold point moves. In other words, are the judge’s bail decisions resulting in fewer or more crimes at the margin as she releases more or fewer defendants? Hull shows that if marginal outcomes always increase, by race, as more defendants are released, then one cannot reject Becker’s classic model of taste-based discrimination. If, however, marginal outcomes do not increase with release rates, then the judge is likely simply making mistakes.
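The logic of this extended test can be illustrated with a small simulation. In the sketch below (synthetic, uniformly distributed risk scores and a hypothetical judge who ranks defendants by true risk), accurate risk-neutral predictions imply that the marginal misconduct rate rises monotonically as the release threshold loosens, the pattern consistent with Becker's model:

```python
import random

random.seed(2)

# Synthetic defendants with true misconduct probabilities; the hypothetical
# judge releases everyone below a risk threshold. Illustrative numbers only.
n = 100_000
risks = sorted(random.random() for _ in range(n))

def marginal_outcome(release_rate):
    """Average misconduct risk of the defendants just at the release margin."""
    k = int(n * release_rate)
    window = risks[max(0, k - 500):k]  # the 500 riskiest released defendants
    return sum(window) / len(window)

rates = [0.2, 0.4, 0.6, 0.8]
margins = [marginal_outcome(r) for r in rates]

# Accurate ranking implies marginal outcomes increase with the release rate.
assert all(a < b for a, b in zip(margins, margins[1:]))
print([round(m, 2) for m in margins])  # close to [0.2, 0.4, 0.6, 0.8]
```

A judge making systematic prediction mistakes would scramble this ranking, so marginal outcomes need not rise with the release rate, which is the telltale sign Hull's test looks for.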
Importantly, Hull stresses that data can only reveal so much about a person’s intentions. Any conclusions that his test reveals about taste-based discrimination and biased beliefs in a judge’s pre-trial bail decisions, for example, reflect what can be said about the judge’s behavior from her actions, and not necessarily her “true” or intended behavior. The paper nevertheless argues that the results of these empirical tests can be very useful for policymaking.
This work unites the classic theoretical framework of racial bias with recent empirical research in several settings, both within and beyond the pretrial setting and criminal justice as a whole. On one hand, Hull shows that existing marginal outcome tests have limits in detecting the canonical taste-based discrimination of Gary Becker’s model. On the other hand, he shows that a new test that more fully characterizes marginal outcomes can provide a more complete view of racial bias. The paper discusses how both tests can be applied in various settings and summarizes directions for future empirical work.
Since the 1950s, US policymakers have treated unemployment insurance (UI) as a discretionary tool in business cycle stabilization, extending the generosity of benefits in recessions. This was particularly evident during the Great Recession, when benefit durations were raised almost four-fold at the depth of the downturn. While critics emphasized the costly supply-side effects of more generous UI, supporters pointed to potential stimulus benefits of transfers to the unemployed. These issues are pertinent again as policymakers debate the benefits of UI extensions during the COVID-19 pandemic.
Existing research misses the potential interactions between UI and aggregate demand. Most prior work has studied UI in partial equilibrium (which holds much of the economy constant), while analyses in general equilibrium have focused on environments without macroeconomic shocks or in which prices and wages adjust so quickly that they eliminate the effect of aggregate demand on the overall level of production.
This paper analyzes the output and employment effects of UI in a general equilibrium framework with macroeconomic shocks and nominal rigidities (when prices and wages are slow to change). Kekre finds that the effect of UI on aggregate demand makes it expansionary when monetary policy is constrained,
as during recent economic crises when nominal interest rates have been near zero. An increase in UI generosity raises aggregate demand through two key channels: by redistributing income to the unemployed, who have a higher marginal propensity to consume than the employed, and by reducing the need for all individuals to save for fear of becoming unemployed in the future. If monetary policy does not respond to the resulting demand stimulus by raising the nominal interest rate,
this raises equilibrium output and employment.
By calibrating his model to the U.S. economy during the Great Recession, Kekre reveals an important stabilization role of UI through these channels. He studies 13 shocks to UI duration associated with the Emergency Unemployment Compensation program of 2008 and the Extended Benefits program. With monetary policy and unemployment matching the data over 2008-2014, the observed extensions in UI duration had a contemporaneous output multiplier around or above 1. These effects are pronounced and would affect millions of people: at its peak, the unemployment rate would have been 0.5 percentage points higher absent these extensions.
In the wake of a series of tragic incidents in recent years, police reform has become a central societal concern. This new research documents how LAPD officers responded to police reforms, and focuses on three key dates: 1998, when the first reform was introduced, which triggered an internal investigation for every complaint; 2001, when the Department of Justice ordered better documentation and more timely compliance; and 2002, when reforms were weakened such that commanding officers could dismiss complaints deemed frivolous.
How does such a dynamic play out in the data? The arrest-to-crime rate fell enormously after the first oversight change: by 40 percent from 1998 to 2002 for all crimes (those with victims, known as Part 1, and victimless, Part 2), and by 29 percent for Part 1 crimes alone. When oversight was reversed in late 2002, arrest rates immediately increased, and the rate for all crimes returned to its 1998 level by 2006. The Part 1 arrest rate, however, recovered only half of its initial decline. Prendergast interprets these outcomes as evidence of “drive and wave” disengagement, and he cites contemporaneous officer reports that corroborate this description. Of note, there were no such changes in arrest rates for the neighboring jurisdictions of the Los Angeles Sheriff’s Department over the same period.
To test his “drive and wave” hypothesis, Prendergast first looks at differences across crimes to see whether officers appropriately respond and investigate. For Part 1 crimes, which have victims (say, a burglary or assault), officers are more inclined to respond, especially as these cases are typically called into a station, leaving a record. By contrast, Part 2 crimes (like narcotics and prostitution) often rely on the officer witnessing the crime. In line with Prendergast’s “drive and wave” insight, narcotics arrests fell 44 percent from 1998 to 2001, and then rose by a similar amount after the 2002 reversal.
By failing to investigate crimes in a way that led to arrests, police harmed the victims of those crimes. Prendergast argues that the oversight changes created an imbalance in which the voice of victims in police oversight was largely ignored. This observation offers implications for the current debate on police reform. In particular, it shows that enhancing oversight by suspects without strengthening the voice of victims may backfire.
To support economies hit by the pandemic, governments have implemented large fiscal stimulus programs, but these programs have come at a steep price. In 2020, the advanced economies on average created extra public debt equaling 20 percent of GDP, pushing average debt-to-GDP to heights not seen since WWII. These exceptional debt levels are raising questions about how governments will ultimately finance them. Will such high levels require countries to inflate away part of their debts? Will individuals raise their inflation expectations and fuel an inflationary cycle?
While theory suggests that fiscal considerations may play an important role in driving inflation expectations, little empirical evidence exists on the matter. To address this empirical gap, the authors use a large-scale survey of US households that assesses whether households’ inflation expectations react to information about government finances. Some information relates to current levels of deficits and debt, whereas other information focuses on projected levels of debt in the future.
The authors find that current levels of deficits and debt have essentially no effect on households’ inflation expectations, nor does such information affect their expectations of the fiscal outlook. However, providing households with information about projected public debt levels a decade from now has more pronounced effects, summarized here:
First, households incorporate this information into their outlook and raise their expectations about future debt levels.
Second, they seem to assume that much of the rising debt levels will come from higher spending on the part of the government.
Third, they anticipate higher inflation, both in the short-run and over the next decade, in response to this information.
These results suggest that households are able to distinguish between transitory fiscal changes and more permanent ones. Information about current fiscal levels does not seem to affect their broader outlook about the fiscal situation, including for future interest rates and inflation. But information about future changes in public debt, perhaps because such changes are indicative of a more permanent shift in the fiscal outlook, leads households to anticipate some monetization of the debt.
This work offers important insights for current policymakers. Most households do not perceive current high deficits or current debt as inflationary, nor as being indicative of significant changes in the fiscal outlook. However, a persistently worsening fiscal outlook, with rising debt levels into the future, does seem to have a more powerful effect on expectations, including inducing households to expect some monetization of the future debt.
College students often seek information from business professionals about career choices that those professionals have made. Research has revealed that these informal exchanges are important, as they can alter students’ career expectations and choices. However, do all college students receive similar responses? This working paper is a first-of-its-kind exploration into whether student gender causally affects the information that students receive regarding various career paths.
The authors implemented a large-scale field experiment wherein undergraduate students interested in learning about various careers sent messages via an online professional platform. The messages, sent by students to 10,000 randomized recipients, asked preformulated questions seeking information about the professional’s career path. Four templates, based on university career center guidance, were used to test a specific hypothesis regarding whether gender influenced the type of information received by a student. The authors focused on two career attributes—work/life balance and competitive culture—both of which differentially affect the labor market choices of women.
The authors’ main finding is that gender was a key determinant of the type of information that professionals provided to students regarding work/life balance. In response to the broad question about the pros/cons of the professional’s field, the text of the responses reveals substantial gender disparities. Professionals are more than two times as likely to provide information on work/life balance issues to female students relative to male students.
Further, when students ask specifically about work/life balance, female students receive 28 percent more responses than male students. This means that the differential emphasis on work/life balance to female students in responses to the broad question is not entirely driven by perceptions that female students care more about this issue. Interestingly, there is no differential emphasis on workplace culture to female students.
These different answers to male and female students matter: The vast majority of these mentions of work/life balance are negative and increase students’ concern about this issue. At the end of the study, female students report being more deterred than male students from their preferred career path, and this is partly explained by the greater emphasis on work/life balance to female students.
Private equity (PE) has played an increasing role in health care management in recent years, with total investment increasing from less than $5 billion in 2000 to more than $100 billion in 2018. PE-owned firms provide the staffing for more than one-third of emergency rooms, own large hospital and nursing home chains, and are rapidly expanding ownership of physician practices. This role has raised questions about health care performance as PE-owned firms may have incentives more aligned with firm value than with consumer welfare.
This work focuses on PE and US nursing homes, a sector with spending of $166 billion in 2017, projected to grow to $240 billion by 2025. Nursing homes have historically had a high rate of for-profit ownership (about 70%), allowing the authors to study the effects of PE ownership relative to for-profit ownership more generally. Also, PE firms have acquired both large chains and independent facilities, enabling the authors to make progress in isolating the effects of PE ownership from the related phenomenon of corporatization in medical care.
The authors employ patient- and facility-level administrative data from the Centers for Medicare & Medicaid Services (CMS), which they match to PE deal data to observe about 7.4 million unique Medicare patients. The data include 18,485 unique nursing homes between 2000 and 2017. Of these, 1,674 were acquired by PE firms in 128 unique deals. Their findings include the following:
- Going to a PE-owned nursing home increases the probability of death during the stay and the following 90 days by 1.7 percentage points, about 10% of the mean. This estimate implies about 20,150 Medicare lives lost due to PE ownership of nursing homes during the authors’ sample period.
- The authors estimate a corresponding implied loss in life-years of 160,000. Using a conventional value of a life-year from the literature, this estimate implies a mortality cost of about $21 billion in 2016 dollars, or about twice the total payments made by Medicare to PE facilities during the sample period (about $9 billion).
- The total amount billed for both the stay and the 90 days following the stay increases by about 11%.
- Nurse availability per patient declines, while operating costs rise in categories that tend to drive profits for PE funds.
- Finally, attending a PE-owned nursing home increases the probability of receiving antipsychotic medications—discouraged in the elderly due to their association with greater mortality—by 50%. Similarly, patient mobility declines and pain intensity increases post-acquisition.
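The headline figures above can be cross-checked with simple arithmetic. Note that the value per life-year below is backed out from the summary’s own numbers rather than taken from the paper:

```python
# Back-of-envelope check of the reported mortality-cost figures.
# The implied value per life-year is derived from the summary's numbers,
# not quoted directly from the paper.
lives_lost = 20_150           # Medicare lives lost over the sample period
life_years_lost = 160_000     # implied loss in life-years
mortality_cost = 21e9         # in 2016 dollars
medicare_payments = 9e9       # total Medicare payments to PE facilities

implied_value_per_life_year = mortality_cost / life_years_lost
implied_years_per_life = life_years_lost / lives_lost
cost_to_payment_ratio = mortality_cost / medicare_payments

print(f"implied value per life-year: ${implied_value_per_life_year:,.0f}")   # → $131,250
print(f"implied life-years per death: {implied_years_per_life:.1f}")         # → 7.9
print(f"mortality cost / Medicare payments: {cost_to_payment_ratio:.1f}x")   # → 2.3x
```

The implied value of roughly $131,000 per life-year is consistent with the summary’s reference to “a conventional value of a life-year from the literature,” and the 2.3x ratio matches its “about twice the total payments” statement.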
The authors acknowledge that although their results imply that PE ownership reduces the productivity of nursing homes, such ownership may have more positive effects in other sectors of healthcare with better-functioning markets. Further work is needed to determine how government programs can be redesigned to align the interests of PE-owned firms with those of taxpayers and consumers.
What are the private and social costs and benefits of electric vehicles (EVs)? Data limitations have hindered policymakers’ ability to answer those questions and guide transportation electrification. Most EV charging occurs at home, where it is difficult to distinguish from other end uses, meaning published estimates of residential EV load are either survey-based or extrapolated from a small, unrepresentative sample of households with dedicated EV meters.
These data are important because if EVs are driven as much as conventional cars, it speaks to their potential as a near-perfect substitute to vehicles burning fossil fuels. If, on the other hand, EVs are driven substantially less than conventional cars, it raises key questions about their replacement potential.
This research presents the first at-scale estimates of residential EV charging load in California, home to approximately half of the EVs in the United States. The authors employ a sample of roughly 10 percent of residential electricity meters in the largest utility territory, Pacific Gas & Electric, which they merge with address-level data on EV registration records from 2014-2017. The authors’ findings include:
- EV load in California is surprisingly low. Adopting an EV increases household electricity consumption by 0.12 kilowatt-hours (kWh) per hour, or 2.9 kWh per day. Given the fleet of EVs in their sample and correcting for the share of out-of-home charging, this translates to approximately 5,300 electric vehicle miles traveled (eVMT) per year.
- These estimates are roughly half as large as official EV driving estimates used in regulatory proceedings, likely reflecting selection bias in official estimates, which are extrapolated from a very small number of households.
- Importantly, these findings indicate that EVs are driven substantially less than internal combustion engine vehicles, suggesting that EVs may not be as easily substituted for gasoline vehicles as previously thought.
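The conversion from metered household load to annual miles can be sketched with back-of-envelope arithmetic. The vehicle efficiency and home-charging share below are illustrative assumptions of mine, not the authors’ exact parameters; under these values the calculation lands near the reported figure of roughly 5,300 eVMT:

```python
# Rough reproduction of the eVMT calculation. The efficiency and
# home-charging share are assumed values, not the authors' parameters.
kwh_per_hour = 0.12                  # estimated increase in household load
kwh_per_day = kwh_per_hour * 24      # ≈ 2.9 kWh/day of home charging
home_kwh_per_year = kwh_per_day * 365

miles_per_kwh = 3.5                  # assumed fleet-average EV efficiency
home_charging_share = 0.70           # assumed share of charging done at home

# Scale up home-charging miles to account for out-of-home charging.
evmt = home_kwh_per_year * miles_per_kwh / home_charging_share
print(f"home charging: {kwh_per_day:.1f} kWh/day")   # → 2.9 kWh/day
print(f"implied eVMT:  {evmt:,.0f} miles/year")      # → 5,256 miles/year
```

A typical gasoline car in the US is driven on the order of 11,000-12,000 miles per year, which is why the authors read roughly 5,300 eVMT as evidence that EVs are driven substantially less than conventional vehicles.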
This work is an important step in determining EV utilization rates, and the authors map out future research efforts that include, among other questions, issues relating to the marginal utility of EV transportation, such as limited charging stations; the degree to which EVs complement rather than replace conventional vehicles; and the impact of high electricity prices in California.
Much of the income redistribution generated by the US tax system occurs through large tax credits paid out in annual tax refunds, such as the Earned Income Tax Credit (EITC) and the Child Tax Credit (CTC). These credits are a substantial portion of income for many recipients, but complexity may lead to uncertainty about tax liability or refund status, even after other income-related uncertainty is resolved.
The authors employ novel survey data about tax filer beliefs to find the following:
- There is substantial tax-refund uncertainty among low-income filers, and among EITC recipients in particular.
- Despite considerable uncertainty, filers’ expectations are often correct, and they seem to update their beliefs from year to year in response to new information.
- Uncertainty may stem from more complex features of the tax code, such as the phase-in and phase-out regions for tax-based transfer programs or rules for married tax filers.
- Finally, refund uncertainty distorts individuals’ consumption-savings choices and is large enough to cause welfare losses among EITC filers on the order of 10 percent of the value of the EITC.
These are important insights for policymakers, but the authors acknowledge that more work is needed to better understand the underlying mechanisms that influence low-income tax filers. For example, a better understanding of why households fail to resolve uncertainty could inform the design of tax simplification policies, and could help predict behavioral responses to, and welfare consequences of, other tax reforms. Tax-related uncertainty may also affect other economic decisions, such as whether and how much to work.
Discrimination against Arab-Muslims in the United States, including violence and hate speech, has grown substantially over the past five years. But there is hope, and it lies with more contact between Arab-Muslims and non-Muslim Whites, not less. This new research studies the effect of decades-long exposure to local Arab-Muslim communities on non-Muslim Whites’ attitudes and behaviors, using a strategy based on immigration “pull” and “push” factors to isolate a causal effect rather than a simple correlation.
The authors combine three cross-county datasets, individualized donations data from two large charity organizations, and a recent large-scale custom survey to show that:
- Long-term exposure leads to more positive attitudes. Non-Muslim Whites who reside in US counties with (exogenously) larger populations of Arab ancestry are less explicitly and implicitly prejudiced against Arab-Muslims.
- These effects carry over into measures of political preferences: non-Muslim Whites in these same counties were more opposed to the 2017 “Muslim Ban” and less likely to vote for Donald Trump in 2016.
- Individuals in these counties are more likely to donate, and donate larger sums, to charitable causes in Arab countries.
- Finally, individuals in these counties are more likely to have an Arab-Muslim friend, neighbor, or workplace acquaintance, less likely to hold negative beliefs about Islam, and more knowledgeable about Arab-Muslims and Islam in general.
The authors then take their analysis one step further, showing that these effects are not unique to Arab-Muslims: decades-long exposure to any given foreign ancestry increases generosity toward that ancestral group. Their results provide compelling evidence on the importance of diversity: increasing contact between different groups in natural settings can pay long-run dividends by promoting tolerance, social cohesion, and pluralism.
Personal digital devices generate streams of detailed data about human behavior. Their temporal frequency, geographic precision, and novel content offer social scientists opportunities to investigate new dimensions of economic activity.
The authors find that smartphone data cover a significant fraction of the US population and are broadly representative of the general population in terms of residential characteristics and movement patterns. They produce a location exposure index (“LEX”) that describes county-to-county movements and a device exposure index (“DEX”) that quantifies the exposure of devices to each other within venues. These indices track the evolution of intercounty travel and social contact from their sudden collapse in spring 2020 through their gradual, heterogeneous rises over the following months.
Importantly for researchers, the authors are publishing these indices each weekday in a public repository available to noncommercial users for research purposes. Their aim is to reduce entry costs for those using smartphone movement data for pandemic-related research. By creating publicly available indices defined by documented sample-selection criteria, the authors hope to ease the comparison and interpretation of results across studies.
More broadly, this work provides guidance on potential benefits and relevant caveats when using smartphone movement data for economic research. Researchers in economics and other fields are turning to smartphone movement data to investigate a great variety of social science questions, and the authors focus on the distinctive advantages of the data frequency and immediacy.
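As one illustration of what such an index involves, here is a toy sketch of a LEX-style calculation. On my reading, the index measures, for devices pinging in one county today, the share that also pinged in another county over the prior two weeks; the published indices apply documented sample-selection criteria and data-handling steps that this sketch omits:

```python
from collections import defaultdict

def lex(pings_today, pings_past14):
    """Toy LEX-style index: for each county pair (i, j), the share of devices
    pinging in county i today that also pinged in county j at least once over
    the previous 14 days. A simplified reading of the published index."""
    by_county = defaultdict(set)
    for device, county in pings_today:
        by_county[county].add(device)
    index = {}
    for i, devices in by_county.items():
        visited = {c for d in devices for c in pings_past14.get(d, set())}
        for j in visited:
            overlap = sum(1 for d in devices if j in pings_past14.get(d, set()))
            index[(i, j)] = overlap / len(devices)
    return index

# Toy example: two devices seen in Cook County today; one also visited Lake
# County during the prior two weeks. All names here are purely illustrative.
today = [("dev1", "Cook"), ("dev2", "Cook")]
past = {"dev1": {"Cook", "Lake"}, "dev2": {"Cook"}}
result = lex(today, past)
print(result)
```

Here `result[("Cook", "Lake")]` is 0.5: half of the devices seen in Cook County today were also seen in Lake County over the window, which is the kind of intercounty-exposure share the LEX tracks.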
Animation: Four Wednesdays Before and Day of Insurrection
Notes: The figure shows the origins and trajectories of mobile devices that visited the Capitol CBG on the Wednesday of the storming of the Capitol and the four preceding Wednesdays. Orange dots indicate the lat-long coordinates of the devices’ origin CBGs; turquoise lines show their shortest-distance trajectories. All figures are produced with identical visualization settings (transparency of lines, etc.). Green boxes in the last figure mark the locations of chapters of the Proud Boys, a prominent far-right hate group, according to the Southern Poverty Law Center; they correspond to the lat-long coordinates of the centroid of the city each chapter is in.
The authors propose a method to better understand what triggers collective action, and they apply that methodology to the protest and subsequent violent attempt to undermine democratic norms and institutions that occurred on Jan. 6, 2021, in Washington, DC. The authors provide evidence that socio-political isolation, proximity to a prominent hate group, the Proud Boys, as well as the intensity of local misinformation posts on social media were robustly associated with participation in this event.
While existing work yields important insights about the conditions under which organized opposition emerges and what impact such opposition may have on the various institutions within which it is embedded, it tells us little about the individuals who participate in these behaviors. This is due, in large part, to data limitations: it is difficult to characterize those engaged in collective action in a rigorous and representative manner.
This paper addresses that data gap through two central contributions:
- This work introduces an approach for estimating community-level participation in mass protest that leverages historical information about cell-phone device movement—anonymized and aggregated—to identify devices that visit places where protests or other types of collective action have occurred. The authors also characterize communities where the devices originate.
- The authors then apply this approach to the Jan. 6, 2021, rally, protest, and subsequent violent riot on the grounds of the United States Capitol building, the aim of which was to oppose or halt the official certification of the outcome of the November 2020 US presidential election. The authors’ methodology helps them address a key question: What are the conditions under which individuals may engage in such anti-democratic acts? The authors find that partisanship in the form of Trump support, socio-political isolation, proximity to local chapters of the hate group Proud Boys, as well as local engagement with online misinformation through the social-media platform Parler, explain variation in protest involvement.
Of all the challenges that poverty presents, one that is gaining increased attention from researchers is that poverty itself can have psychological effects that lead to decreased earning potential. Living in poverty—with the stresses and traumas that such a state causes—can negatively impact a person’s ability to work productively and earn a high wage.
To test this connection between poverty and productivity, the authors conduct a field experiment with 408 small-scale manufacturing workers in Odisha, India. The workers are employed full-time for a two-week contract job—a typical form of employment. These workers make disposable plates for restaurants, a physical yet cognitively demanding task for which payment is tied to output. The authors’ experiment is set during the lean season when people are typically strapped for cash. For example, at baseline, 71% of workers in their sample have outstanding loans, and 86% report having financial worries. Workers appear to carry their mental burdens to work. On a typical work day, roughly one in two workers reports worrying about finances while at work.
The experiment randomly varies the timing of income receipt so that some workers are paid sooner with an amount roughly equal to one month’s earnings. This large cash infusion appears to immediately reduce financial constraints: within three days, early-payment workers are 40 percentage points (222%) more likely to repay their loans. Only the timing of payment changes; the piece rate and all other aspects of the job are unchanged, meaning that short-term financial concerns are reduced without affecting overall wealth or financial incentives to work. This enables the authors to measure an immediate effect of cash-on-hand on productivity.
The major findings are as follows:
- Alleviating financial constraints boosts worker productivity. The day after receiving a cash infusion, workers are 0.12 standard deviations (SDs) more productive relative to the control group.
- These gains persist throughout the workday and for the remaining days of the treatment period.
- The gains are concentrated among more financially strained workers, measured both by assets and liquidity. Early payment increases productivity for these poorer workers by 0.22 SDs.
- Early payment also improves poorer workers’ attentiveness on the task, as measured by three different markers of inefficient production processes.
For policymakers, this work suggests that programs that reduce financial volatility or vulnerability for poor workers could increase their productivity in addition to improving their welfare.
The nature of business lending in an economy changes over a financial cycle, including the amount and type of debt that a borrower can take, as well as the role of banks and other lenders involved. Not only does this affect borrowing by firms, it also affects the capital structure of intermediaries. While much research has examined various aspects of lending, there is relatively little theory explaining how easy financing conditions might accentuate certain aspects over others. In this paper, the authors offer a theory explaining why and how the nature of lending changes with the environment in which lending takes place.
The authors’ model describes the various factors that affect outcomes, including exogenous factors like broad economic and financial conditions, and endogenous factors like improvements in firm governance. To summarize their main findings: Starting from a low level, higher prospective corporate liquidity will initially reduce monitored borrowing from a bank in favor of arm’s length borrowing, and eventually reduce the need for internal corporate governance to support corporate borrowing, leading to covenant-lite loans. In parallel, higher prospective corporate liquidity will allow both corporations and banks to operate with higher leverage.
Beyond these insights into financial intermediation, the authors’ work sheds light on the role of liquidity in diminishing the consequences of moral hazard over repayment, and hence the quality of the corporation’s internal governance. For example, internal governance matters little if the firm can potentially be seized and sold for full repayment in a Chapter 11 bankruptcy, which happens in an environment with high levels of liquidity. Therefore, prospective liquidity encourages leverage at both the borrower and intermediary level, even while requiring less governance. Equivalently, because the intermediary performs fewer useful functions, high prospective liquidity encourages disintermediation.
Risky loans to highly leveraged borrowers, made by highly leveraged intermediaries, may therefore not be evidence of moral hazard or over-optimism, but may simply be a consequence of high prospective liquidity crowding out the monitoring role of financial intermediation. Such crowding out may have adverse consequences. As prospective liquidity fades and the demand for intermediation services expands again, the need for intermediary capital also increases. To the extent that intermediary capital is run down in periods when liquidity is expected to be plentiful, it may not be available in sufficient quantities when liquidity conditions turn and demand for capital ramps up. Prospective liquidity breeds a dependence on continued liquidity for debt enforcement as it crowds out other modes of enforcement, especially corporate governance. This will make debt returns more skewed; that is, it will enhance the possibility of very adverse outcomes alongside good ones.
Outsourcing is fundamentally changing the nature of the labor market. During the last two decades, firms have increasingly contracted out a vast array of labor services, such as security, food, and janitorial services. While outsourcing may be good for business, employees of contractor firms earn less than those working for traditional employers.
However, is that the whole story? To the extent that firms scale up more efficiently by contracting out certain activities, outsourcing generates aggregate output gains that may benefit all workers. Despite the prevalence of outsourcing in the labor market, there is little guidance for tracing out its determinants and effects. Why do firms outsource? How can low-paying contractor firms coexist with high-paying traditional employers? How does outsourcing change aggregate production and its split between workers and firms?
To answer these questions, the authors employ theory, a general equilibrium model, and four sources of French data between 1996 and 2007 that include tax records reflecting firm and worker outcomes, firm surveys, and cross-border trade transactions to provide direct empirical support of the theory. The authors argue that it is useful to conceptualize firms’ outsourcing decisions in the context of frictional labor markets, which give rise to firm wage premia. More productive firms are then more likely to outsource, which raises output at the firm level. Labor service providers endogenously locate at the bottom of the job ladder, implying that outsourced workers receive lower wages. Together, these observations characterize the tension that outsourcing creates between productivity enhancements and redistribution away from workers.
This is confirmed by the authors’ findings:
- A reduced-form instrumental variable strategy confirms that, as firms grow, they spend relatively more on outsourced labor, and outsourcing further improves growth. However, outsourced workers also experience large wage drops.
- At the aggregate level, output rose by 1%, as the structural model reveals that labor was effectively reallocated to the most productive firms in the economy. However, these productivity gains were unevenly distributed. Low-skill workers, who were particularly exposed to outsourcing, were increasingly employed at contractor firms that paid low wages.
- In addition, wages declined even at traditional employers because traditional employers faced weaker labor market competition for workers.
- Together, these results imply that the labor share declined by 3 percentage points, and aggregate labor income dropped by 2%.
What about those theoretical output gains that could benefit all workers? The authors find that outsourcing leads to some, though modest, positive productivity effects, and that these gains accrue to firm owners while worsening workers’ labor market prospects.
Bottom line: outsourcing benefits firm owners and worsens workers’ prospects in the aggregate.
COVID-19 and policy responses to the pandemic have generated massive shifts in demand across businesses and industries. The authors draw on firm-level data in the Atlanta Fed/Chicago Booth/Stanford Survey of Business Uncertainty (SBU)1 to quantify the pace of reallocation across firms before and after the pandemic struck, to investigate what firm-level forecasts in December 2020 say about expected future sales, and to examine how industry-level employment trends relate to the capacity of employees to work from home.
The authors report three pieces of evidence on the persistent reallocative effects of the COVID-19 shock:
- First, rates of “excess” job and sales reallocation over 24-month periods, which adjust for net changes in aggregate activity, have risen sharply since the pandemic struck, especially for sales.
- Second, as of December 2020, firm-level forecasts of sales revenue growth over the next year imply a continuation of recent changes, not a reversal. Firms hit most negatively during the pandemic expect (on average) to continue shrinking in 2021, and firms hit positively expect to continue growing.
- Third, COVID-19 shifted relative employment growth trends in favor of industries with a high capacity of employees to work from home, and against industries with a low capacity.
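The “excess” reallocation rates in the first bullet are typically constructed in the Davis-Haltiwanger tradition: gross creation plus gross destruction of jobs (or sales), minus the absolute net change, so that aggregate expansion or contraction is netted out. A minimal sketch with illustrative numbers (the paper’s exact construction may differ in detail):

```python
# Hedged sketch of an "excess" reallocation rate: gross reallocation minus the
# absolute net change, expressed relative to a base level of activity.

def excess_reallocation_rate(changes, base):
    """changes: firm-level employment (or sales) changes over the window;
    base: total employment (or sales) used to express the rate."""
    creation = sum(c for c in changes if c > 0)      # gross gains at growing firms
    destruction = -sum(c for c in changes if c < 0)  # gross losses at shrinking firms
    gross = creation + destruction
    net = abs(creation - destruction)                # aggregate net change
    return (gross - net) / base                      # reallocation beyond the net shift

# Example: two firms grow by 50 and 30, one shrinks by 40; base activity 1000.
print(excess_reallocation_rate([50, 30, -40], 1000))  # (120 - 40) / 1000 = 0.08
```

The subtraction of the net change is what makes the measure informative about reshuffling across firms rather than the overall boom or bust.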
1 The SBU is a monthly panel survey of U.S. business executives that collects data on own-firm past, current, and expected future sales and employment. The Atlanta Fed recruits high-level executives to join the panel and sends them the survey via email, obtaining about 450 responses per month. The survey yields data on realized firm-level employment and sales growth rates over the preceding twelve months and subjective forecast distributions over own-firm growth rates at a one-year look-ahead horizon.
COVID-19 brought the most severe shock to hit the US economy since at least the Great Depression. Concerns over the direct impact of the virus and the associated public policy response ushered in an era of enormous uncertainty, and the outlook for growth remains relatively downbeat through 2021.
Somewhat paradoxically, 2020 was a banner year for equity markets. The early stages of the COVID-19 pandemic drove a spectacular rout in stock markets, but equity prices recovered sharply after March. By late December, the S&P 500 index stood about 10% higher than its peak pre-pandemic value in February 2020.
The authors employ the national firm-level Survey of Business Uncertainty (SBU)1 to make three observations about this “Main Street/Wall Street” dichotomy:
- First, equity market traders and executives of nonfinancial firms share similar assessments about uncertainty at one-year look-ahead horizons. That is, the one-year VIX (a real-time indicator of market volatility) has moved similarly to the authors’ survey-based measure of (average) firm-level subjective uncertainty at one-year forecast horizons.
- Second, looking within the distribution of beliefs in the SBU reveals that firm-level expectations shifted toward upside risk in the latter part of 2020. In this sense, decision makers in nonfinancial businesses share some of the optimism that seems manifest in equity markets.
- Third, and despite the positive shift in tail risks, overall uncertainty continues to substantially dampen capital spending plans, pointing to a source of weak growth in potential GDP.
The trajectory of business investment spending is of great importance for the future path of productivity and GDP. The authors believe that firm-level surveys like the SBU can, and will, play a key role in monitoring these important developments and, importantly, in assessing the post-COVID policy landscape.
1 See the note above for a description of the SBU.
Countries with large natural-resource endowments are often less developed and more poorly governed than countries with fewer resources, a phenomenon economists and policymakers call the “resource curse”. Corruption plays a central role in the resource curse because the need to secure access rights to deposits makes resource extraction (i.e., precious metal mining and oil drilling) inherently prone to corruption. While resource extraction might have a positive direct impact on economic activity, the corruption that often accompanies it can divert resources from local development projects, decrease the efficiency of resource allocation, and reinforce extractive political regimes, thereby attenuating the positive growth effects of extractive activities.
However, does this mean that all corruption is bad? Recent research has shown that anti-corruption regulations have deterred investment that otherwise would have occurred. In some countries with inefficient bureaucracies, corruption can provide a gateway to engage in business. Ultimately, the net economic impact of foreign corruption regulation also depends on how much the regulation decreases corruption, what regulated firms do instead of paying bribes, and whether the marginal investments forgone because of the regulation would have had a positive impact on development.
To address these questions, the authors examined changes in economic activity, as measured by nighttime light emissions in African communities near large resource extraction facilities, following an increase in enforcement of the US Foreign Corrupt Practices Act (FCPA) in the mid-2000s. Compared to other measures of economic development (e.g., GDP), luminosity reflects the level of economic activity more broadly, and thus is likely more indicative of the overall well-being of people throughout the community.
The authors find that, after 2004, geographic areas with an extraction facility whose owner is subject to the FCPA gradually exhibit higher levels of economic activity relative to areas surrounding extraction sites that are not subject to the regulation. Local perceptions of corruption also significantly decline. The authors find that the observed increase in development and reduction in perceived corruption are driven (at least in part) by a change in how firms in and around the extractive sector behave.
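The comparison just described has a difference-in-differences flavor: luminosity growth around FCPA-covered sites after the mid-2000s enforcement increase, relative to luminosity growth around uncovered sites. A minimal sketch with invented numbers (the paper’s actual specification is richer, e.g., gradual effects and controls):

```python
# Hedged difference-in-differences sketch: average log-luminosity before and
# after 2004 for covered vs. uncovered areas. All values are illustrative.
import statistics

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Each argument: list of area-level log-luminosity values."""
    treated_change = statistics.mean(treated_post) - statistics.mean(treated_pre)
    control_change = statistics.mean(control_post) - statistics.mean(control_pre)
    return treated_change - control_change  # differential post-2004 growth

treated_pre  = [1.0, 1.2, 0.9]   # FCPA-covered areas, before 2004
treated_post = [1.4, 1.6, 1.3]   # FCPA-covered areas, after 2004
control_pre  = [1.1, 1.0, 1.2]   # uncovered areas, before 2004
control_post = [1.3, 1.2, 1.4]   # uncovered areas, after 2004
print(round(did_estimate(treated_pre, treated_post, control_pre, control_post), 3))
```

The control-group change nets out common shocks (e.g., commodity-price booms) that affect all extraction areas, isolating the enforcement effect under a parallel-trends assumption.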
For policymakers, this work suggests that foreign corruption regulation can be an effective instrument for changing corporate behavior and that, despite any increase in the costs of operating in high-corruption-risk countries, anti-corruption regulation originating in developed countries can have a positive impact on growth. This is important because developing countries may not themselves have the institutional strength or political will to address misconduct by multinational corporations.
Algorithms guide an increasingly large number of high-stakes decisions, including criminal risk assessment, resume screening, and medical testing. While such data-based decision-making may appear unbiased, there is increasing concern that it can entrench or worsen discrimination against legally protected groups. With algorithmic recommendations for pretrial release decisions, for example, a risk assessment tool may be viewed as racially discriminatory if it recommends white defendants be released before trial at a higher rate than Black defendants with equal risk of pretrial criminal misconduct.
How is it that discrimination can occur through logical, unfeeling algorithms? The answer lies in the data that feed the algorithms. Continuing with the pretrial release example, misconduct potential is only observed among the defendants whom a judge chooses to release before trial. Such selection can introduce bias in algorithmic predictions and also complicates the measurement of algorithmic discrimination, since unobserved qualification cannot be conditioned on to compare the treatment of white and Black defendants.
This paper develops new tools to overcome this selection challenge and measure algorithmic discrimination in New York City (NYC), home to one of the largest pretrial systems in the country. The method builds on techniques the authors previously developed to measure racial discrimination in actual bail-judge decisions, and it leverages randomness in the assignment of judges to white and Black defendants. Applying their methods, the authors find that a sophisticated machine learning algorithm (which does not train directly on defendant race or ethnicity) recommends the release of white defendants at a significantly higher rate than Black defendants with identical pretrial misconduct potential.
Specifically, when calibrated to the average NYC release rate of 73 percent, the algorithm recommends an 8-percentage point (11 percent) higher release rate for white defendants than equally qualified Black defendants. This unwarranted disparity explains 77 percent of the observed racial disparity in release recommendations, grows as the algorithm becomes more lenient, and is driven by discrimination among individuals who would engage in pretrial misconduct if released.
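The calibration step mentioned above can be made concrete: release the lowest-risk defendants until the overall release rate hits a target, then compare release rates across groups among defendants with the same underlying misconduct potential. The toy sketch below uses synthetic data and a deliberately biased score; it only illustrates the mechanics, not the paper’s identification strategy, which additionally leverages quasi-random judge assignment:

```python
# Synthetic illustration: calibrate a release rule to a 73% overall release
# rate, then measure group disparities among equally "qualified" defendants.
import random

random.seed(0)
n = 10_000
people = []
for _ in range(n):
    group = random.randint(0, 1)           # two defendant groups
    potential = random.random() < 0.3      # true misconduct potential (unobserved in practice)
    # A (deliberately biased) risk score: noisy in potential, shifted for group 1.
    score = potential + 0.3 * group + random.gauss(0, 0.5)
    people.append((group, potential, score))

target_rate = 0.73
cutoff = sorted(p[2] for p in people)[int(target_rate * n)]  # release below cutoff
released = [(g, q, s <= cutoff) for g, q, s in people]
overall = sum(r for _, _, r in released) / n
print(f"overall release rate: {overall:.2f}")

# Disparity among defendants with the same misconduct potential:
for q in (False, True):
    rates = []
    for g in (0, 1):
        sub = [r for gg, qq, r in released if gg == g and qq == q]
        rates.append(sum(sub) / len(sub))
    print(f"potential={q}: group-0 release rate {rates[0]:.2f}, group-1 {rates[1]:.2f}")
```

The point of the exercise: even holding true potential fixed, a group-correlated component in the score produces different release rates across groups, which is the kind of unwarranted disparity the paper quantifies.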
Many western economies have seen significant declines in the labor share of income, which has led to calls for worker representation on corporate boards to ensure that the interests and views of workers are represented. Recent polls suggest that a majority of American voters support this idea, and leading politicians in the US and the UK are advocating a system of shared governance. However, there is little scientific evidence on whether such shared-governance systems have their intended effect.
To address this question, the authors constructed a unique matched panel dataset of all workers, firms, and corporate boards in Norway for the period 2004-2014, allowing them to measure the worker representation status of firms and to follow workers over time, even when workers switched firms. Importantly, these rich data combined with institutional features allowed the authors to use a variety of research designs, including:
- comparison of different groups of workers before and after a switch between firms with different representation status,
- analysis of changes in worker compensation in response to idiosyncratic shocks to firm performance,
- an event study analyzing the effect of worker representation,
- and the effects of a law regulating the rights to worker representation as a discontinuous function of firm size.
The authors find that a worker is paid more and faces less earnings risk if she gets a job in a firm with worker representation on the corporate board. However, these gains in wages and declines in earnings risk are not caused by worker representation; rather, the wage premium and reduced earnings risk reflect that firms with worker representation are likely larger and unionized, and that larger and unionized firms tend to both pay a premium and better insure workers against fluctuations in firm performance.
Bottom line: Conditional on the firm’s size and unionization rate, worker representation has little, if any, effect.
This research offers important insight for policymakers. Taken together, these findings suggest that while workers may indeed benefit from employment in firms with worker representation, they would not benefit from legislation mandating worker representation on corporate boards.
This paper offers unique insights into the effect of trade on those who own, work for, or sell to the supply chains of global firms that export and import—and those who do not. The authors address questions relating to the impact of such differences in trade exposure on earnings inequality. For example, if a country’s exports and imports were suddenly to drop to zero because of some extreme policy or natural disaster, would its distribution of earnings become more or less equal? In the absence of trade, would the consequences of domestic shocks for inequality be magnified or dampened?
Informing the authors’ analysis is a unique administrative dataset from Ecuador that merges firm-to-firm transaction data, employer-employee matched data, owner-firm matched data, and firm-level customs transaction records. Together with economic theory, this information allowed the authors to measure the export and import exposures of individuals—whether workers or capital owners—across the income distribution and, in turn, to infer the overall incidence of trade on earnings inequality.
The authors’ main empirical finding is that international trade substantially raises earnings inequality in Ecuador, especially in the upper half of its income distribution. In the absence of trade, top-income individuals would be relatively poorer. However, their empirical analysis also implies that the drop in inequality that took place in Ecuador over the last decade would have been less pronounced if its economy had been subject to the same domestic shocks, but unable to trade with the rest of the world.
Further, the authors find that the import channel is the dominant force linking trade to inequality in Ecuador: gains from trade for individuals at the 90th percentile of the income distribution are about 11% larger than for the median individual, and up to 27% larger for those at the top income percentile. The authors stress that some of these conclusions may not carry over to other contexts. The fact that export exposure is more pronounced in the bottom half of Ecuador’s income distribution, for instance, is more likely to hold in developing countries that, like Ecuador, specialize in low-skill-intensive goods than in developed countries that do not.
Economists have long strived to develop measures of business expectations, but those efforts have provided few direct measures of business-level expectations for real variables beyond qualitative indicators and point forecasts—at least until now.
This paper describes the first results of an ambitious survey of business expectations conducted as part of the Census Bureau’s Management and Organizational Practices Survey (MOPS), the first large-scale survey of management practices in the United States, covering more than 30,000 plants across more than 10,000 firms. The survey was conducted in 2010 and 2015; its size and high response rate, its coverage of units within a firm, its links to other Census data, and its comprehensive coverage of manufacturing industries and regions make MOPS a uniquely powerful source of data for analyzing business expectations.
As part of the 2015 MOPS, the authors asked eight questions about plant-level expectations of own current-year and future outcomes for shipments, employment, investment expenditures and expenditures on materials. The survey questions elicited point estimates for current-year (2016) outcomes and five-point probability distributions over next-year (2017) outcomes, yielding a much richer and more detailed dataset on business-level expectations than previous work, and for a much larger sample.
Importantly, 85% of surveyed firms provided logically sensible responses to the authors’ five-point distribution questions, suggesting that most managers can form and express detailed subjective probability distributions. The remaining 15% were plants with lower productivity and wages, fewer workers, lower shares of managers with bachelor’s degrees, and lower management-practice scores, and they were less likely to belong to multinational firms. First and second moments of plant-level subjective probability distributions covary strongly with first and second moments, respectively, of historical outcomes, suggesting that the subjective expectations data are well-founded. Aggregating over plants under common ownership, firm-level subjective uncertainty correlates positively with realized stock-return volatility, option-implied volatility, and analyst disagreement about future earnings per share (EPS) for both the parent firm and the median publicly listed firm in the firm’s industry.
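The first and second moments referred to above are straightforward to compute from a five-point elicitation: each respondent supplies five growth-rate scenarios and attaches probabilities summing to one. A short sketch with illustrative (not MOPS) values:

```python
# Subjective mean and standard deviation from a five-point probability
# distribution of the kind elicited in the survey. Numbers are invented.

def subjective_moments(support, probs):
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to one"
    mean = sum(s * p for s, p in zip(support, probs))
    var = sum(p * (s - mean) ** 2 for s, p in zip(support, probs))
    return mean, var ** 0.5  # subjective mean and standard deviation

# Example: five sales-growth scenarios (in %) with attached probabilities.
support = [-10.0, -2.0, 3.0, 8.0, 15.0]
probs = [0.05, 0.15, 0.40, 0.30, 0.10]
mean, sd = subjective_moments(support, probs)
print(f"subjective mean {mean:.2f}%, subjective sd {sd:.2f}%")
```

The subjective standard deviation is the plant-level uncertainty measure that the authors relate to historical volatility and, when aggregated, to market-based volatility measures.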
Cross-checking MOPS data with other manufacturing datasets allowed the researchers to match the MOPS forecasts to realized outcomes. Using those realized values, the authors find that forecasts are highly predictive of outcomes. In fact, these forecasts are substantially more predictive than historical growth rates. They also find that forecast errors rise in magnitude with ex ante subjective uncertainty. Forecast errors correlate negatively with labor productivity. Forecast accuracy improves with greater use of predictive computing and structured management practices at the plant, and with a more decentralized decision-making process across plants in the same firm.
Using newly collected data on arguably the most horrendous episode of discrimination in human history, the treatment of Jews in Nazi Germany, the authors examined how the removal of senior managers of Jewish origin, caused by the rise of antisemitism in Nazi Germany, affected large German firms. In doing so, they provide insights into the question of how individual managers can affect firm performance, an issue that has long vexed researchers.
The authors collected the names and characteristics of individuals holding around 30,000 senior management positions in 655 German firms listed on the Berlin Stock Exchange, as well as data on stock prices, dividends, and returns on assets. While the fraction of Jews among the German population in the early 1930s was only 0.8%, the authors’ data show that 15.8% of senior management positions in listed firms were held by individuals of Jewish origin in 1932 (whom the authors term “Jewish managers”). Jewish managers had exceptional characteristics compared to other managers in 1932. For example, Jewish managers were more experienced, educated, and connected (by holding positions in multiple firms). After the Nazis gained power, the share of Jewish managers plunged sharply in 1933 (by about a third) and dropped to practically zero by 1938.
This research revealed four main results:
- The expulsion of Jewish managers changed the characteristics of managers at firms that had employed a higher fraction of Jewish managers in 1932. The number of managers with firm-specific tenure, general managerial experience, university education, and connections to other firms fell significantly, relative to firms that had employed fewer Jewish managers in 1932. The effects persisted until at least 1938, the end of the authors’ sample period on manager characteristics.
- The loss of Jewish managers reduced firms’ stock prices. After the Nazis came to power, the stock price of the average firm that had employed Jewish managers in 1932 (where 22% of managers had been of Jewish origin) declined by 10.3 log points, relative to a firm without Jewish managers in 1932. These declines persisted until the end of the stock price sample period in 1943, ten years after the Nazis had gained power.
- Losing Jewish managers lowered the aggregate market valuation of firms listed in Berlin by 1.8% of German GNP. This calculation indicates that highly qualified managers are of first-order importance to aggregate outcomes and that discriminatory dismissals can cause serious economic losses.
- After 1933, dividends fell by approximately 7.5% for the average firm with Jewish managers in 1932 (which lost 22% of its managers). Also, the average firm that had employed Jewish managers in 1932 experienced a decline in its return on assets by 4.1 percentage points. These results indicate that the loss of Jewish managers not only reduced market valuations, but also led to real losses in firm efficiency and profitability.
These findings offer lessons for today. The US travel ban on citizens of seven Muslim-majority countries, for example, or the persecution of Turkish businessmen who follow the cleric Fethullah Gülen, could lead to a loss of talent. Further, the authors note a post-Brexit survey in 2017 revealing that 12% of continental Europeans who make between £100,001 ($130,000) and £200,000 a year planned to leave the United Kingdom. Bottom line: The authors warn that such an exodus, and similar outflows of talented managers, could have meaningful economic consequences.
Cybersecurity risk is at the top of many firms’ worry lists, and rightly so. Despite substantial investments in information security systems, firms remain highly exposed to cybersecurity risk, with possible losses amounting to $6 trillion annually by 2021. One open question for researchers has been whether a firm’s exposure to cybersecurity risk is priced into financial markets.
To address this question, the authors developed a firm-level measure of cybersecurity risk for all listed US firms, which allowed them to examine whether cybersecurity risk is priced in the cross section of stock returns. They first extracted the discussion of cybersecurity risk from firms’ 10-K reports for 2007-2018, which contain information about each firm’s most significant risk factors. Using firms that were subject to cyberattacks as a training sample, they then compared the wording and language in the relevant risk-disclosure section of the attacked firms’ annual reports with that of all other firms.
Next, they identified a sample of firms that were subject to a major cyberattack (involving lost personal information by hacking or malware-electronic entry by an outside party) in any given year, arguing that those firms have high cybersecurity risk, and which then served as the authors’ training sample. Finally, they estimated the similarity of each firm’s cybersecurity-risk disclosure with past cybersecurity-risk disclosures of firms in the training sample (i.e., from the one-year period prior to the firm’s filing date). The higher the measured similarity in cybersecurity risk disclosure for their sample firms and firms in the training sample, the greater the exposure to cybersecurity risk.
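The similarity scoring described above can be sketched with a simple bag-of-words cosine similarity: represent each disclosure as a word-count vector and score a firm by its closeness to prior disclosures of attacked firms. The paper’s exact text-processing choices are not specified here, and all snippets below are invented:

```python
# Hedged sketch: score a firm's cybersecurity-risk exposure by the cosine
# similarity of its disclosure text to attacked firms' past disclosures.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def exposure_score(disclosure: str, training_disclosures: list) -> float:
    vec = Counter(disclosure.lower().split())
    sims = [cosine(vec, Counter(t.lower().split())) for t in training_disclosures]
    return max(sims) if sims else 0.0  # similarity to the closest attacked firm

training = ["unauthorized access to customer data via malware",
            "hacking incident exposed personal information"]
firm_a = "risk of unauthorized access to customer data"
firm_b = "commodity price fluctuations affect margins"
print(exposure_score(firm_a, training) > exposure_score(firm_b, training))  # prints True
```

A disclosure that echoes the language of previously attacked firms scores high, which is the sense in which measured similarity proxies for exposure to cybersecurity risk.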
The authors then subject these measures to a number of validation tests and, in the end, find that firms with high exposure to cybersecurity risk outperform other firms by up to 8.3% per year. Among other findings, they offer one important caveat: a cybersecurity-mimicking portfolio performs poorly in times of heightened cybersecurity risk and investor concern about data breaches. These results support the prediction of asset-pricing theory that investors require compensation for bearing cybersecurity risk.
Many central banks and policymaking institutions around the world are openly debating the introduction of a central bank digital currency, or CBDC, a potential watershed for the monetary and financial systems of advanced economies.
Since at least the classic formulation of Bagehot in 1873, central banks have viewed their primary tasks as maintaining stable prices and ensuring financial stability through their role as lenders of last resort. With a CBDC, two additional and significant aspects come into play. First, a CBDC may become an attractive alternative to traditional demand deposits in private banks for all households and firms. Second, and as a result, the central bank may be transformed into a financial intermediary that needs to confront classic issues of banking, including maturity transformation and the exposure to a demand for liquidity induced by “spending” shocks (runs) of its private customers.
The authors examine the interplay of these new and traditional roles to evaluate the advantages and drawbacks of introducing a CBDC relative to the subsequent reorganization of the banking system and its consequences for monetary policy, allocations, and welfare. Building on, and then departing from, existing models which reveal that the optimal amount of risk-sharing among banks requires making them prone to bank runs, the authors ask whether central banks can avoid this problem.
In the authors’ model, briefly summarized here, classic bank runs may still occur because of a rationing problem that arises when illiquid real assets must be liquidated at a given price level. But since a central bank controls the price level and contracts are nominal, it can avoid rationing if it prefers. By issuing more currency, the monetary authority can always deliver on its obligations, but at the risk of inflation. Thus, the model illustrates how runs on a central bank can manifest in two ways: either as a classic run, caused by the rationing of real assets, or as a run on the price level.
Now, imagine that a central bank has three goals: efficiency, financial stability (i.e., absence of runs), and price stability. The authors demonstrate an impossibility result that they term the CBDC trilemma: Of its three goals, the central bank can achieve at most two (see accompanying figure). For example: the authors demonstrate that the central bank can always implement the socially optimal allocation in dominant strategies and deter central bank runs at the price of threatening inflation off-equilibrium. If price-stability objectives for the central bank imply that the central bank would not follow through with that threat, then allocations either have to be suboptimal or prone to runs.
Bottom line: A central bank that wishes to simultaneously achieve a socially efficient solution, price stability, and financial stability (i.e., absence of runs) will see its desires frustrated. This work reveals that a central bank can only realize two of these three goals at a time.
US student loan debt reached $1.6 trillion in 2020, with calls for debt relief growing in strength as that number rises. However, not all debt forgiveness plans are created equal, and the impacts vary depending on the relative income of borrowers. For example, debt forgiveness can be universal, capped at a certain amount, or targeted to specific borrowers. Importantly, while much recent media and policy attention has focused on universal forgiveness, many may not realize that some student borrowers are already granted relief through an Income-Driven Repayment (IDR) plan, which links payments to income and which forgives remaining debt after, say, 20 or 25 years, depending on the plan. This means that low-income earners can receive substantial loan forgiveness over time.
To analyze policy options, the authors used the 2019 Survey of Consumer Finances (SCF) to estimate the present value of each student loan and to forecast future payments and the evolution of a loan’s balance until it reaches zero or is forgiven. Regarding universal plans (forgiving all loans) and capped plans (forgiving loans up to a certain amount), the authors find that the benefits of these policies disproportionately accrue to high-income households. For example, individuals in the bottom half of the earnings distribution would receive only 25% of the dollars forgiven, while households in the top 30% of the earnings distribution would receive almost half of all dollars forgiven.
Next, the authors examined who would benefit from a more generous IDR plan that raised the income threshold above which borrowers must pay a portion of their income and that accelerated loan forgiveness. In contrast to universal forgiveness, expanding IDR leads to substantial forgiveness for the middle of the earnings distribution. Under a policy enrolling all borrowers who would benefit from IDR, individuals in the bottom half of the earnings distribution would receive two-thirds of dollars forgiven, while borrowers in the top 30% of the earnings distribution would receive one-fifth. Raising the repayment threshold and accelerating loan forgiveness both lead to a large increase in forgiveness; however, the benefits of accelerated forgiveness accrue to the top of the earnings distribution, while a higher repayment threshold delivers large benefits to middle-income borrowers.
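The IDR mechanics behind these results can be sketched in a few lines: annual payments are a fixed share of income above a threshold, the balance accrues interest, and any remainder is forgiven at a fixed horizon. All parameter values below are illustrative, not the authors’ calibration:

```python
# Stylized income-driven repayment (IDR): pay a share of income above a
# threshold each year; whatever balance remains at the horizon is forgiven.

def idr_forgiveness(balance, income, rate=0.05, pay_share=0.10,
                    threshold=20_000.0, horizon=20):
    """Return (total paid, amount forgiven at the horizon)."""
    paid = 0.0
    for _ in range(horizon):
        balance *= 1 + rate                          # interest accrues
        payment = min(balance, pay_share * max(0.0, income - threshold))
        balance -= payment
        paid += payment
        if balance <= 0:
            return paid, 0.0
    return paid, balance                             # leftover balance is forgiven

# A low earner repays little and has much of the loan forgiven;
# a high earner repays in full before the horizon.
print(idr_forgiveness(50_000, 30_000))
print(idr_forgiveness(50_000, 120_000))
```

This is why forgiveness under IDR concentrates below the top of the earnings distribution: high earners pay the loan off before the forgiveness date, so extending generosity at the threshold mostly helps middle-income borrowers.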
In sum, the authors find that universal and capped forgiveness policies are highly regressive, with the vast majority of benefits accruing to high-income individuals. On the other hand, IDR plans that link repayment to earnings lead to forgiveness for borrowers in the middle of the income distribution.
Since 1960, at least 115 foreign military occupations have ended, with a substantial percentage of these interventions involving a security transition from withdrawing troops to local allies, including a redeployment of weaponry. Despite these many transitions, little is known about the conflict dynamics of countries experiencing a foreign-to-local security transition.
This research offers new insights into these issues by conducting a microlevel study of the impact of the large-scale security transition that marked the end of Operation Enduring Freedom in Afghanistan—the long-running military campaign of the North Atlantic Treaty Organization (NATO). Planning for this transition to Afghan forces began as early as 2010, and it was formally announced in 2011. The transition was staggered and coordinated around administrative districts. Over three years, and across five transition tranches, Afghanistan’s districts were transferred to Afghan control.
The authors employed a unique dataset, including geotagged and time-stamped event data that documents dozens of different types of insurgent and security force operations, representing the most complete catalog of conflict activity during Operation Enduring Freedom currently available. They combined these observational data with microlevel survey data that included questions measuring perceptions of security conditions, the extent of local security provision, and perceptions of territorial control.
The authors find a significant and sharp decline in insurgent violence during the initial phase, the security transfer to Afghan forces, followed by a considerable surge in violence during the second phase, the actual physical withdrawal of foreign troops. Why does this happen? The authors argue that this pattern is consistent with a signaling model in which insurgents strategically reduce violence to facilitate the foreign military withdrawal; after the troops are gone, the insurgents capitalize on their absence.
These findings clarify the destabilizing consequences of withdrawal in one of the costliest conflicts in modern history and yield potentially actionable insights for designing future security transitions.
One of the looming pandemic-related questions for the US economy is to what degree workers will remain working from home when the pandemic ends. By some estimates, roughly half of all work occurred at home, either in whole or in part, through October 2020. Crucial to this question is not only whether workers can work from home, but whether they should. Put another way, does productivity suffer when work occurs at home?
The authors surveyed 15,000 working-age Americans between May and October 2020 in waves, and the authors’ analysis of those responses reveals the following five reasons why working from home will likely stick:
- Reduced stigma. Most respondents report perceptions about working from home have improved among people they know.
- Employer learning. The pandemic forced workers and firms to experiment with working from home en masse, enabling them to learn how well it actually works.
- New investment. The average worker invested over 13 hours and about $660 in equipment and infrastructure to facilitate working from home, amounting to 1.2% of GDP. In addition, firms made sizable investments in back-end information technologies and equipment to support working from home.
- Lingering fear. About 70% of respondents expressed a reluctance to return to some pre-pandemic activities, even when a vaccine is widely available, for example, riding subways and crowded elevators, or dining indoors at restaurants.
- New technologies. The rate of innovation around technologies that facilitate working from home has likely accelerated.
Network effects are likely to amplify the impact of these five mechanisms. For example, coordination among several firms will facilitate doing business while their employees are working from home. When several firms are operating partially from home, it lowers the cost for other firms and workers to do the same, creating a positive feedback loop.
For dense cities like New York and San Francisco, a pronounced shift to working from home will likely have a negative effect. The authors estimate that worker expenditures on meals, entertainment, and shopping in central business districts will fall by the equivalent of 5% to 10% of taxable sales.
Finally, many workers reported higher productivity while working from home during the pandemic than previously. Taking the survey responses at face value, accounting for employer plans about who gets to work from home, and aggregating, the authors estimate that worker productivity will be 2.4% higher post-pandemic due to working from home.
Are large banks good? On the one hand, size implies efficiencies of scale and an improvement in the delivery of financial services, which is good for the economy. On the other hand, size may encourage risky behavior and increase systemic risk if a big bank behaves badly and fails.
These are empirical questions, and Huber analyzes a rare period in postwar Germany when banking reforms determined when certain state-level banks were allowed to consolidate into national banks. Under these reforms, increases in bank size were exogenous to the performance of banks and their borrowers, which allowed Huber to estimate how changes in bank size causally affected firms in the real economy.
Huber digitized new microdata on German firms and their relationship banks to examine how the bank consolidations affected the growth of banks and their borrowers. His findings were clear: there was no evidence that increases in bank size raised the growth of borrowers. Firms and municipalities with higher exposure to the consolidating banks did not grow faster after their banks consolidated. Small, young, and low-collateral borrowers of the banks actually experienced lower employment growth after the consolidations. Further, the consolidating banks themselves did not increase lending, profits, or cost efficiency, relative to comparable other banks. The results show that increases in bank size do not always generate improvements in the performance of banks and their borrowers and might even harm some firms.
For policymakers, the impact of bigger banks remains a complex question that depends not only on whether a large bank operates efficiently, but also on the net impact of other mechanisms, including the benefits and costs for borrowing firms. Huber's analysis of postwar Germany highlights that the beneficial mechanisms are not always powerful enough to outweigh the harmful ones.
New private firms in China benefit heavily from investor relationships with state-owned firms or private owners that have equity ties to state owners. To document the importance of “connected” investors, the authors employed administrative registration data on the universe of Chinese firms from 2000 to 2019. These data provide information on the owner of every Chinese firm, which the authors used to identify firms with connected investors defined as state-owned firms, or private owners with equity ties to state-owned firms.
This ownership information reveals two key facts. First, there is a clear hierarchy of private owners in terms of the closeness of their equity links with state owners. In 2019, state owners had equity stakes in the firms of about 100 thousand private owners. These private owners are the largest in China and also hold equity in the companies of other, typically smaller, private owners. In turn, these private owners also invest in other, even smaller, private owners, and so on. At the very bottom of the hierarchy are owners that are up to forty steps away from the state owners at the top of the hierarchy and that do not invest in other owners. The very smallest private owners thus do not have any equity ties, direct or indirect, with state owners.
Second, the hierarchy of private owners with connected investors is a relatively recent phenomenon. In 2000, private owners with connected investors accounted for only about 16% of registered capital. By 2019, private owners with connected investors owned about 35% of all registered capital in China. This 19.5-percentage-point increase in the share of connected private owners from 2000 to 2019 accounts for a significant part of the increase in the share of all private owners over this period.
The growth of this hierarchy of connected owners is driven, in a proximate sense, by two related trends, broadly described here and in greater detail in the authors’ paper. First, in 2000, only 12% of state owners had joint ventures with private owners. By 2019, about a quarter of all state owners had such joint ventures. The result is that the number of private owners with joint ventures with state owners increased from about 20 thousand in 2000 to more than 100 thousand by 2019.
Second, private owners associated with the state now also undertake more investments with other private owners. For example, the 20 thousand private owners with joint ventures with state owners in 2000 themselves had joint ventures with fewer than 1.5 other private owners each in that year. In 2019, the 100 thousand private owners directly connected with state owners were themselves the "connected investor" for 3.5 other private owners on average. As a result, the number of private owners invested in by the directly connected private owners (i.e., two steps away from the state) increased from 23 thousand in 2000 to more than 300 thousand by 2019.
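The proximate arithmetic behind this expansion can be sketched from the counts above. A minimal back-of-the-envelope check (the 1.15 per-owner average for 2000 is my inference from the cited 23-thousand and 20-thousand figures, not a number the authors report):

```python
# Back-of-the-envelope check on the growth in owners two steps removed
# from the state, using the counts cited above (all figures in thousands).
directly_connected_2000 = 20    # owners with state joint ventures, 2000
avg_downstream_2000 = 1.15      # downstream owners each (inferred: 23/20)
directly_connected_2019 = 100   # owners with state joint ventures, 2019
avg_downstream_2019 = 3.5       # downstream owners each, per the summary

two_steps_2000 = round(directly_connected_2000 * avg_downstream_2000)
two_steps_2019 = round(directly_connected_2019 * avg_downstream_2019)
print(two_steps_2000, two_steps_2019)  # 23 350
```

The product for 2019, roughly 350 thousand, is consistent with the "more than 300 thousand" figure the summary reports.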
By 2019, the assets of connected private owners accounted for 35% of total assets in China, or about 45% of total assets of all private owners. At the same time, the share of connected state owners, the owners at the "top of the food chain" of the connected sector, was merely 21%, or about 60% of the share of connected private owners.
The authors estimate that the expansion of connected private owners may be responsible for an average annual growth of 4.2% in aggregate output of the private sector between 2000 and 2019.
The COVID-19 pandemic has led to a surge in demand for medical care, and healthcare systems across the United States have faced the risk of being overwhelmed. This creates an opportunity to study the labor markets that hospitals use to manage temporary staffing shortages. How effective are short-term labor markets at re-allocating workers to where they’re needed most?
Using data from a healthcare staffing firm, the authors study flexibility of nurse supply across the United States. At different points throughout the spring and summer, hospitals in affected regions needed more nurses to deal with pandemic-related surges. The authors find that job postings for temporary nurse positions tripled from their usual rate at the height of the pandemic’s first wave, and increased even faster in places facing extreme pandemic conditions. In New York state, job postings increased eightfold, while the compensation almost doubled.
The differences across states and across nursing specialties allow the authors to study workers' flexibility in this market. For example, there was little-to-no increase in wages for nurses working in labor and delivery units, as the first wave of the pandemic did not change the number of women who were already pregnant. In contrast, demand skyrocketed for nurses in intensive care units (ICU) and emergency rooms (ER). For these specialties, the number of job openings and compensation rates are positively associated with state-level COVID-19 case counts. In other words, more acutely ill COVID-19 patients imply an increased need for traveling nurses, and higher payments required to recruit them. Based on one estimate, ICU jobs increased by 239 percent during the first wave of the pandemic, while compensation increased 50 percent. ER jobs increased by 89 percent while compensation increased by 27 percent.
The large size of the United States, and nurses’ ability to work in different states, appears to be an important part of how this market adapted to the first waves of demand for COVID-19 nursing. An analysis by the authors demonstrates that the increases in quantity may understate the willingness of ICU and ER nurses to travel, given relatively higher compensation. In economic terms, they find nursing supply to be highly elastic, which suggests that price signals are an effective way of reallocating nurses to the parts of the country with increased staffing needs. Likewise, they find that workers who accept such postings travel longer distances from their homes to job locations when pay is higher.
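One crude way to read these magnitudes (my calculation, not the authors' estimation method) is an implied supply elasticity: the percent change in job quantity divided by the percent change in pay. Treating posted-job growth as quantity supplied is a simplification.

```python
def implied_supply_elasticity(pct_change_jobs, pct_change_pay):
    """Crude implied elasticity: % change in quantity over % change in pay.
    A rough reading of the summary's figures, not the authors' estimator."""
    return pct_change_jobs / pct_change_pay

icu = implied_supply_elasticity(239, 50)  # ICU nurses, first wave
er = implied_supply_elasticity(89, 27)    # ER nurses, first wave
print(round(icu, 1), round(er, 1))        # 4.8 3.3
```

Values well above 1 on both margins are consistent with the authors' finding that nursing supply is highly elastic.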
This work suggests that a national staffing market may offer timely flexibility to accommodate demand shocks. When demand increases in specific geographic areas, nurses’ ability to move can help mitigate a local shortage. That said, adjusting to a simultaneous national demand shock is harder. If numerous different regions experience simultaneous COVID-19 surges, meeting demand may require more than mobility across regions. Even though some nurses can travel, there is still a limited national supply of those with skills in demand.
Stock markets cratered after mid-February 2020 in countries around the world, as the coronavirus pandemic spread beyond China. In what many see as a puzzle, the global stock market recovered more than half its losses from March 23 to late May. US stock market behavior, in particular, has prompted much head scratching: Despite a failure to control the pandemic, the US stock market recovered 73% of its lost value by the end of May and 95% by July 22.
The authors show that stock prices and workplace mobility (a proxy for economic activity) trace out striking clockwise paths in daily data from mid-February to late May 2020. Global stock prices fell 30% from February 17 to March 12, before mobility declined. Over the next 11 days, stocks fell another 10 percentage points as mobility dropped 40%. From March 23 to April 9, stocks recovered half their losses and mobility fell further. From April 9 to late May, both stocks and mobility rose modestly. The same dynamic played out across the vast majority of the 31 countries in the authors’ sample.
A second finding reveals that stock prices were lower when countries imposed more stringent market lockdown measures: national stock prices are 3 percentage points lower when the own-country lockdown stringency index is one standard deviation higher, and 4.7 points lower when the global average stringency index is one standard deviation higher. These are separate effects, and both are highly statistically significant.
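Because the two effects are separate, they combine additively in a linear specification. A minimal sketch of the implied prediction (the function and its linearity are assumptions of this illustration, with coefficients taken from the summary):

```python
def predicted_price_effect(own_z, global_z, beta_own=-3.0, beta_global=-4.7):
    """Predicted change in national stock prices, in percentage points,
    given own-country and global lockdown stringency in standard deviations.
    Coefficients from the summary; the linear form is an assumption."""
    return beta_own * own_z + beta_global * global_z

# A country one standard deviation above average on both indices:
print(round(predicted_price_effect(1.0, 1.0), 1))  # -7.7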
The authors also closely analyzed stock prices in the world's two largest economies—China and the US. They find that the COVID-19 pandemic had much larger effects on stock prices and return volatilities in the US than in China. At least in part, the larger impact on American stock prices reflects China's greater success in containing the pandemic. However, the authors stress that the US stock market showed much greater sensitivity to pandemic-related developments long before it became evident that its early containment efforts would flounder.
To reduce the risk of exposure to the COVID-19 virus, roughly one-third of the American labor force has been working from home. Household expenditures have also changed dramatically, reflecting both the loss of income and consumption opportunities, and a shift toward household production. Additional time and consumption at home require significant increases in electricity consumption. This represents an additional and essential expense at a time when many households are also experiencing severe economic hardship.
Using data that provides hourly residential electricity consumption in Texas, along with another dataset that reports monthly consumption of electricity by customer class (residential, commercial, and industrial) for most U.S. utilities, the author found that the increase in residential consumption corresponds with those workers able to work from home. Also, while rising unemployment is strongly associated with commercial and industrial electricity declines, it is weakly associated with residential increases. Non-essential business closures do not have statistically significant impacts on usage beyond the direct potential employment effects.
Further, the author finds that the increase in residential consumption is not common in economic downturns; for example, it did not occur during the Great Recession. From April to July 2020, American households spent nearly $6 billion on excess residential electricity consumption. Electricity bills were over $20/month higher on average for utilities serving one-fifth of US households. This increased expenditure reduces the net benefits of working from home associated with less commuting and improved environmental quality. As industrial and commercial activity recovers, working from home has the potential to increase emissions from the power sector on net. In the same way that dense cities are more energy efficient than suburbs, it requires more energy to heat and cool entire homes than shared offices and schools.
The COVID-19 pandemic triggered a shift to working-from-home (WFH) that has already saved billions of hours of commuting time in the United States alone. The authors tap several sources, including original surveys of their own design, to quantify this time-saving effect and to develop evidence on how Americans are using the time savings.
Over the course of May, July, and August 2020, the authors surveyed 10,000 Americans aged 20-64 who earned at least $20,000 in 2019: 37.1% worked from home, 34.7% worked on business premises, and the rest were not working. These figures imply that WFH accounts for 52.3% of employment in the pandemic economy, which is similar to other estimates. By way of comparison, American Time Use Survey data imply a 5.2% WFH rate among employed persons before the pandemic.
To calculate aggregate time savings from increased WFH, the authors gathered data from two national surveys to determine the number of commuting workers and average commuting times. They find that commuting time dropped by 62.4 million hours per day. Cumulating these daily savings from mid-March to mid-September, the authors find that aggregate time savings is more than 9 billion hours.
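The cumulation step is simple multiplication over commuting days. A sketch of the arithmetic (the day count below is my assumption chosen to span mid-March to mid-September; the authors' calculation allows daily savings to vary over the period):

```python
daily_savings_hours = 62.4e6  # cited average daily commute-time savings

# Mid-March to mid-September spans roughly 26 weeks; the effective number
# of commuting days is an assumption of this sketch, not the authors' input.
commuting_days = 145

total_hours = daily_savings_hours * commuting_days
print(f"{total_hours / 1e9:.1f} billion hours")  # 9.0 billion hours
```

At the cited daily rate, roughly 145 commuting days recover the "more than 9 billion hours" total the authors report.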
The accompanying figure illustrates that people spent over one-third of their extra time on their primary job, and nearly one-third on childcare, outdoor leisure and a second job, combined.
As the travel industry experiences a pandemic-induced slump, many are wondering about the future of air travel and how long it will take until people are comfortable enough to fly for work or leisure.
According to the recent Survey of Business Uncertainty, conducted July 13-24, the authors find that firms anticipate slashing their post-pandemic travel budgets and tripling the share of external meetings (those with external clients, patients, suppliers, and customers) conducted virtually.
The authors’ findings cast doubt on the prospect of a quick and complete rebound in business travel. Firms anticipate slashing their pre-pandemic travel expenditures by nearly 30 percent once concerns over the virus subside (see Figure 1). The expected decline is particularly severe in information, finance, insurance, and professional and business services, where firms anticipate a nearly 40 percent reduction in travel spending after the pandemic ends.
Such a large, broad-based reduction in travel spending not only suggests a sluggish and potentially drawn-out recovery for the travel, accommodation, and transportation industries, but it also indicates that firms expect to shift from face-to-face meetings to lower-cost virtual meetings. And, as Figure 2 shows, that’s exactly what the authors found when they asked firms about the share of virtual meetings that they held in 2019 versus the share that they anticipate holding in a post-COVID world.
The authors provide evidence that COVID-19 shifted the direction of innovation toward new technologies that support video conferencing, telecommuting, remote interactivity, and working from home (collectively, WFH).
By parsing automated readings of the subject matter content of US patent applications, the authors find clear evidence that patents for WFH technologies are advancing at an accelerated rate. The accompanying figure reports the percentage of newly filed patent applications that support WFH technologies at a monthly frequency from January 2010 through May 2020. Interestingly, the WFH share of new patent applications rises from 0.53% in January 2020 to 0.77% in February, before the World Health Organization declared the novel coronavirus outbreak a global pandemic. China reported the first death from COVID-19 in early January and imposed a lockdown in Wuhan on January 23, 2020. By the end of January, the virus had spread to many other countries, including the United States. This figure suggests that these developments had already—by February—triggered the beginnings of a shift in new patent applications toward technologies that support WFH.
By March, COVID-19 cases and deaths had exploded in many localities and countries around the world. As the figure illustrates, the WFH percentage of new patent applications from March to May is nearly twice the January value, providing clear evidence for the authors that COVID-19 has shifted the direction of innovation toward technologies that support WFH.
The authors use individual and household-level micro data to document that those workers who have particularly low earnings, low wealth and low buffers of liquid assets are the ones employed in social-intensive occupations where they must show up for work. On the other hand, workers in flexible occupations with low social exposure tend to have higher earnings, robust balance sheets, and enough liquid wealth to weather the storm.
This strong positive correlation between economic exposure to the pandemic and financial vulnerability suggests that the effects of the pandemic have been extremely unequal across the population. This means that there are a range of economic and health policy options, with appropriate patterns of redistribution, that can be used to contain the virus and mitigate its economic effects.
The accompanying charts illustrate this phenomenon, and include the following occupational distinctions:
- Essential: Jobs that are needed for the economy to function and cannot be performed remotely, like nurses, firefighters, or mail carriers.
- Low social intensive/high flexibility: Remote jobs where products do not require high social density, like writers, software developers, and accountants.
- Low social intensive/low flexibility: Jobs that mostly require on-site presence but still allow for social distancing, like carpenters, electricians, and plumbers.
- High social intensive/high flexibility: Jobs that are best performed when workers are in contact with customers or other workers, but which can also be done remotely, like teachers and therapists.
- High social intensive/low flexibility: Jobs where workers need to be in close contact with customers or other workers, on-site, like cooks, waiters, and many performance artists.
Chart 1 reports the average earnings and employment shares of each of the five occupations; average annual earnings are highest for those with high flexibility to work remotely and low social interaction ($79,000), and lowest for those with low flexibility and high social interaction ($32,000). Chart 2 reveals that workers in rigid and essential occupations are significantly more financially vulnerable than those in flexible occupations.
How has government stimulus affected economic welfare? The Coronavirus Aid, Relief, and Economic Security (CARES) Act is a $2.2 trillion economic stimulus bill enacted in the spring of 2020 to support American families, workers and businesses. The authors find that programs under the CARES Act succeeded in mitigating economic welfare losses by around 20% on average, while leaving the cumulative death count effectively unchanged.
The model focused on the four most important components of the CARES Act for household welfare:
- Economic Impact Payments (EIP);
- Expanded Unemployment Insurance (UI);
- The Paycheck Protection Program (PPP); and
- Waiving of tax penalties for retirement account withdrawals.
Figure 1 presents a range of policy options that can be quantitatively compared. Fiscal support from the CARES Act shifts the mean Pandemic Possibility Frontier in the United States, allowing for the same number of fatalities at lower economic cost. In comparison, under the laissez-faire approach, fatalities are highest and the average economic cost of the pandemic is around two months of income, because individuals react to rising infections by reducing both social consumption and their supply of workplace hours.
The impact of the stimulus package on economic aggregates is substantial. Both the transfer programs (EIP, UI) and PPP boost aggregate consumption by around 6 percentage points, with about 4 points coming from PPP and the remainder from UI and EIP.
However, the stimulus package made the economic consequences of the pandemic more unequal: it redistributed heavily toward low-income households, while middle-income households gained little and will face a higher future tax burden.
In the model, labor incomes fall most for the lowest quartile of the pre-pandemic income distribution and remain persistently low. The drop in labor earnings for workers at the bottom of the income distribution was at least 10 percentage points deeper than for those at the top.
Oddly, while labor incomes have fallen more for poor households than for rich ones, and have remained persistently low, consumption expenditures of the poor initially fell by the most but then recovered more quickly than those of the rich. Many households at the bottom of the income distribution with liquidity constraints actually experienced large increases in their total incomes. For many in the bottom distribution, UI benefits exceeded their incomes (with replacement rates over 100%), and recipients of stimulus checks living hand-to-mouth spent their benefits in the first weeks after receipt. As a result, households with lower earnings, greater income drops, and lower levels of liquidity displayed stronger spending responses.
A consequence of the CARES Act is a large increase in government debt. The model shows that after eighteen months, the debt-to-GDP ratio increases by about 12% above its pre-pandemic level, compared with an increase of 3% without the stimulus package.
The debate about how to manage the health and economic effects of the COVID-19 pandemic revolves around varying degrees of lockdown vs. no lockdown at all. However, in their recent paper that describes the distributional effects of existing policies, Greg Kaplan, Benjamin Moll, and Giovanni Violante offer a novel alternative. Instead of shutting down businesses or allowing partial openings to prevent people from gathering and spreading the disease, why not tax people’s behavior instead?
In economic parlance, taxes that are meant to drive behavior to achieve a certain goal are known as Pigouvian taxes, after the English economist A.C. Pigou (1877-1959). An example is a factory that emits lots of air pollution, called a negative externality, which creates problems downwind at little extra cost to the factory. One way to get the factory to scrub its emissions is to tax it relative to the social costs that it is imposing.
Such taxes are also enacted to modify the negative externalities of personal behavior, like drinking alcohol and smoking cigarettes. And it is to personal behavior that the authors apply the idea of Pigouvian taxes in asking how best to limit the negative health and economic effects of COVID-19. Put directly: If you want to restrict the number of people gathering in a bar for a drink, you could tax that drink at a level that attains adequate social distancing without closing the bar. Too many people want to attend a baseball game? Price the tickets to optimize attendance. The same holds for work. Do people feel the need to attend their workplace even if their job does not require on-site presence? Then make them pay a tax approximating the cost they are inflicting on society. Such a tax will keep most workers at home.
However, either one of these taxes is particularly bad for a subset of individuals – in the case of a tax on social consumption, those working in the social sector, and in the case of a tax on on-site work, those in rigid occupations who must show up for work. These costs can be partially mitigated by using the revenues from the tax to provide lump-sum subsidies to precisely those workers that are most adversely affected.
This is a simple description of the authors’ more detailed analysis, which employs their distributional pandemic possibility frontier (PPF) analysis, a technique that describes the heterogeneous effects of policies. In the accompanying figures, this dispersion of effects is shown by the colored bands that extend around the bold lines. Figure 1 (orange line) traces the PPF for a 30% tax on social consumption that is kept in place for different durations. Deaths due to COVID-19 are plotted on the horizontal (x) axis, and economic cost, as measured in multiples of monthly income, is on the vertical (y) axis. As we can see, the longer a policy is kept in place, the greater is the dispersion in welfare cost.
Alternatively, policymakers could impose a tax on hours worked in the workplace and then rebate the proceeds to workers in occupations that demand their appearance. This tax targets the labor supply margin as the source of the negative externality, as opposed to the social consumption margin. The green line in Figure 1 traces the PPF for a 30% tax on workplace hours with different durations. This policy generates a flatter PPF than a social consumption tax. With a tax on workplace hours in place for 2 months, the mean economic welfare loss is about 2 times monthly income, which is about the same as in the laissez faire scenario, but with a substantially smaller number of deaths, by around 0.1% of the population.
The authors do not claim that such alternative policies would be politically expedient to implement, and they detail limitations and challenges in their paper. However, they stress the lesson that targeted policies do exist that offer a more favorable average trade-off between lives and livelihoods than blunt lockdowns.
These numbers suggest, among other things, that male football and basketball athletes subsidize other activities and other athletes. The data also raise questions about whether athletes could—or should—retain a higher percentage of their sports’ earnings. To investigate these and other questions, the authors collected comprehensive data covering revenue and expenses for FBS schools between 2006 and 2019, and assembled new data using complete rosters of students matched to neighborhood socioeconomic characteristics.
Among their findings, the authors estimate that rent-sharing leads to increased spending on women’s sports and other men’s sports as well as increased spending on facilities, coaches’ salaries, and other athletic department personnel. This transfer also occurs at the player level; that is, a subset of athletes are subsidizing others. Given the demographics of men’s football and basketball and those of other sports, the authors find that the existing limits on player compensation effectively transfer resources away from students who are more likely to be black and come from poor neighborhoods toward students who are more likely to be white and come from higher-income neighborhoods.
Regarding compensation, the authors calculated a potential wage structure for football and men’s basketball players based on collective bargaining agreements in professional sports leagues, where athletes generally retain about 50 percent of earnings. They estimate that if FBS football and men’s basketball players split 50 percent of revenue equally, each football player would receive $360,000 per year and each basketball player would earn nearly $500,000 per year. If athletes were paid relative to how various positions are compensated, the two highest paid football positions (starting quarterback and wide receiver) would be paid $2.4 million and $1.3 million, respectively. Similarly, starting basketball players would earn between $800,000 and $1.2 million per year.
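The equal-split calculation is straightforward. A minimal sketch (the revenue and roster figures below are hypothetical round numbers for illustration, not the authors' data):

```python
def equal_split_pay(revenue, share, n_players):
    """Pay per player when a fixed share of revenue is split equally.
    Inputs below are hypothetical illustrative values, not the paper's data."""
    return revenue * share / n_players

# E.g., a program with $85M in football revenue and 118 roster spots,
# retaining 50 percent of revenue as in pro collective bargaining agreements:
pay = equal_split_pay(85e6, 0.50, 118)
print(round(pay))  # on the order of $360,000 per player
```

With plausible revenue and roster sizes, the per-player figure lands near the $360,000 the authors report for football.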
The authors have made the data in their paper publicly available online for the benefit of future research.
Sixty-eight percent of workers who lost their jobs due to the COVID pandemic received benefits that exceeded their previous wages,¹ raising the question of whether those workers would decline offers to retake their old jobs at the prior wage.
To investigate this important policy question, the authors devised a model that approximates the environment faced by unemployed workers, including the short duration of the extra benefits, the likelihood their offer to take back their old job stays valid, the likelihood they will find another job if they turn down their previous employer’s offer, and related issues. They check their model’s results against available data. Except in special cases, the authors find that unemployed workers would accept the offer to return to their old jobs at their old wage.
The authors first consider what workers would do if they made a purely static decision: keep the higher benefits or return to work at a lower wage. In that case, 68% of workers would choose the higher benefits under the CARES Act. However, when workers weigh the dynamic considerations, such as whether the benefits will end, whether the job offer is limited in time, and whether other jobs are available, most would accept the job offer and return to work. Only a worker with a low previous wage and a near-certain return-to-work offer would turn down their old job and remain unemployed under the CARES Act.
According to this analysis, the CARES Act did not cause high unemployment in April to July 2020 by decreasing labor supply. While the precise cause is beyond the scope of this research, the authors do note the likelihood of low labor demand, and/or low labor supply due to health risks.
1 See Ganong, P., P. Noel, and J.S. Vavra (2020): “US Unemployment Insurance Replacement Rates During the Pandemic,” BFI Working Paper and BFI COVID-19 Fact.
Signed into law on March 27, 2020, the CARES Act was exceptional both in size (over $2 trillion in allocated funds) and in the speed at which it was legislated and implemented. A major component was a one-time transfer to all qualifying adults of up to $1200, with $500 per additional child. How effective were these transfers in stimulating the consumption of recipients?
Using a large-scale survey of US households, the authors document that only 15% of recipients of this transfer say that they spent (or planned to spend) most of their transfer payment, with the large majority of respondents saying instead that they either mostly saved it (33%) or used it to pay down debt (52%). When asked to provide a quantitative breakdown of how they used their checks, US households report having spent approximately 40% of their checks on average, with about 30% of the average check saved and the remaining 30% used to pay down debt. Little of the spending went to hard-hit industries selling large durable goods (cars, appliances, etc.). Instead, most of the spending went to food, beauty, and other non-durable consumer products that had already seen large spikes in spending because of hoarding.
These average responses mask significant differences across households. For example, lower-income households were significantly more likely to spend their stimulus checks, as were households facing liquidity constraints. Individuals out of the labor force were also more likely to spend their checks than either employed or unemployed individuals, consistent with motives of consumption smoothing and hand-to-mouth behavior.
Other groups that were more likely to report spending most of their checks were those living in larger households, men, Hispanics and those with lower education. In contrast, African-Americans were much more likely to report using their checks primarily to pay off debt, as were older individuals, those with mortgages, unemployed workers and those reporting to have lost earnings due to COVID. Among those who did not wish to spend their stimulus payment and had to decide between paying off debt and saving, higher-income individuals were more likely to save than to pay off debts, while those with mortgages, renters, and financially constrained individuals were much more likely to pay off debts.
Finally, and importantly, 90% of employed workers who received a stimulus check reported that the transfer had no effect on their work effort (as opposed to, e.g., searching harder for new work) while 80% of those employed workers who did not qualify for a check reported that receiving such a check would not affect their work effort; the same holds for people out of the labor force. For unemployed workers, approximately 20% of those receiving a payment said that this made them search harder for a job, while two-thirds report that it had no effect.
These results suggest that additional payments to households during the height of the pandemic—either in the form of stimulus checks or additional UI benefits—are unlikely to negatively affect the recovery because of disincentives to work.
Political polarization and competing narratives can undermine public policy implementation. Partisanship may play a particularly important role in shaping heterogeneous responses to collective risk during periods of crisis when political agents manipulate signals received by the public (i.e., alternative facts). We study these dynamics in the United States, focusing on how partisanship has influenced the use of face masks to stem the spread of COVID-19.
Using a wealth of micro-level data, machine learning approaches, and a novel quasi-experimental design, we document four facts: (1) mask use is robustly correlated with partisanship; (2) the impact of partisanship on mask use is not offset by local policy interventions; (3) partisanship is the single most important predictor of local mask use, not COVID severity or local policies; (4) Trump’s unexpected mask use at Walter Reed on July 11, 2020, significantly increased social media engagement with, and positive sentiment toward, mask-related topics. These results unmask how partisanship undermines effective public responses to collective risk and how messaging by political agents can increase public engagement with mask use.
This research offers insights into the impact of the 2005 Bankruptcy Abuse Prevention and Consumer Protection Act (BAPCPA), which are especially relevant as policymakers discuss bankruptcy reform proposals.
The authors find that bankruptcy filings fell by roughly 50 percent after BAPCPA, with about one million fewer bankruptcy filings in the two years after the law was passed. Reduced filings meant lower costs for credit card companies and, likewise, lower interest rates for credit card customers. The authors find that a one-percentage-point decline in bankruptcy-filing risk within a credit-score segment decreases average interest rates by 70–90 basis points.
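The reported pass-through can be expressed as simple arithmetic. The sketch below is illustrative only (it is not the authors' estimation method); the linear mapping and the function name are assumptions, with only the 70–90 basis-point range taken from the text:

```python
# Illustrative sketch: mapping a change in bankruptcy-filing risk within a
# credit-score segment to an implied change in average credit card interest
# rates, using the paper's reported pass-through of 70-90 basis points per
# percentage point of filing risk.

def rate_change_bps(filing_risk_change_pp, pass_through_bps=(70, 90)):
    """Return the implied (low, high) interest-rate change in basis points
    for a given change in bankruptcy-filing risk, in percentage points."""
    lo, hi = pass_through_bps
    return (filing_risk_change_pp * lo, filing_risk_change_pp * hi)

# A one-percentage-point decline in filing risk implies roughly a
# 70-90 bps decline in average rates:
low, high = rate_change_bps(-1.0)
print(low, high)  # -70.0 -90.0
```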
The authors also addressed the important question of who was prevented from filing for bankruptcy by BAPCPA despite standing to benefit from its relief, focusing on one adverse shock faced by consumers: hospitalization. The results were stark. They find that an uninsured hospitalization increases the likelihood of filing for bankruptcy by 1.5 percentage points prior to BAPCPA, but by just 0.4 percentage points after the reform. Put another way, the authors find that uninsured hospitalizations result in a similar amount of debt sent to collections under both bankruptcy regimes, but 70 percent fewer bankruptcy filings after the reform. This reduction is persistent over time.
This final finding represents a key contribution of this research. Hospitalization is just one example of an adverse shock, but to the extent that this finding generalizes to other types of financial setbacks, these results provide suggestive evidence that the bankruptcies deterred by BAPCPA were not limited to the most “abusive” filings. Instead, these results imply that BAPCPA may have meaningfully reduced the insurance value of bankruptcy.
About one in five US workers received unemployment insurance benefits in June 2020, which is five times greater than the highest UI recipiency rate previously recorded. Yet little is known about how unemployment benefits are affecting the economy today. To fill this gap, the authors study the consumption of benefit recipients during the pandemic using data from the JPMorgan Chase Institute.
In normal times, spending among unemployment benefit recipients falls by about seven percent when they become unemployed because typical benefits replace only a fraction of lost earnings. However, the CARES Act added a $600 weekly supplement to state unemployment benefits, replacing more than 100 percent of lost earnings for two-thirds of unemployed workers. As a result, the authors find very different spending patterns for unemployed households during the pandemic.
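The replacement-rate arithmetic above can be sketched in a few lines. The 50%-of-earnings state benefit formula and the $450 weekly cap below are hypothetical simplifications (actual formulas vary by state); only the $600 CARES Act supplement comes from the text:

```python
# Minimal sketch of UI replacement rates under the CARES Act supplement.
# State benefit formula here (50% of prior weekly earnings, capped) is a
# made-up simplification; real state formulas differ.

def replacement_rate(weekly_earnings, state_cap=450.0, supplement=600.0):
    """Share of lost weekly earnings replaced by state UI benefits
    plus the CARES Act $600 weekly supplement."""
    state_benefit = min(0.5 * weekly_earnings, state_cap)
    return (state_benefit + supplement) / weekly_earnings

# For a worker earning $800/week, benefits replace more than 100% of earnings:
print(round(replacement_rate(800.0), 2))  # 1.25
```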
Although average spending fell for all households as the economy shut down at the start of the pandemic, the authors find that unemployed households actually increased their spending beyond pre-unemployment levels once they began receiving benefits. The fact that spending by benefit recipients rose during the pandemic instead of falling, as in normal times, suggests that the $600 supplement has helped households to smooth consumption and thus propped up aggregate demand.
The authors also examine spending patterns of the unemployed while waiting for benefits to arrive. Households that receive benefits soon after job loss show no relative decline in spending, while households that wait two months to receive benefits due to processing delays have large spending declines. Compared to the employed, spending falls by 20 percent prior to receiving benefits. This suggests that delays have imposed substantial hardship on benefit recipients.
This research offers insights into the evolving reactions of Americans to the COVID-19 pandemic along political lines, including their reactions to mask-wearing and the likelihood of further lockdowns. The project consists of seven survey waves beginning in April 2020 and ending in November 2020. These three select findings are compiled from the first five waves, conducted from April 6 to May 18.
1. A loss of income due to the pandemic led many to admit that the COVID-19 crisis is worse than they expected, with this effect mitigated by the choice of news source.
In the first wave of the survey, commencing April 6, 35% of Republicans said the media were exaggerating the virus’ threat, compared to only 9% of Democrats. In the fourth wave, beginning April 27, 57% of Republicans said the pandemic was worse than they expected, compared with 82% of Democrats. Importantly, as illustrated in the accompanying figure, respondents who lost income were more likely to report that COVID-19 was worse than anticipated: 62% vs. 48% for Republicans, and 84% vs. 75% for Democrats. Regarding media influence, Republicans who watched Fox News were significantly less likely to report that the virus was worse than expected (44%) than Republicans who did not watch Fox News (56%). Similarly, Republicans who did not support Trump were 50% more likely to report that the crisis was worse than expected than those expecting to vote for Trump.
2. An important factor influencing support for mask wearing is trust in the scientific community. This has decreased significantly among Republicans since the start of the pandemic.
Between the beginning and end of April 2020 (waves one through four), Democrats’ confidence in the scientific community was mostly unchanged, at 70% vs. 68%. For Republicans, confidence fell from 51% to 38%.
3. Political views and perception of the gravity of the crisis also influenced the likelihood of anticipating a second lockdown.
At the end of April, about 30% of Republicans said that the government should fully reopen the economy in May, compared to about 5% of Democrats. In mid-May, the authors asked 398 Democrats and Republicans whether they thought their state would need to reintroduce lockdown measures before the end of the year; 43% of Republicans said that such a lockdown was likely vs. 76% of Democrats.
Finally, while the authors do not hazard predictions, they stress that their research reveals the influence of dramatic events in changing or reinforcing people’s views and preferences, even if those events occur over a short period. Their next survey, slated for October, will likely provide key insights leading into the election.
 Researchers at the Poverty Lab and the Rustandy Center for Social Sector Innovation at the University of Chicago are conducting this longitudinal survey in partnership with NORC at the University of Chicago, an independent, non-partisan research institution. The findings refer to different time frames according to the questions analyzed. Surveys are administered to the same sample of more than 1,400 Americans based on NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population.
While active funds as a whole experience outflows during the crisis, funds that apply exclusion criteria in their investment process receive net inflows. Funds with higher sustainability ratings from Morningstar also receive larger flows, driven especially by environmental concerns. The pre-crisis trend of flows toward sustainability-oriented funds thus continues during the COVID-19 crisis. The fact that investors retain their commitment to sustainability during a major crisis suggests they have come to view sustainability as a necessity rather than a luxury good.
Despite rich investment opportunities presented by market dislocations, most US active equity mutual funds underperform passive benchmarks between February 20 and April 30, 2020. The average fund underperforms the S&P 500 index by 5.6% during the ten-week period (29% annualized). The average underperformance relative to the style benchmark is 2.1% (11% annualized). Eighty percent of funds have negative CAPM alphas, and average fund alphas computed relative to five different factor models are all negative. These results undermine the popular hypothesis that active funds make up for their disappointing unconditional performance by performing well in recessions.
- COVID-19 Keeping Some Older Workers Home … Permanently
The arrival of COVID-19 resulted in dramatic changes in US labor markets, with initial unemployment claims skyrocketing and a sharp decline in labor-force participation of more than 7 percentage points. Less noticed was the key driver of the drop in labor-force participation: a wave of earlier-than-planned retirements. The authors use customized surveys on a panel of more than 10,000 Americans before, and at the onset of, COVID-19 to show that the share of Americans not actively looking for work because of retirement increased by 7 percentage points between January and early April of 2020.
This increase is more than twice as large among women as among men, making early retirement a major force in accounting for the decline in labor-force participation. Given that the age distribution of the two surveys is comparable, this suggests that the onset of the COVID-19 crisis led to a wave of earlier-than-planned retirements. Given seniors’ high vulnerability to the COVID-19 virus, this may reflect in part a decision to leave employment earlier than planned because of the higher risks of working, or a choice not to look for new employment and instead retire after losing work in the crisis.
To better understand which parts of the age distribution might drive the increase of retirees in their survey, and whether economic incentives at least partially play a role, the authors plot in the accompanying figure the fraction of respondents reporting being retired (left scale) both in the pre-crisis wave (yellow line) and in the crisis wave (red), together with the difference between the two (blue line, right scale). The crisis has shifted the whole distribution up; that is, for each part of the age distribution, a larger fraction of the survey population now reports being retired. Hence, even for those who are well before retirement age, the authors note a large increase in early retirement. Moreover, a notable jump in the difference occurs at age 66, the first age at which people can claim retirement benefits without penalty from the Social Security Administration (SSA). Historically, few people have returned from retirement to the labor force, which hints toward a sluggish recovery down the road.
- Paycheck Protection Program Exposure (PPPE) and Post-PPP Outcomes
This work builds on the authors’ late-April research (The Targeting of the Paycheck Protection Program), which found no evidence that PPP funds flowed to areas more adversely affected by the economic effects of the pandemic, and which showed that lender heterogeneity in PPP participation explains, in part, the weak correlation between economic declines and PPP lending.
In this work, the authors present two new findings:
- They find no evidence that the PPP had a substantial effect on local economic outcomes during the first round of the program. The authors examined weekly firm-level employment and shutdown data, and they confirmed this evidence using initial unemployment insurance claims at the county level. The absence of a significant effect on UI claims during the initial weeks of the program is striking, especially given that one motivation for the PPP was to provide “relief” for congested state unemployment insurance systems. If the significant funds disbursed by PPP had little effect on unemployment, then what did firms do with the extra cash? The answer follows:
- The authors draw on Census Small Business Survey data to reveal that firms used PPP funds to increase liquidity, to make loan payments, and to meet other financial obligations. For these firms, the PPP may have strengthened balance sheets at a time when shelter-in-place orders prevented workers from working, and when unemployment insurance was more generous than wages for a large share of workers. Importantly, this suggests that while employment effects are small in the short run, they may well be positive in the medium run because firms are less likely to close permanently. Finally, many less affected firms received PPP funding and may have continued as they would have in the absence of the funds, either by spending less out of retained earnings or by borrowing less from other sources.
For policymakers charged with crafting effective policies that meet desired goals, measuring the social insurance value of the PPP is essential. As data become available, the authors will continue to examine the program’s effects on firms’ ability to meet commitments, as well as other medium- and long-term effects.
The list of uncertainties surrounding the COVID-19 pandemic is long, beginning with health-related issues and extending to the economy, including infection rates, vaccine development, possible new infection waves, near-term policy effects, economic recovery rates, government interventions, shifts in consumer spending, and many other issues.
To characterize the nature and scope of economic uncertainty before and during the pandemic, the authors examined a number of forward-looking uncertainty measures. Those measures are illustrated in the figures below; broadly speaking, they reveal huge, and varying, uncertainty jumps, ranging from an 80 percent rise (relative to January 2020) in two-year implied volatility on the S&P 500 to a 20-fold rise in forecaster disagreement about UK growth. Also, time paths differ: implied volatility rose rapidly from late February, peaked in mid-March, and fell back by late March as stock prices began to recover. In contrast, broader measures of uncertainty peaked later and then plateaued as job losses mounted, highlighting the difference in uncertainty measures between Wall Street and Main Street.
While cautious about predictions, the authors do suggest that such high levels of uncertainty are not conducive to a rapid economic recovery. Elevated uncertainty generally makes firms and consumers cautious, retarding investment, hiring, and expenditures on consumer durables. Given the scale of recent job losses and the collapse in investment, a strong, rapid recovery would require a huge surge in new activity, which unprecedented levels of uncertainty will discourage.
- The Labor Market Collapse
The COVID-19 pandemic hit the US labor market with astonishing speed. For the week ending March 14, 2020, there were 250,000 initial unemployment insurance claims—about 20% more than the prior week, but still below January levels. Two weeks later, there were over 6 million claims, shattering the pre-2020 record of 1.07 million, set in January 1982. As of mid-June, claims remained above one million for 13 consecutive weeks, with a cumulative total of over 40 million. At the same time, the unemployment rate spiked from 3.5% in February to 14.7% in April, and the number of people at work fell by 25 million.
Given the rapid nature of these extensive job losses, and the inability of existing labor market information systems to keep up with such changes, the authors devised a measurement method that combines data from traditional government surveys with non-traditional data sources, particularly daily work records compiled by Homebase, a private sector firm that provides time clocks and scheduling software to mostly small businesses. The authors linked this data with a survey answered by a subsample of Homebase employees, as well as other data sources to measure the effects of shelter-in-place orders and other policies on employment patterns from March to early June.
The unemployment rate (not seasonally adjusted) spiked by 10.6 percentage points between February and April, reaching 14.4%, while the employment rate fell by over 9 percentage points over the same period. These two-month changes were roughly 50% larger than the cumulative changes in the respective series in the Great Recession, which took over two years to unfold. Both unemployment and employment recovered a small amount in May, but remain in unprecedented territory.
The authors’ novel methodology delivers insights beyond official statistics. For example, Panel B of the accompanying Figure reveals that total hours worked at Homebase firms fell by approximately 60% between the beginning and end of March, with the bulk of this decline in the second and third weeks of the month—facts that go unrevealed in government data. The largest single daily drop was on March 17, when hours, expressed as a percentage of baseline, fell by 12.9 percentage points from the previous day. The nadir seems to have been around the second week of April. Hours have grown slowly and steadily since then.
The CARES Act, signed into law on March 27 to combat the economic fallout from the COVID-19 pandemic, is the largest economic stimulus in US history. Among its many provisions, CARES also contained several corporate tax breaks. Ostensibly, these tax breaks provided immediate liquidity and incentives for firms to avoid layoffs. However, the tax breaks have received a lot of criticism, with some calling them a “giveaway” to large corporations, and several Democratic politicians have introduced measures to scale them back.
An analysis of SEC filings—in which publicly-traded US firms are required to discuss material events—since the passage of CARES reveals the following:
- Most firms (61%) do not discuss the CARES tax provisions in their filings, suggesting the tax provisions did not materially impact most publicly-traded US firms.
- The most commonly discussed tax provision was the NOL carryback rule, which allows firms to recoup prior taxes paid. While this provision can provide immediate liquidity, it only applies to firms that were unprofitable in the years immediately prior to the pandemic. The other tax provisions were discussed by fewer than 15% of firms.
- The firms that were most likely to discuss the NOL carryback provision were those with pre-pandemic losses and large stock price declines during the pandemic, rather than those operating in states or sectors with large increases in unemployment.
- In contrast, the payroll tax deferral, which was designed to provide liquidity to a broad sample of firms, was more likely to be discussed by firms with more employees and lower cash holdings. And the employee retention credit, intended to encourage firms to keep employees on payroll while they were not working, was more likely to be discussed by firms operating in industries and states with larger unemployment changes. Thus, these two tax provisions appear more likely to benefit firms hardest hit by the pandemic.
- Certain firms (including those that eroded their liquidity with large shareholder payouts and engaged in substantial lobbying during the CARES Act debate) may have avoided discussing these tax breaks in their SEC filings for fear of negative public attention.
The authors acknowledge that firms may benefit from the provisions without discussing them in their SEC filings, and thus the full picture as to how these tax breaks affected U.S. firms will not be clear for some time. However, these early findings cast some doubt on the idea that the CARES corporate tax provisions provided significant liquidity and incentives to retain employees for most publicly-traded U.S. firms. Furthermore, the most frequently discussed tax provision—the NOL carryback—may have primarily benefitted the firms (and their shareholders) whose stock price had deteriorated the most prior to CARES, rather than the firms operating in areas hardest hit by the pandemic.
Using data from ADP,¹ one of the world’s largest human resources management companies, to measure changes in the US labor market during the early stages of this “Pandemic Recession,” the authors find that paid US employment declined by about 21% between mid-February and late April 2020. Given that US private employment in February was 128 million workers (on a non-seasonally adjusted basis), the ADP data suggest that total paid employment in the US fell by about 26.5 million through late April. As of late May, paid employment is still about 19.5 million jobs below its mid-February levels.
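The headline figures above are internally consistent, as a quick back-of-the-envelope check shows (both inputs are taken directly from the text):

```python
# Back-of-the-envelope check of the employment figures quoted above.
feb_private_employment = 128e6   # US private employment, Feb 2020 (NSA)
jobs_lost_late_april = 26.5e6    # implied paid-employment decline

decline_share = jobs_lost_late_april / feb_private_employment
print(round(decline_share * 100, 1))  # 20.7 -- i.e., "about 21%"
```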
The authors reveal that employment declines were disproportionately concentrated among lower-wage workers: 30% of all workers in the bottom quintile of the wage distribution lost their job, at least temporarily, through May. The comparable number for workers in the top quintile was only 5%. Finally, the authors reveal that businesses have cut nominal wages for about 10 percent of continuing employees, about twice the rate during the Great Recession, while forgoing regularly scheduled wage increases for others.
1 ADP processes payroll for about 26 million US workers each month, representing the US workforce along many labor market dimensions. These sample sizes are orders of magnitude larger than most household surveys that measure individual labor market outcomes at monthly frequencies.
Employment declines during the Pandemic Recession were much larger for businesses with fewer than 50 employees, with closures playing an even larger role for this size group. Businesses with fewer than 50 employees saw paid employment declines of more than 25 percent through April 18, while those with between 50 and 500 employees and those with more than 500 employees, respectively, saw declines of 15-20 percent during that same period, and reached troughs a week or two later than the smallest businesses.
The largest declines in employment were in sectors that require substantial interpersonal interaction. Through late April, paid employment in the “arts, entertainment and recreation” and “accommodation and food services” sectors (i.e., leisure and hospitality) both fell by more than 45 percent, while employment in “retail trade” fell by almost 30%. Businesses like laundromats and hair stylists also saw employment declines of nearly 30%. Despite a boom in emergency care treatment within hospitals, the “health care and social assistance” industry experienced a 16.5% decline in employment through late April.
The spread of COVID-19 has not been uniform across the country. Urban areas have generally seen more aggressive spreads of the virus. These differences manifested themselves somewhat in the labor market as well. There is a strong relationship between the exposure to COVID-19 and employment declines.
While employment fell in all states, the employment declines were largest in those states that had more disease exposure. The authors compare two groups of states: (1) a set of large states that broadly opened in late April or early May (FL, GA and TX), and (2) a set of large states that broadly opened in late May and early June (IL, PA, VA and WA). Looking at employment in the Food and Accommodations Sector for both groups of states, the authors find employment in this sector fell similarly through mid-April in both state groupings. Starting in late April, employment in this sector within the states opening early increased faster than employment in the states opening later. In the states that opened early, however, employment in this sector is still 40 percent below February levels as of mid-May. This suggests that opening does not guarantee employment will fully rebound in these sectors.
The authors also found that employment in these sectors within states that opened later started to increase even prior to those states re-opening. While the increase was modest, it shows that demand was rising even before the states officially re-opened. These findings suggest that researchers and policymakers seeking to link employment gains to re-opening schedules should exercise caution.
Through late April, women experienced a decline in employment that was 4 percentage points larger than men’s (22 percent for women vs. 18 percent for men). The gap has grown slightly to 5 percentage points through mid-May. These trends stand in sharp contrast to prior recessions, in which men experienced larger job declines. Why are women being hit harder in the Pandemic Recession? The answer is not clear. One obvious factor is that traditionally female-dominated industries, such as the retail, leisure, and hospitality industries, are being hit harder by the recession. The authors find, however, that less than 0.5 percentage points of the 4-5 percentage point difference in employment losses between men and women can be explained by industry. In other words, within industry sectors, women are experiencing larger job declines relative to men.
More research using household-level surveys with additional demographic variables can explore this critical question. It may be that other factors of the pandemic, such as an increased need for childcare, will explain some portion of the gender gap in employment losses during the recession.
The authors use anonymized bank account information on millions of JPMorgan Chase customers to measure how spending and savings over the initial months of the pandemic vary with household-specific demographic characteristics, like pre-pandemic income and industry of employment. The authors find that most households cut spending dramatically in early March, with declines particularly concentrated in sectors sensitive to government shutdowns and increased health risk, like travel, restaurants, and entertainment. Richer households, who typically spend more in these categories, cut their spending slightly more than poorer households.
Starting in mid-April, after government stimulus checks and expanded unemployment benefits are put in place, spending by poor households recovers more rapidly than spending by rich households. At the same time, poor households also have the largest growth in liquid checking account balances. Thus, poorer households simultaneously have faster growth of spending and savings starting in mid-April, even though they face greater exposure to labor market disruptions and unemployment. This suggests an important role for government transfers in stabilizing income and spending during the initial stages of the pandemic, especially for low-income households. This in turn suggests that phasing out broad stimulus too quickly could potentially transform a supply-side recession driven by direct effects of the pandemic into a broader and more persistent recession caused by declines in income and aggregate demand.
To address the gap in critical, real-time information about COVID-19’s effects on US income and poverty (official estimates will not be available until September 2021), the authors constructed new measures of income distribution and income-based poverty with a lag of only a few weeks, using high frequency data for a large, representative sample of US families and individuals. The authors relied on the Basic Monthly Current Population Survey (Monthly CPS), which includes a greatly underused global question about annual family income, and which allows them to determine the immediate impact of macroeconomic conditions and government policies.
The authors’ initial evidence indicates that, at the start of the pandemic, government policy effectively countered its effects on incomes, leading poverty to fall and low percentiles of income to rise across a range of demographic groups and geographies. Their evidence suggests that income poverty fell shortly after the start of the COVID-19 pandemic in the US. In particular, the poverty rate, calculated each month by comparing family incomes for the past twelve months to the official poverty thresholds, fell by 2.3 percentage points, from 10.9 percent in the months leading up to the pandemic (January and February) to 8.6 percent in the two most recent months (April and May). This decline in poverty occurred even though employment fell by 14 percent in April, the largest one-month decline on record.
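The monthly poverty measure described above amounts to comparing each family's trailing twelve-month income to its poverty threshold and taking the weighted share falling below. A stylized sketch with made-up incomes, thresholds, and weights (the authors use Monthly CPS microdata and official thresholds):

```python
# Stylized poverty-rate calculation: the weighted share of families whose
# annual income falls below their poverty threshold. All numbers below are
# invented for illustration.

def poverty_rate(incomes, thresholds, weights):
    poor = sum(w for y, t, w in zip(incomes, thresholds, weights) if y < t)
    return poor / sum(weights)

incomes    = [9_000, 35_000, 14_000, 60_000]   # annual family income ($)
thresholds = [13_300, 26_200, 17_240, 26_200]  # threshold by family size ($)
weights    = [1.0, 1.0, 1.0, 1.0]              # survey weights

print(poverty_rate(incomes, thresholds, weights))  # 0.5
```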
This research reveals that government programs, including the regular unemployment insurance program, the expanded UI programs, and the Economic Impact Payments (EIPs), can account for more than the entire decline in poverty that the authors find, and more than half of the decline can be explained by the EIPs alone. These programs also helped boost incomes for those further up the income distribution, but to a lesser extent.
- Expected Rates of Employment Growth and Excess Job Reallocation Rate
Nearly 28 million persons in the US filed new claims for unemployment benefits over the six-week period ending April 25. Further, the US economy shrank at an annualized rate of 4.8% in the first quarter of 2020, and many analysts project it will shrink at a rate of 25% or more in the second quarter. Yet, even as much of the economy is shuttered, some firms are expanding in response to pandemic-induced demand shifts.
By pairing anecdotal evidence from news reports and other sources with the rich dataset provided by the Survey of Business Uncertainty (SBU), the authors construct novel, forward-looking measures of expected job reallocation across US firms. The authors draw on two special questions fielded in the April 2020 SBU: one asks (as of mid-April) about the coronavirus impact on own-company staffing since March 1, 2020, and another asks about the anticipated impact over the ensuing four weeks. Responses reveal that pandemic-related developments caused near-term layoffs equal to 12.8 percent of March 1 employment and new hires equal to 3.8 percent. In other words, the COVID-19 shock caused 3 new hires in the near term for every 10 layoffs.
Firm-level sales forecasts show a similar pattern, further supporting the authors’ view that COVID-19 is a major reallocation shock. In addition, the authors’ measure of the expected excess job reallocation rate rose from 1.5% of employment in January 2020 to 5.4% in April. The April value is 2.4 times the pre-COVID average and is, by far, the highest value in the short history of the series.
The authors also draw on special questions put to firms in the May 2020 SBU to quantify the anticipated shift to working from home after the coronavirus pandemic ends, relative to the situation that prevailed before the pandemic. They find that full work days performed at home will triple in the post-pandemic economy. This tripling will involve shifting one-tenth of all full work days from business premises to residences (and one-fifth for office workers). Since the scope for working from home rises with worker earnings, the shift in worker spending power from business districts to locations nearer residences is even greater.
Finally, the authors find that much of the near-term reallocative impact of the pandemic will persist, as indicated by their forward-looking reallocation measures and their evidence on the shift to working from home. Drawing on special questions in the April SBU and historical evidence of how layoffs relate to realized recalls, they project that 32% to 42% of COVID-induced layoffs will be permanent. The authors also construct projections for the permanent-layoff share of recent job losses from other sources, obtaining similar results.
 The SBU is a monthly panel survey developed and fielded by the Federal Reserve Bank of Atlanta in cooperation with Chicago Booth and Stanford.
- Treasury Yields and Volatility Index (VIX) During the COVID-19 Crisis
In financial crises, such as that of 2008, US Treasuries are typically viewed as the most liquid and safe assets in the world, reflected in their rising prices when markets rush to these relatively secure assets. However, this did not occur in March 2020 during the COVID-19 pandemic. True to script, stock prices fell dramatically, the VIX index of implied stock return volatility spiked, credit spreads widened, and the dollar appreciated. In sharp contrast to previous crisis episodes, though, prices of long-term Treasury securities fell sharply.
What happened? The authors review empirical evidence of investor flows and build a model to shed light on the mechanism behind this episode. Their model introduces repo financing as a key part of dealers’ intermediation activities, through which levered investors obtain funding from dealers who are subject to a balance sheet constraint (the Supplementary Leverage Ratio, or SLR) introduced by regulatory reforms after the 2007–09 crisis. Consistent with their model, the spread between the Treasury yield and the overnight-index swap (OIS) rate and the spread between dealers’ reverse repo and repo rates were both sharply positive in the COVID-19 crisis, whereas both were sharply negative in the 2007–09 financial crisis.
The observed movements in Treasury yields in March 2020 can be rationalized as a consequence of selling pressure that originated from large holders of US Treasuries interacting with intermediation frictions, including regulatory constraints such as the SLR. Evidently, the current institutional environment in the Treasury market is such that it cannot absorb large selling pressure without substantial price dislocations, or intervention by the Federal Reserve as the market maker of last resort. The safe-asset status of US Treasuries should not be taken for granted.
- Consumer Visits Over Time by Store Size/Traffic
The steep drop in US economic activity in recent months has been driven in large part by the fall-off in consumer spending at retail stores, restaurants, entertainment spots, and other social venues. This decline in spending has roughly correlated with government shelter-in-place (SIP) orders, and has given rise to fierce debates over “reopening” the economy. Were the various lockdown orders worth the economic pain of slowing the spread of the virus? When, and how fast, should economies reopen?
These questions presume that SIP orders were the primary determinant in keeping consumers at home. However, using data on foot traffic at 2.25 million individual businesses across the United States (including 110 industry groupings), the authors find that while total foot traffic fell by 60 percentage points, legal restrictions explain only around 7 percentage points of this decline. In other words, people were staying home on their own, and when they did go shopping, consumers avoided larger, high-traffic businesses. Given the richness of their data set, described in detail in their accompanying paper, the authors are able to compare, for example, two similar establishments within a commuting zone but on opposite sides of an SIP order. In such a case, both establishments saw enormous drops in customer activity, but the one on the SIP side saw a drop that was only about one-tenth larger.
Interestingly, and further supporting the modest size of the estimated SIP effects, when some states and counties repealed their shutdown orders toward the end of the authors’ sample, the recovery in economic activity attributable to the repeal was comparable in size to the decline at imposition. Thus, the recovery is limited not so much by policy as by the reluctance of individuals to engage in social economic activity.
- Productivity's Components: An Example (2008-2016)
The world entered into the COVID crisis in the midst of an unexplained 15-year-long productivity growth slowdown, and the current decline of the world economy raises critical questions about the further trajectory of productivity growth. The authors consider the channels through which the crisis might shift the growth rates of productivity and output, whether up or down.
The authors note that measured productivity is likely to fall in the short run as workers are kept on companies’ payrolls while output declines. However, their concern is a more complete measure of productivity, one that goes beyond traditional inputs like capital and labor to include any residual growth in output (what economists call total factor productivity, or TFP). Broadly summarized here, the authors describe three components of economy-wide TFP and possible impacts of the pandemic:
- Within-firm productivity growth. Firms build trust among customers and knowledge capital among employees, and both are in danger as the pandemic persists and customer needs go unmet or employees are lost. In addition, higher taxes and/or inflation in the future, as well as trade restrictions, could hamper a company’s recovery.
- Between-firm reallocation (e.g., unproductive firms close and labor and capital shift to other firms). Small firms are likely to suffer most going forward and are more likely to close permanently. If these smaller firms are more innovative on average, economy-wide productivity growth could slow. Other firms, often larger ones that would otherwise have closed, will survive primarily through government programs. These “zombie” firms might prevent other, more productive, firms from entering the market.
- Productivity changes created by pure shifts of activity across sectors. Some sectors, like hotels and travel, may experience persistent drops in activity, while others, like healthcare and IT, may grow over time. The resulting reallocation of resources will have consequences for aggregate productivity, to the extent these sectors differ in productivity and expected productivity growth, and these differences will also occur across countries.
The authors acknowledge that long-term and, possibly, irreversible economic damage may occur from the COVID pandemic, and they urge policymakers to look beyond policies that protect existing businesses, and to enact policies that encourage productivity growth. Globalization, labor mobility, and small firms may all still fall victim to the crisis if the world does not succeed in reopening borders, refraining from trade and currency wars, and focusing on policies to boost productivity. On the upside, the broad adoption of new technologies (such as IT skills during the epidemic) and strong reallocation pressures may provide an independent boost to productivity as we come out of the crisis.
- Expected Dividend and GDP Growth from Dividend Futures
The authors use data from the aggregate equity market and dividend futures to quantify how investors’ expectations about economic growth across horizons evolve in response to the coronavirus outbreak and subsequent policy responses. Dividend futures, which are claims to dividends on the aggregate stock market in a particular year, can be used to directly compute a lower bound on growth expectations across maturities or to estimate expected growth using a simple forecasting model. As of June 8, the authors’ forecast of annual growth in dividends is down 9% in the US and 14% in the EU, and their forecast of GDP growth is down by 2.0% in the US and 3.1% in the EU. As a word of caution, the authors emphasize that these estimates are based on a forecasting model estimated using historical data. In turbulent and unprecedented times, there is a risk that the historical relation between growth and asset prices breaks down, meaning these estimates come with uncertainty.
The lower bound on the change in expected dividends is -18% in the US and -25% in the EU at the 2-year horizon. The lower bound is model-free and completely forward looking. There are signs of catch-up growth from year 4 to year 10. News about economic relief programs on March 26 boosted the stock market and long-term growth expectations but did little to increase short-term growth expectations. Expected dividend growth has improved since April 1 in both the US and the EU.
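The logic behind the model-free bound is that a dividend future’s price equals expected dividends discounted at a (non-negative) risk premium, so the growth rate implied by the futures price bounds expected growth from below. A stylized rendering, assuming geometric annualization (the paper’s exact construction may differ):

```python
def growth_lower_bound(futures_price, current_dividend, horizon_years):
    """Annualized lower bound on expected dividend growth implied by
    a dividend futures price relative to current dividends. With a
    non-negative risk premium, E[D] >= futures price, so the
    futures-implied growth rate is a lower bound on expected growth."""
    return (futures_price / current_dividend) ** (1 / horizon_years) - 1

# A 2-year dividend future priced at 64% of current dividends implies
# expected dividend growth of at least -20% per year over the horizon.
print(round(growth_lower_bound(64.0, 100.0, 2), 2))
```

The bound tightens as the risk premium shrinks, which is why the authors complement it with a forecasting model for point estimates.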
As of June 8, the expected return on the market has returned to the pre-crisis level. On June 8, the S&P 500 trades at $3,232, which is $64 lower than the average price between January 1 and February 19. This drop can largely be explained by the first 7 years of dividends, as they are down by a total of $72. As such, the distant-future dividends, the dividends beyond year 7, must have approximately the same value as before the crisis. If expected long-run dividends are the same as before the crisis, expected returns on the long-run dividends must therefore also be the same as before the crisis. However, interest rates have dropped substantially, which means the expected return in excess of the interest rate is higher than before the crisis.
- Spending Around Stimulus Payments
In response to the economic fallout of the COVID-19 pandemic, the US government has enacted the CARES Act, with over $2 trillion of stimulus measures. Amongst its various provisions, American households under certain income thresholds qualify to receive direct payments in the form of stimulus checks.* How did households respond to this cash infusion?
In updated research, the authors studied households’ consumption and spending responses to the stimulus checks along multiple dimensions, using high-frequency, real-time household financial transaction data. Observing 44,460 individuals across the US who received stimulus checks, the authors found that households responded rapidly, increasing spending by $0.29 per dollar of stimulus during the first 10 days of observation, primarily on food, non-durable goods, rent, and bill payments. Households with lower incomes, greater income declines, and lower levels of liquidity exhibit relatively stronger spending responses.
Household liquidity plays the most important role in determining spending behavior, with no observed spending response for households with relatively higher levels of bank balances and ready access to funds. Compared to the 2001 recession and 2008 Financial Crisis, the study found relatively little increase in spending on durable goods, with a number of potentially important downstream implications for the economic recovery.
These findings could inform policy formulation and help reduce the time to gauge impact between a policy’s enactment and its implementation. Likewise, further debate is warranted on the timely targeting of stimulus checks, their distribution, and intended effects in jump starting consumer spending to facilitate recovery.
*Individuals earning less than $75,000 receive checks worth $1,200, and married couples earning less than $150,000 receive $2,400; each qualifying child entitles the household to an additional $500 of direct payments. Single filers earning between $75,000 and $99,000 receive increasingly smaller checks, and those earning above $99,000 ($198,000 for couples) do not qualify for any stimulus check.
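The schedule in the note can be captured in a small function. The 5-cents-per-dollar phase-out rate is inferred from the $75,000–$99,000 range for a $1,200 check (the statute reduces the payment by $5 per $100 of income above the threshold); filing details are simplified here.

```python
def eip_payment(agi, married=False, children=0):
    """Stylized CARES Act Economic Impact Payment: $1,200 per single
    filer ($2,400 for couples) plus $500 per qualifying child, phased
    out at 5 cents per dollar of income above the threshold."""
    base = (2400.0 if married else 1200.0) + 500.0 * children
    threshold = 150_000 if married else 75_000
    phase_out = max(0.0, 0.05 * (agi - threshold))
    return max(0.0, base - phase_out)

print(eip_payment(60_000))   # below the threshold: full $1,200
print(eip_payment(87_000))   # halfway through the phase-out: $600
print(eip_payment(99_000))   # fully phased out: $0
```

Note that because the child amount enters before the phase-out, households with children remain eligible slightly beyond the $99,000 cutoff quoted for single filers without dependents.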
- Daily Price of Volatile Stocks (PVs)
Financial markets have fluctuated significantly as the COVID-19 epidemic has progressed. These fluctuations likely reflect both the anticipation of a steep drop in corporate earnings and a reassessment of the risk of business investment. It is important to separate these two factors because upward revisions in risk perceptions can themselves reduce investment, deepening and prolonging the recession.
To understand movements in risk perceptions relevant for the macroeconomy in near real-time, the authors employ the “price of volatile stocks” (PVSt)1, which is the book-to-market ratio of low-volatility stocks minus the book-to-market ratio of high-volatility stocks. In previous work, the authors showed that PVSt is low when perceived risk directly measured from surveys and option prices is high. Further, using time-series data from 1970 to 2016, the authors showed that when perceived risk is high according to PVSt, future real investment tends to be lower because the cost of capital is higher for risky firms.
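The measure as described can be sketched as follows. The quintile cutoff and equal weighting below are assumptions for illustration, and the data are toy values; the authors’ construction in the cited paper may differ in its details.

```python
def pvs(book_to_market, volatility, quantile=0.2):
    """Price of volatile stocks: mean book-to-market of the
    lowest-volatility stocks minus that of the highest-volatility
    stocks. A low value means risky stocks are cheap relative to
    safe ones, i.e. perceived risk is high."""
    pairs = sorted(zip(volatility, book_to_market))  # sort by volatility
    k = max(1, int(len(pairs) * quantile))
    low_vol_bm = [bm for _, bm in pairs[:k]]
    high_vol_bm = [bm for _, bm in pairs[-k:]]
    return sum(low_vol_bm) / k - sum(high_vol_bm) / k

# Toy cross-section: high-volatility stocks trading at high
# book-to-market (cheap) push PVS down, signaling elevated risk.
bm = [0.4, 0.5, 0.6, 1.2, 1.5]        # book-to-market ratios
vol = [0.10, 0.12, 0.15, 0.50, 0.60]  # return volatilities
print(round(pvs(bm, vol), 2))
```

Because both inputs are observable daily from market prices, the measure can track risk perceptions in near real time, which is what the figures discussed below exploit.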
Figure 1 shows a daily time series of the authors’ measure of perceived risk, PVSt, from 1970 through April 2020. It shows that the price of volatile stocks fell sharply – and hence perceived risk rose sharply – as news about COVID-19 hit US markets and households in March 2020. PVSt reached its low for the year on April 3, 2020, when it was down 2.6 standard deviations from its level at the start of 2020. While this decline is large, it is comparable to movements in risk perceptions in prior recessions, particularly the downturn following the dotcom bubble in the early 2000s. It is also much smaller than the move in risk perceptions during the financial crisis of 2008-2009. Estimates for the period 1970-2016 indicate that a move in risk perceptions of the size experienced from the beginning of the year until this trough has typically been associated with a drop in the natural real risk-free rate of 3.3 percentage points and a decline in the ratio of economy-wide capital expenditures to total assets of 0.91 percentage points (relative to a pre-2016 standard deviation of 1.16%).
Figure 2 provides a close-up view of PVSt and the aggregate stock market during the COVID-19 pandemic (February 14, 2020 through April 30, 2020). The figure shows that PVSt is useful for interpreting individual events during the COVID-19 crisis and often contains information that is distinct from the aggregate stock market. One thing that stands out from this figure is that the steep drop in the aggregate stock market at the end of February left PVSt almost completely untouched, implying that perceptions of risk had not changed significantly. In other words, the evolution of PVSt at the onset of the crisis suggests that investors initially believed there would be a short-term decline in earnings, but did not believe there would be an amplification effect from heightened risk perceptions to the aggregate economy. However, PVSt and the aggregate market began to drop in tandem around March 11, the day the WHO declared COVID-19 a pandemic and widespread international travel restrictions were imposed. One possible interpretation for this decoupling and recoupling is that COVID-19 initially appeared to affect only the short-term cash flows of internationally connected firms, whereas the spread of the virus and the associated policy measures imposed in mid-March affected the risk outlook for a much broader swath of the economy. These trends were in turn reflected in the prices of volatile stocks.
Another striking feature of Figure 2 is the large increase in PVSt that began on April 21, 2020, the day that the United States Senate passed the Paycheck Protection Program and Health Care Enhancement Act. The bill provided nearly $500 billion in additional funding to support the CARES Act, much of which was geared towards aiding small and medium-sized businesses. PVSt increased nearly 0.66 standard deviations between the time that the bill was passed in the Senate and when it was signed into law by President Trump on April 24. Interestingly, the market-to-book ratio of the aggregate stock market increased only 0.17 standard deviations over the same time period. The differential response of PVSt and the aggregate stock market to the passing of the bill is consistent with the authors’ previous interpretation that PVSt reflects perceptions of risk that are relevant for privately owned firms, which tend to be smaller and riskier than the larger, less volatile publicly traded firms that dominate the aggregate stock market.
1 As developed in Pflueger, C., E. Siriwardane, and A. Sunderam (2020). “Financial market risk perceptions and the macroeconomy.” Quarterly Journal of Economics, forthcoming.
- Reversing the Curve
As more countries, states, and municipalities begin to reopen their businesses and public spaces amid the ongoing COVID-19 pandemic, one constant refrain is the warning that we will end up back at square one, with the pandemic running its course and the death toll rising once again as everyone returns to normal behavior. But will they? How far might people go in practicing precaution on their own by adjusting their social and economic behavior, without government stay-at-home orders, and how will that affect the economy and the dynamics of the pandemic?
To address this question, the authors developed a simple model based on other recent research, which includes agents (people) who are aware of infection and death risks if they continue to leave their homes to work and to shop, among other activities. Faced with these risks to their own health, they will adjust their behavior. This is a key element of economic models, and is a feature that is not part of standard epidemiological models.
Crucially and in departure from other economic models, the authors assume that the economy is composed of sectors that differ in their infection probabilities. This heterogeneity is simply illustrated, for example, by people’s choice to eat a pizza delivered to their home vs. in a restaurant, or to work at home rather than in an office (if they are among those able to work from home). This heterogeneity matters. The way people choose to “consume” public experiences—whether work, worship, or entertainment—has a profound impact on infection rates.
Broadly summarized, when the authors run their model without heterogeneity in infection risk across sectors, economic activity declines 10%. However, the introduction of heterogeneity mitigates much of that decline. Likewise, the majority of deaths after the first year are avoided, compared to the homogeneous-sector version. Importantly, these results are realized without government intervention. One can think of these results as capturing some of the experience with Sweden’s less-restrictive approach to COVID-19 management. Better still, these results are indicative of the dynamics likely to unfold after re-opening: a modest rise in infection; a very persistent, but modest, decline in economic activity; and a substantial and prolonged shift across sectors, which labor markets need the flexibility to accommodate. This is far from a return to normal, but it is a reasonably optimistic outlook nonetheless.
What explains these outcomes? The authors suggest that infections may decline due to the re-allocation of economic activity that people will make on their own, and the resulting and longer-lasting shift between sectors. For the rather benign outcome in the model and for successful sectoral shifts, it is key that workers can adjust rather quickly to the changing labor market. Food servers can become delivery drivers. Former shop clerks find employment in Amazon warehouses. Artists provide entertainment online. Jobs lost in some sectors get partly offset by recruitment in others.
The authors acknowledge that labor markets do not function as smoothly as they assume in their model. The authors stress that their results are not definitive in and of themselves; models are approximations of reality that depend greatly on the parameters applied by researchers. In this case, the authors concede that the results may appear Panglossian.
However, one need not wear rose-colored glasses to recognize that private incentives can shape behavior during a health pandemic. Most importantly, allowing the economy to succeed in shifting sectoral activities in response to these choices is key for mitigating both the economic as well as the health impact. Consideration of such incentives and sectoral shifts could be important as governments around the world consider strategies to reopen public activities.
- Disclosure Policy: Detected Cases and Deaths in Seoul, South Korea
South Korea’s success in battling COVID-19 is largely due to its widespread testing and contact tracing, but its key innovation is to publicly disclose detailed information on the individuals who test positive for COVID-19. This new research reveals that public disclosure measures are more effective at reducing deaths than comprehensive stay-at-home orders.
The COVID-19 outbreak was identified in South Korea on January 13, and since then South Koreans have received text messages whenever new cases were discovered in their neighborhood, as well as information and timelines of infected persons’ travel. The authors combined detailed foot-traffic data in Seoul with publicly disclosed information on the location of individuals who had tested positive. The results reveal that public disclosure can help people target their social distancing, which proves especially helpful for vulnerable populations, who can more easily avoid areas with a higher rate of infection.
The authors estimate that over the next two years, the current strategy in Seoul will lead to a cumulative 925,000 cases, 17,000 deaths (10,000 for those 60 and older and 7,000 for ages 20 to 59), and economic losses that average 1.2 percent of GDP. In a model representing partial lockdown, the authors estimate the same number of cases, but deaths increase from 17,000 to 21,000 (14,000 for those 60 and older and 7,000 for ages 20 to 59) and economic losses increase from 1.2 to 1.6 percent of GDP.
Importantly, while death rates among older populations are significantly higher under lockdowns, those under 60 suffer economic losses twice as high, compared to South Korea’s current strategy.
In the absence of a vaccine, the authors conclude that targeted social distancing is much more effective in reducing the transmission of the disease while minimizing the economic cost of social isolation. However, they also note that these benefits come with a cost: disclosure of public information infringes upon the privacy of affected individuals. The authors anticipate the day when cost measures for privacy loss are available, after which a full cost/benefit analysis is possible.
- Two Steps to Encourage COVID-19 Tests and Quarantines
Testing for COVID-19 is only as good as compliance. If people don’t show up for testing, or if only symptomatic people show up, then the benefits of such a program will be lost, as “silent spreaders” will go undetected. Indeed, costs could increase under such a scenario if people are encouraged to re-engage in the economy under the false promise of such a testing program.
The question, then, is how to encourage healthy people to stand in line with, possibly, sick people, to undergo an uncomfortable test, and then return in two weeks to do it again, and for many weeks after that. The answer lies at the heart of economics—incentives—and the authors offer a unique suggestion: a COVID lottery (which they coin “Pandemillions”) that gives away large prizes every week to random test participants. On Sunday mornings, for example, states would notify individuals selected for testing that week, and those people would then have until the end of the week to get tested. A completed test would convert into a “ticket” in the lottery, with winners announced every Saturday night.
The benefits of widespread testing would be large, and the federal government could afford to fund a very lucrative prize pool. At $200 million per week, the annual cost of the lottery would be only about $10 billion, or roughly 0.5% of the cost of the CARES Act. As to implementation, while a federal lottery might be optimal, given that 45 states already manage lotteries, the best path forward might be to use existing state infrastructure.
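The cost figures in the passage follow from simple arithmetic, taking the CARES Act at roughly $2 trillion as described earlier in this brief:

```python
# Back-of-the-envelope check of the "Pandemillions" cost estimate.
weekly_prize_pool = 200e6             # $200 million in prizes per week
annual_cost = weekly_prize_pool * 52  # about $10.4 billion per year
cares_act_cost = 2e12                 # CARES Act, roughly $2 trillion
share_of_cares = annual_cost / cares_act_cost  # about 0.5%
print(f"${annual_cost / 1e9:.1f}B per year, {share_of_cares:.2%} of CARES")
```

The point of the calculation is scale: even a headline-grabbing weekly prize pool is small next to the fiscal response already enacted.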
For those who need incentive to quarantine once they test positive, the authors recommend a second plan: offer a $2,000 weekly payment for every American adult compelled to stay home, even if they are asymptomatic. Based on quarantining up to 20 million people this year, the cost would approach $80 billion, a large but still quite modest sum compared to the total costs of this pandemic.
Strong incentives cause strong reactions, and it is possible that some individuals would purposefully try to contract COVID-19 to receive stay-at-home payments; however, the authors believe this number would be sufficiently low and would not come close to outweighing the program’s significant benefits. The authors also acknowledge that while such payments would likely face political hurdles, the high returns from such a program—in morbidity and mortality reductions, and resources saved—would also prove politically attractive.
Absent a vaccine, which is at best a number of months out, the best way to safely reopen the economy is to establish a testing regimen for COVID-19 which ensures that all individuals—both symptomatic and asymptomatic—get tested on a regular basis.
- UI Benefit Replacement Rates
One provision of the CARES Act created an additional $600 weekly unemployment benefit to help workers losing jobs as a result of the COVID-19 pandemic. The authors use micro data on earnings together with the details of each state’s UI system under the CARES Act to compute the entire distribution of current UI benefits and show how replacement rates vary across occupations and states.
The authors find that 68% of unemployed workers who are eligible for UI will receive benefits that exceed lost earnings. The median replacement rate is 134%, and one out of five eligible unemployed workers will receive benefits at least twice as large as their lost earnings. The authors also show that there is sizable variation in the effects of the CARES Act across occupations and across states, with important distributional consequences. For example, the median retail worker who is laid off can collect 142% of prior wages in UI, while grocery workers are not receiving any automatic pay increases. Janitors working at businesses that remain open do not necessarily receive any hazard pay, while unemployed janitors who worked at businesses that shut down can collect 158% of their prior wage.
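The mechanics behind replacement rates above 100% are straightforward: a flat $600 supplement on top of a wage-proportional state benefit guarantees high replacement for low earners. A stylized sketch, where the 50% rate and $450 weekly cap are hypothetical placeholders (actual formulas vary by state):

```python
def ui_replacement_rate(weekly_wage, state_rate=0.5, state_cap=450.0,
                        supplement=600.0):
    """Replacement rate under the CARES Act: a stylized state UI
    benefit (a share of the prior wage up to a cap) plus the flat
    $600 federal supplement, divided by the prior weekly wage."""
    state_benefit = min(state_rate * weekly_wage, state_cap)
    return (state_benefit + supplement) / weekly_wage

# A $700/week worker: 0.5 * 700 + 600 = $950, i.e. about 136%
# of prior earnings replaced while unemployed.
print(round(ui_replacement_rate(700), 3))
```

Because the $600 is flat, the replacement rate falls mechanically with the prior wage, which is exactly the occupation- and state-level variation the authors document.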
After documenting these basic patterns, the authors explore how various alternative UI expansion policies would alter the distribution of replacement rates. They show how the parameters of various simple UI expansion policies shape the entire distribution of UI benefits across workers and thus provide a lens into how policy choices jointly affect liquidity provision, progressivity, and labor supply incentives.
- Optimal Targeted Closures for NYC
The spread of infectious disease has an important spatial component: When individuals from one neighborhood visit another one they can infect others or get infected. Closure of businesses and public places in a neighborhood could reduce such infection opportunities as well as the import/export of the disease from/to other neighborhoods. How should a city target closures to achieve an appropriate policy goal at the lowest possible economic cost, factoring in neighborhood spillovers and the differences among neighborhoods’ economic values?
To answer this question, the authors focus on the policy goal of reducing infections in all neighborhoods, and provide an optimization framework that delivers the optimal targeted closure policies. They then use mobile-phone data (from a period prior to lockdowns) to estimate individuals’ movements within NYC and, applying their framework, the authors reveal the following:
- First, targeted closures could achieve the aforementioned policy goal at up to 85% lower economic cost than uniform city-wide closures.
- Second, coordination among counties and states is extremely important. It may be infeasible for NYC to achieve the policy goals and curb the spread of the epidemic unless the neighboring counties (e.g., those in New Jersey) also impose appropriate economic closure measures.
- Third, the optimal policy promotes some level of economic activity in Midtown, while imposing closures in many neighborhoods of the city.
- Finally, contrary to likely intuition, the neighborhoods with larger levels of infections are not necessarily the ones targeted for the most stringent economic closure measures.
- COVID Cases, Lockdown, and Mobility
Using customized large-scale surveys, this work provides real-time estimates on the changing economic landscape following lockdowns. The authors find that consumer spending for a typical US household dropped by $1,000 per month, which corresponds to a 31% drop in overall spending. Households also spent substantially less on discretionary expenses and decreased their planned spending on durables, with an average drop in spending on durables of almost $1,000.
Strikingly, they find one of the largest drops occurring for debt payments. This result highlights the possibility of a wave of defaults in the next few months, which could ultimately affect the financial system, slow the economic recovery and explain the recent increase in loan provisions by major US banks.
In line with these negative outcomes at the individual level, households’ macroeconomic expectations have become far more pessimistic. Average perceptions of the current unemployment rate increased by 11 percentage points, with similar magnitudes for expectations of unemployment over the next three to five years, indicating that households expect the downturn to have persistently negative effects on the labor market. Consistent with this view, inflation expectations over the next twelve months dropped sharply on average while uncertainty increased. Current mortgage rate perceptions as well as expectations for the end of 2021 dropped on average by about 0.4 percentage points with even larger drops in average expectations over the next five to ten years.
The negative effect on long-run expectations suggests that the lower bound on nominal interest rates might be a binding constraint for monetary policymakers for the foreseeable future. Increased uncertainty at the household level and the large drop in planned spending point toward some form of liquidity insurance to curb the desire for precautionary saving and stimulate demand once local lockdowns are lifted.
Finally, to assess the economic damage that households attribute to the virus, the authors elicited information on the perceived financial situation of the survey participants and possible losses due to the coronavirus, both in income and wealth. Forty-two percent of employed respondents reported having lost earnings due to the virus with an average loss of more than $5,000. More than 50% of households with significant financial wealth reported having lost wealth due to the virus and the average wealth lost is at $33,000. This decline in wealth is putting further downward pressure on future consumption.
Using data from ADP, one of the world’s largest human resources management companies, to measure changes in the US labor market during the early stages of this “Pandemic Recession,” the authors find that paid US employment declined by about 22% between mid-February and mid-April 2020. This translates to a reduction in US employment of about 29 million workers as measured in the payroll data. In no prior recession since the Great Depression has US employment declined by a cumulative 2% during the first three months of the recession (Chart 1). Across all prior recessions since the 1940s, peak employment declines were never more than 6.5%. The US economy has already experienced a 22% decline in employment during the first month of this recession (Chart 2).
Among other important findings, the authors reveal that employment declines were disproportionately concentrated among lower-wage workers: 35% of all workers in the bottom quintile of the wage distribution lost their job, at least temporarily, during the first month of the recession. The comparable number for workers in the top quintile was only 9% (Chart 3). This implies that over 36% of the 29 million jobs lost during the first four weeks of this recession were concentrated among workers in the lowest wage quintile. Job declines were larger in service industries (such as leisure and hospitality) and in smaller firms, which disproportionately employ lower-wage workers (Chart 4).
The recession is having a disproportionate effect on small firms and lower-skilled workers: precisely those without the cash flow and savings to smooth consumption. The longer the recession persists, the greater the likelihood that lower wage workers may suffer the disproportionate brunt of the recession.
 ADP processes payroll for about 26 million US workers each month, representing the US workforce along many labor market dimensions. These sample sizes are orders of magnitude larger than most household surveys that measure individual labor market outcomes at monthly frequencies.
- Who Has Borne the Risk of Job Loss?
Social distancing policies have led to many workers losing their jobs, at least temporarily, and the burden of job loss has mostly fallen on economically vulnerable workers. New research reveals that employment losses are around four times larger for workers without a college degree, one and a half times larger for non-white workers, and five times larger for workers in the bottom half of the income distribution (see figure). This is related to the characteristics of the jobs these workers hold. Poor and economically disadvantaged workers are more likely to be employed in jobs that are less likely to be conducted from home. These jobs also tend to rank highly in terms of the amount of close physical interaction that occurs at work (e.g., a nail salon worker). Combined, these results imply that the workers who have been hurt most economically by the crisis are also at the highest health risk as they go back to work.
- Business Shutdown
This paper takes an early look at a large and novel small business support program that was part of the initial crisis response package, the Paycheck Protection Program (PPP).
First, we find no evidence that funds flowed to areas that were more adversely affected by the economic effects of the pandemic, as measured by declines in hours worked or business shutdowns. If anything, we find some suggestive evidence that funds flowed to areas less hard hit. The fraction of establishments receiving PPP loans is greater in areas with better employment outcomes, fewer COVID-19 related infections and deaths, and less social distancing.
Second, lender heterogeneity in PPP participation appears to be one reason why we find a weak correlation between economic declines and PPP lending. For example, we find that areas that were significantly more exposed to banks whose PPP lending shares exceeded their small business lending market shares received disproportionately larger allocations of PPP loans. Underperforming banks—whose participation in the PPP underperformed their share of the small business lending market—account for two-thirds of the small business lending market but only twenty percent of total PPP disbursements. The top-4 banks alone account for 36% of the total number of small business loans but disbursed less than 3% of all PPP loans.
Our results highlight the importance of banks as a conduit for public policy interventions. Measuring these responses is critical for evaluating the social insurance value of the PPP and similar policies.
- Size of the Indirect Effect of Reduced Commerzbank Lending
The COVID-19 pandemic initially led governments to shut down a few sectors, for example the service, hospitality, and travel industries. Huber’s 2018 study highlights that such disruptions can harm the entire economy, even if they initially affect only a few companies. To make this point, Huber shows that Commerzbank, one of Germany’s largest banks, cut lending to its German borrowers during the 2008-09 financial crisis. The lending disruption reduced the growth of companies that relied directly on loans from Commerzbank.
Importantly, the disruption also affected companies and employees that had no direct relationship with Commerzbank. Indirectly affected companies experienced spillover effects due to both a general decline in demand and a temporary lack of innovation at directly affected companies. When Commerzbank’s customers made job cuts, overall household consumption fell, which then affected revenue and employment at other companies. Further, declining research-and-development activities at directly affected companies spilled over to other companies, thus slowing overall productivity growth. The employment of indirectly affected companies remained low even beyond the duration of the initial lending disruption.
These findings may apply to the current economic shock due to the COVID-19 pandemic. For example, if directly disrupted companies fire workers, those workers will spend less, which will spill over to negatively affect other firms. Moreover, the economic harm of the current crisis may last longer than the actual disruption due to COVID-19.
- Truck Flows Among Provincial Chinese Capital Cities
The Chinese government ended the 76-day lockdown of Wuhan on April 8, 2020. Outside Wuhan, many local governments had already eased restrictions on movement and shifted their focus to reviving the economy. In this work, the authors document the post-lockdown economic recovery in China. The main findings are summarized as follows:
- Official statistics suggest a quick recovery in manufacturing, which is corroborated in non-official data on city-to-city truck flows (see Figure 1) and air pollution emissions (see Figure 2).
- Electricity consumption, retail sales and catering income suggest a much more persistent output decline in services. Business registration data also show less firm entry in services.
- There is huge cross-region heterogeneity, with the southeast region experiencing the strongest initial recovery, according to the authors’ data.
- Small businesses were hit hard, with February sales down 35% from 2019, and sales recovered only slowly in March. April will be the key month for determining the speed of the recovery.
- How Negative Supply Shocks Can Lead to Demand Shortages
Understanding the nature of a negative economic shock is key to getting the policy prescription right. After ensuring that households have enough short-term resources, policymakers are confronted with the following conundrum: Should the aim of policy be to encourage people to spend more, that is to provide stimulus, or should policy focus purely on providing forms of social insurance?
The authors’ key insight is that the coronavirus shock is a supply shock of a special nature, as it affects different sectors unevenly. The central argument of their work is that the coronavirus shock will likely cause a reduction in aggregate demand larger than the original reduction in labor supply, something the authors term a “Keynesian supply shock.” Their work describes two forces that propagate the shock from those it directly affects, or those in affected (or contact-intensive) sectors, to those in less affected sectors: complementarities across sectors and incomplete markets. In the first case, when people are restricted from spending on certain goods, like restaurants and events, they do not spend the same amount on other complementary goods and services, and there is less overall spending.
In the second case, the overall reduction in spending spreads to unaffected sectors because those who retain their jobs do not spend enough to prevent this occurrence (in economists’ parlance, the marginal propensity to consume of those in the unaffected sectors is less than those in affected sectors). Together, these two forces transform the original supply shock into a demand shock.
The authors’ findings pose challenges for policymakers, as a “typical” increase in government consumption may be less powerful in a pandemic shock. The reason is that government spending can only lift incomes in the unaffected sectors, not in the affected sectors, but it’s the workers in the affected sectors who have the highest propensity to consume, and they are exactly those who cannot benefit from an aggregate spending increase. On the other hand, fiscal stimulus can be desirable when combined with policies more targeted toward the workers in the affected sectors.
- Device Exposure is Down by Two-Thirds
Throughout the United States, large swathes of economic activity and social life have been paused due to the pandemic. Data based on smartphone movements reveal this abrupt shift and can be used to study—almost in real-time—how people are altering their behavior during the coronavirus pandemic. A team of economists from five different universities that includes Chicago Booth’s Jonathan Dingel has published indices derived from anonymized phone data to allow researchers to use this information.
One of the team’s indices describes a device’s exposure to other devices due to visiting the same commercial venue. This daily device exposure index (DEX) reports the average number of distinct devices that also visited any of the commercial venues visited by a device on that day. Nationwide, the DEX declined dramatically over the month of March. By late March, device exposure was about one-third the level typically observed in February.
Thanks to the smartphone data’s rich detail, device exposure can be measured on a daily basis for more than 2,000 US counties. While exposure is down by two-thirds on average, there is considerable variation in the degree of isolation across US cities. On April 3, the device exposure indices in New York City and Las Vegas were merely one-tenth their Valentine’s Day levels. By contrast, the DEX for Cheyenne, Wyo., declined by only 40%. Across metropolitan areas, the decline in device exposure was greater in cities where a larger share of jobs can be done at home.
While the correlation between reduced device exposure and a greater share of jobs that can be done at home does not establish a causal relationship, this finding illustrates just one of numerous questions that can be investigated using these exposure indices made available to the global research community by the team of economists. The data are available online at https://github.com/COVIDExposureIndices/.
Most states and cities in the US have shut all non-essential businesses in response to COVID-19. In this note, we argue that as policies are developed to “re-open” the economy and send people back to work, strategies for childcare arrangements, such as reopening schools and daycares, will be important. Substantial fractions of the US labor force have children at home and will likely face obstacles to returning to work if childcare options remain closed. Younger workers, who might be able to return to work earlier to the extent that they are less susceptible to the virus, are also more likely to require childcare arrangements in order to return to work.
Using 2018 data from the Census Bureau’s American Community Survey, we calculate the share of employed workers who are affected by childcare constraints. We focus on the civilian employed population older than 18.
The first row in Table 1 shows that 32% of that workforce has someone in their household who is under 14. Thus, 50 million Americans must consider childcare obligations when returning to work. Daycares and preschools might open sooner than primary schools, since they tend to have fewer children and thus less scope for disease transmission, so the remaining columns of Table 1 distinguish children under 6 and those 6-14 years old. For about 30% of the workforce with childcare requirements, all of their children are under the age of 6. Thus, opening daycares alone could address childcare obstacles for one in three constrained workers.
Of course, many workers with children at home are not sole caregivers. Workers who live in a household with another non-working adult – such as a partner who is not employed, a retired parent or in-law, or an older child above 18 who lives at home – can likely return to work while another household member addresses their childcare needs. The second row of Table 1 reports the share of all workers who live in a household with someone under 14 and no available caregiver. If non-working adults can assume household childcare responsibilities, 21% of the workforce would nonetheless have unaddressed childcare obligations.
Although 21% of the workforce will face some childcare burden when schools and daycares remain closed, some of them may resume work while other workers in their household address childcare needs. In particular, many workers with children live in households with other workers. Each household would potentially only need one adult to remain home with the children, freeing up the other adults to return to work. The third row of Table 1 shows that accounting for these childcare options leaves 11% of the workforce (or 17.5 million workers) facing major barriers to work if schools and daycares remain closed.
The White House and various other commentators have proposed a phased reopening of the economy in which initially only younger, less vulnerable workers return to work (https://www.whitehouse.gov/openingamerica/). Schools, daycares, and camps are proposed to open in later phases. Since older patients are more vulnerable to COVID-19, this would potentially balance the health risks for the most at-risk population while promoting economic activity. However, the obstacles to returning to work imposed by school closings are somewhat higher for the under 55 population, because 40% of these workers have a child at home. Table 1 shows that 14% (or roughly one in seven) of workers under 55 would likely face childcare-related obstacles to returning to work (even after accounting for the fact that in this scenario, workers over 55 in the household could then provide childcare). Under a policy where young workers return to work while schools remain closed, 35 million workers who are over 55 would not be able to return to work and another 16 million who are under 55 would be constrained by childcare obligations.
The obstacles that childcare imposes on workers during the COVID-19 crisis are similar across industries. Table 2 shows the key statistics for each broad industry category: the share of workers with childcare obligations and no within-household caregiver ranges only from 18% in transportation to 25% in education and health care.
Figure 1 depicts spatial variation in the share of workers with childcare obligations and no available caregiver in their household. While this figure is as low as 13% and as high as 33% for some commuting zones, the vast majority of regions are near the national average of 21%. Thus, addressing childcare obligations as part of “re-opening” strategies is an important consideration for policymakers across the United States.
These results suggest that childcare-related constraints imposed by school closings should feature prominently in discussions of reopening the economy. While there is scope for a large rebound in employment even if schools and daycares remain closed, the economy will remain 17 million workers short of normal employment in this scenario. Furthermore, many of those working when schools are closed will only be able to do so if a spouse or partner who would typically be working remains home instead. The longer school closures persist into the recovery of the economy, the greater will be the burden faced by those workers with young children and no obvious childcare options. We again note that we are making no attempt to evaluate any public-health benefits of school closures or make any assessment of when schools should be reopened. Public-health policies that mitigate the spread of the virus likely have high returns for the ultimate shape of any economic recovery. We instead simply note that discussions of returning to work ought to include discussion of returning to school.
Alon, Titan, Matthias Doepke, Jane Olmstead-Rumsey, and Michele Tertilt. “The Impact of COVID-19 on Gender Equality,” Covid Economics: Vetted and Real-Time Papers, Issue 4, April 14, 2020.
 We explicitly refrain from any evaluation of public-health considerations related to school closures since we have no expertise in this area. We instead seek to focus solely on measuring economic constraints that arise in a phased employment recovery. It is entirely possible that these constraints may be unavoidable for public-health reasons.
 Alon, Doepke, Olmstead-Rumsey and Tertilt (2020) use ACS data to compute a number of closely related statistics, but they focus on measuring household childcare burdens while we use employed workers as our unit of analysis and focus specifically on measuring the importance of childcare constraints for aggregate, regional, and industry employment.
- Estimated Paycheck Protection Program (PPP) Cost by Industry
The initial allotment for the Small Business Administration’s Paycheck Protection Program (PPP) was $349 billion, meant to cover primarily employee costs—including some funds for utilities, rent, and mortgage interest—for approximately eight weeks. However, many in Congress now deem this insufficient, and the Treasury Dept. has requested an additional $250 billion, bringing the potential total to $599 billion. This raises the question: How many applications could be submitted, and how big should PPP be? The authors calculate that maximum requests could total $720 billion (updated 4/16) if all small businesses in the US apply.
To make their calculations (see Paycheck Protection Program Calculation Tool online), the authors determined two pieces of information: the number of eligible businesses, and those businesses’ monthly payroll costs, including salaries, wages, retirement, and benefits. Eligible businesses include those with less than 500 employees, with an exception for larger businesses in the accommodation and food service sectors. In sum, the authors calculate about $3.4 trillion in total estimated payroll cost for the purposes of PPP that, when divided by 12 and multiplied by 2.5 to get the total eligible loan amount, comes to about $720 billion.
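The loan-amount arithmetic described above can be checked in a few lines. The $3.4 trillion payroll figure is the authors' estimate; the divide-by-12, multiply-by-2.5 step follows the PPP loan formula:

```python
# Authors' estimate of total annual payroll cost for PPP-eligible businesses
total_annual_payroll = 3.4e12  # dollars

# PPP loan size: 2.5x average monthly payroll
avg_monthly_payroll = total_annual_payroll / 12
max_total_loans = avg_monthly_payroll * 2.5

print(f"${max_total_loans / 1e9:.0f} billion")  # -> $708 billion
```

The back-of-the-envelope result is close to the roughly $720 billion cited, which the authors compute from more detailed payroll data.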
If Congress decides to increase the pool of funds to $600 billion, the PPP should be at least close to sufficiently funded to fulfill all application requests, mitigating the problems of the “first-come, first-served” design. However, it is also true that at $600 billion, Congress and taxpayers would not just fund a subset of small businesses in need, but would instead fund nearly the entire payroll for all small businesses for two months.
- How Long Will This Last? Fraction Who Believe Crisis Will End Before Each Date
Small businesses account for nearly 50 percent of US workers, and this new survey of nearly 6,000 firms reveals the financial fragility of many of those businesses and signals a cautionary note for policymakers, as most respondents expect the crisis to extend beyond the spring and well into the summer.
The late-March 2020 survey focused on assessing small businesses’ current financial status, the extent of temporary closures and laid-off employees, duration expectations and the impact on decision-making, and whether businesses planned to apply for CARES Act funding and how such a decision could impact closures and lay-offs. Broadly, the survey revealed the following:
- Disruption to US small businesses is severe, with 43% of the respondents temporarily closed. Employee reductions stood at 40% across all respondents. Regionally, mid-Atlantic states, including New York City, reported closures of 54% and layoffs of 47%. Industry responses varied widely, with service sector firms reporting employment declines over 50 percent.
- Many US small businesses are standing on financially shaky ground, with the median firm with expenses over $10,000 per month retaining only enough cash to last for two weeks. For 75% of respondents, there was only enough cash to cover expenses for two months or less.
- US small businesses are widely uncertain about when the crisis will end, with half expecting the crisis to persist into mid-summer, meaning that many firms expect this economic challenge to persist well beyond their available cash levels.
For policymakers, the following results are particularly salient:
- More than 13% of respondents did not plan to seek CARES Act funding because of application hassle, distrust that loans will be forgiven, and eligibility complexity.
- If the crisis extends beyond four months, many firms—especially many in the service industries—do not expect to remain viable.
- Extrapolating from the 72 percent of businesses that would apply for CARES Act funding, and assuming all businesses would request maximum loans (2.5 months of expenses), the total volume of loans from all US businesses would approach about $410 billion, beyond the $349 billion allocated in the CARES Act at the time of the survey.
- Varying Income Levels by County (2016)
Shelter-in-place policies reduce social contact and the risk of interpersonal COVID-19 transmission. Though the economic consequences of these policies are substantial, local non-compliance creates public health risks and may cause regional spread. Understanding what enhances or mitigates compliance is a first-order public policy concern.
Clarifying these mechanisms provides actionable insights for policy makers and public health officials responding to the COVID-19 pandemic.
In our paper, we first find a significant decline in population movement after local shelter-in-place policies were enacted. Second, an increase in local income enhances compliance. Third, tariff-induced economic dislocation and higher Trump vote shares in 2016 reduce compliance. Finally, exposure to slanted media reduces compliance, consistent with the impact of information sources that downplayed the danger of COVID-19.
- Estimated Reported Infections by County
The novel coronavirus outbreak was declared a national emergency in the US beginning March 1, 2020, with states imposing various levels of lockdown measures. By April 13, there were nearly 550,000 confirmed cases in the US, with deaths approaching 22,000. While this is clearly a major health crisis, the country is also facing a deep and possibly long-lasting economic recession. One crucial question looming over both the health and economic effects is how many people have actually contracted COVID-19 and the actual mortality rate; that is, while the number of confirmed cases is known, there are likely a large number of cases that have not been confirmed and, likewise, some deaths that have not been attributed to COVID-19.
To address this crucial knowledge gap, the authors have developed a unique strategy to estimate the likely real impact of the COVID-19 pandemic on the US. This strategy is based on the variation in travel from the epicenter of an outbreak to other locations that were not previously infected. Through a series of estimates based on known infection rates and expected rates of transmission, and incorporating the likely effect of travel from an epicenter of an outbreak to other areas, the authors estimate the percentage of unreported cases. The results are striking: for example, on March 13, across major metro areas, the authors estimate that on average only 4.16% of total infections were reported across the US with an eight-day reporting lag, meaning that for every reported case there were 23 unreported cases. Across model assumptions and time periods, the estimates range from 6 to 24 unreported cases per reported case.
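The implied ratio of unreported to reported cases follows directly from the estimated reporting rate; the 4.16% figure below is the authors' average estimate for March 13:

```python
reported_share = 0.0416  # authors' estimate: share of total infections that were reported

# For every reported case, how many infections went unreported?
unreported_per_reported = (1 - reported_share) / reported_share
print(round(unreported_per_reported))  # -> 23
```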
Finally, while the authors stress that their results are dependent on strong assumptions and reliable data, they believe their methodological strategy is a solid start that can fuel additional research.
The authors focus on three key variables: the employment-to-population ratio, the unemployment rate, and the labor force participation rate. Historically, the employment-to-population ratio and the unemployment rate are near reverse images of one another during recessions, as workers move out of employment and into unemployment. More severe recessions also sometimes lead to a phenomenon of “discouraged workers,” in which some unemployed workers stop looking for work. These workers are reclassified as “out of the labor force” by Bureau of Labor Statistics (BLS) definitions, so the unemployment rate can decline along with the labor force participation rate while the employment-to-population ratio shows little recovery.
The authors’ figures, based on survey data from Coibion et al. (2020), document the following three facts. First, the employment-to-population ratio has declined sharply from 60% down to 52.2% (Panel B). This decline in employment is equivalent to 20 million people losing their jobs and is larger than the entire decline in the employment-to-population ratio experienced during the Great Recession. Second, the unemployment rate rose from 4.2% to 6.3% (Panel A). While this increase is the single biggest discrete jump in unemployment over the last 15 years, it corresponds to only about one-third of the increase observed during the Great Recession. For comparison with the employment-to-population ratio, if all twenty million newly unemployed people were counted in the unemployment rate, it would have risen from 4.2% to 16.4%, the highest level since 1939. Third, the reason for the discrepancy between the two is that many of the newly non-employed report that they are not actively looking for work, so they count not as unemployed but as having exited the labor force. The labor force participation rate dropped from 64.2% to 56.8% (Panel C). The survey evidence suggests that 6 percentage points of the decline, and hence almost the entire decrease, can be explained by people moving out of the labor force into retirement.
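The counterfactual unemployment rate cited above can be reproduced with a rough calculation. The labor-force size here is an assumption chosen to be consistent with the cited figures, not a number from the paper:

```python
labor_force = 164e6        # assumed pre-crisis US labor force (approximate)
u_before = 0.042           # pre-crisis survey unemployment rate
newly_nonemployed = 20e6   # implied by the drop in the employment-to-population ratio

# If every newly non-employed person were counted as unemployed:
unemployed_before = u_before * labor_force
u_counterfactual = (unemployed_before + newly_nonemployed) / labor_force
print(f"{u_counterfactual:.1%}")  # -> 16.4%
```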
- Time Paths Under Baseline Parameters
The typical approach in the epidemiology literature is to study the dynamics of the pandemic (infections, deaths, and recoveries) as functions of exogenously chosen diffusion parameters, which are in turn related to various policies, such as partial lockdowns of schools and businesses and other mitigation measures. We use a simplified version of these models to analyze how to optimally balance the fatalities induced by the epidemic against the output costs of the lockdown policy. The planner’s objective is to minimize the present discounted value of fatalities while also limiting the output costs of the lockdown.
In our baseline parameterization, conditional on a 1% fraction of infected agents at the outbreak, the possibility of testing, and no cure for the disease, the optimal policy prescribes a lockdown starting two weeks after the outbreak, covering 60% of the population after one month. The lockdown is kept tight for about a full month and is then gradually withdrawn, covering 20% of the population three months after the initial outbreak. The output cost of the lockdown is high, equivalent to losing 8% of one year’s GDP (or, equivalently, a permanent reduction of 0.4% of output). The total welfare cost is almost three times bigger due to the cost of deaths. The intensity of the lockdown depends on the gradient of the fatality rate as a function of the infected, the value of a statistical life, and the availability of testing. We find that an antibody test, which makes it possible to exempt the immune from lockdown, improves welfare by about 2% of one year’s GDP.
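The class of epidemiological models underlying this analysis can be sketched in a few lines. Below is a minimal discrete-time SIR simulation with a stylized lockdown schedule loosely mirroring the one described above; all parameters and the lockdown dates are illustrative assumptions, not the paper's calibration:

```python
def simulate_sir(days=180, beta=0.2, gamma=0.1, i0=0.01):
    """Discrete-time SIR model with a time-varying lockdown share.

    A lockdown covering a share `lock` of the population reduces
    transmission by (1 - lock)**2, since both parties to a meeting
    must be out of lockdown for transmission to occur.
    """
    s, i, r = 1.0 - i0, i0, 0.0  # susceptible, infected, recovered shares
    path = []
    for t in range(days):
        # Stylized policy: lock down 60% of people from day 14 to day 44,
        # ease to 20% until day 90, then lift entirely.
        if 14 <= t < 44:
            lock = 0.60
        elif 44 <= t < 90:
            lock = 0.20
        else:
            lock = 0.0
        new_infections = beta * (1 - lock) ** 2 * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        path.append((s, i, r))
    return path

path = simulate_sir()
peak_infected = max(i for _, i, _ in path)
```

The planner's problem in the paper then chooses the lockdown path to trade off fatalities (a function of the infection path) against the output lost while people are locked down.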
- Equity Returns for U.S. Life Insurance Sector During COVID-19 Crisis
The stock prices of life insurance companies declined sharply during the onset of the COVID-19 crisis. To illustrate this, the figure reports the drawdown, defined as the percent decline from the maximum to the minimum of the cumulative return index, from January 2 to April 2, 2020. The drawdown of a portfolio of variable annuity insurers is -51% during this period. This is a substantially larger drawdown than the S&P500 (-34%), the financial sector more broadly (-43%), and rivals the airline industry (-62%). Some of the most affected companies experienced a drawdown of -65% or more (e.g., AIG, Brighthouse, and Lincoln). While this apparent fragility may be concerning in general, the solvency of life insurance companies that safeguard a large share of long-term savings and insure health/mortality risks is particularly important during a pandemic.
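The drawdown statistic used in the figure, the percent decline from the running maximum of a cumulative return index to its subsequent trough, can be computed as follows (the index levels below are hypothetical, for illustration only):

```python
def drawdown(index_levels):
    """Largest peak-to-trough percent decline of a cumulative return index."""
    peak = index_levels[0]
    worst = 0.0
    for level in index_levels:
        peak = max(peak, level)                      # running maximum
        worst = min(worst, (level - peak) / peak)    # decline from that peak
    return worst

# Hypothetical daily index levels for a portfolio
print(drawdown([100, 120, 90, 60, 80]))  # -> -0.5, i.e., a -50% drawdown
```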
It may be tempting to conclude that life insurers experienced large losses due to the high death toll of the coronavirus, but this is not necessarily the case: annuities represent a large fraction of insurers’ liabilities, and insurers in fact profit from those contracts if policyholders die unexpectedly early. Instead, the fragility is the result of various insurance products that come with minimum return guarantees. The traditional role of life insurers is to insure idiosyncratic risk through products like life annuities, life insurance, and health insurance. With the secular decline of defined benefit pension plans and Social Security around the world, life insurers are increasingly taking on the role of insuring market risk through minimum return guarantees. In the US, life insurers sell retail financial products called variable annuities that package mutual funds with minimum return guarantees over long horizons. Variable annuities have become the largest category of life insurer liabilities, larger than traditional annuities and life insurance.
From the insurers’ perspective, minimum return guarantees are difficult to price and hedge because traded options have much shorter maturities than the guarantees. Imperfect hedging leads to risk mismatch that stresses risk-based capital when the valuation of existing liabilities increases with a falling stock market, falling interest rates, or rising volatility.
The fragility is not new to the current crisis. During the 2008 financial crisis, many insurers including Aegon, Allianz, AXA, Delaware Life, John Hancock, and Voya suffered large increases in variable annuity liabilities ranging from 27% to 125% of total equity. Hartford was bailed out by the Troubled Asset Relief Program in June 2009 because of significant losses on their variable annuity business. Risk mismatch between general account assets and minimum return guarantees leads to negative duration and negative convexity for the overall balance sheet and poses a challenge for life insurers in the low interest rate environment after the financial crisis. As a consequence, the stock returns of US life insurers have significant negative exposure to long-term bond returns after the financial crisis.
The persistent low-rate environment in combination with declining interest rates, widening credit spreads, and increased volatility will be a challenge to the balance sheet of life insurers in the foreseeable future.
- Share of Jobs That Can Be Done from Home by GDP
Building on previous work to determine how many US jobs can be performed at home, the authors produce new estimates for 86 other countries. Their analysis reveals a clear positive relationship between income levels and the shares of jobs that can be done from home. For example, while fewer than 25 percent of jobs in Mexico and Turkey could be performed at home, this share exceeds 40 percent in Sweden and the United Kingdom. The striking pattern suggests that developing economies and emerging markets may face an even greater challenge in continuing to work during periods of stringent social distancing.
The authors conduct their analysis by merging their classification of whether each 6-digit SOC (Standard Occupation Classification) can be done at home based on the US O*NET surveys with the 2008 edition of the international standard classification of occupations (ISCO) at the 2-digit level.
The figure plots the authors’ measure of the share of jobs that can be done at home in each country against its per capita income. They compute the jobs share using the most recent employment data available from the International Labour Organization (ILO) after restricting attention to countries that report employment data for 2015 or later. The income measure is GDP per capita (at current prices and translated into international dollars using PPP exchange rates) in 2019, obtained from the International Monetary Fund. They note that their classification assesses the ability to perform a particular occupation from home based on US data and that the nature of an occupation likely varies across economies with different income levels.
- Social Distancing Behavior and Political Polarization — Trump Vote Shares
Since the purpose of social distancing is to reduce the spread of a virus, in this case COVID-19, it matters greatly whether people believe in the need to take such precautions. If people infer lower risk from the same set of facts (e.g., population density, case counts and deaths), they may impose unnecessary health risks on others. Given the political divide in the US and how individuals consume news and information, the authors of this new research examine whether political partisanship affects the risk perceptions of individuals during the ongoing COVID-19 pandemic of 2020.
The authors use a number of measures to explore the effects of political partisanship on pandemic risk perceptions and, among other revealing insights (regarding, for example, pandemic-related internet searches), they find that while a higher incidence of confirmed COVID-19 cases results in a reduction in daily distance traveled, this effect is muted in counties that favored Donald Trump in the 2016 presidential election. For example, a doubling of the number of confirmed COVID-19 cases in a county reduces average daily distance traveled by 4.75 percentage points. However, for that same doubling in cases, a one-standard-deviation increase in Trump vote share mutes the effect by 0.5 percentage points. Similar patterns emerge when the authors examine the change in daily visits to non-essential businesses: residents in counties that favored Trump took more non-essential trips.
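As an illustrative sketch of how these two estimates combine, the snippet below reads the quoted coefficients as a simple linear interaction; the function and variable names are our own, not the authors’, and the linear form is an assumption for illustration only.

```python
# Illustrative sketch only: treats the quoted point estimates as a linear interaction.
BASE_EFFECT = -4.75   # pp change in daily distance traveled per doubling of county cases
TRUMP_MUTING = 0.5    # pp offset per one-sd increase in county Trump vote share

def distance_response(case_doublings, trump_share_sd):
    """Percentage-point change in average daily distance traveled (hypothetical form)."""
    return case_doublings * (BASE_EFFECT + TRUMP_MUTING * trump_share_sd)

print(distance_response(1, 0))   # county at the average Trump share: -4.75
print(distance_response(1, 1))   # one sd above average: effect muted to -4.25
```

So a county one standard deviation above the average Trump vote share reduces its travel by about a tenth less than an otherwise similar county, per the estimates quoted above.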
One of the provisions of the new stimulus bill, Pandemic Unemployment Assistance, extends unemployment benefits to self-employed workers, including gig workers. This is very different from the response in the Great Recession, when UI was not extended to the self-employed. While today’s provisions are not completely unprecedented—they are largely based on the 1974 Disaster Unemployment Act—nothing like this has ever happened at this scope and scale. The authors’ new research on gig work provides some insight into how many gig workers might be newly eligible for Unemployment Insurance.
In research examining administrative tax records, Koustas and his co-authors find that around 11% of the workforce engages in some type of gig work. If we define gig work as all independent contracting/freelancing, most gig work is not new at all (see Figure 1). While gig work has grown over the last few years, almost all of the recent growth has come from work mediated via new online platforms, the largest of which are ridesharing platforms.
Around 60% of gig workers do this work as a “side-gig,” holding a “regular” job as a traditional employee. This share rises to 81% in the online platform economy. For these workers, unemployment benefits eligibility will almost certainly be determined based on their main, non-gig job. Still, millions of gig-only workers might now be eligible for benefits, represented by the yellow line in Figure 1 below.
While gig work in the online platform economy is concentrated in urban areas, the highest concentration of gig work is actually in more rural areas of the plains and Southern states, reaching 20% or more of all work in some counties (see Figure 2). These geographic patterns are important because implementation and eligibility verification for the new UI benefits will be left to the states.
As a result of the scale of the current crisis, as well as the lack of precedent and federal guidance on how to verify gig and self-employment income, state governments are likely to face novel challenges that will mean delays and barriers for workers eligible for benefits.
- Lease Amendment for Rent Relief
With mandated shutdowns of most non-essential businesses, the great majority of small businesses in the United States are under serious economic strain. As rents come due, many will fail to make their payments, resulting in mass defaults. This harms not only the tenants, our small business community, but also the landlords who value and rely on these long-term relationships. In typical times, landlords would work with tenants on alternatives before moving forward with eviction proceedings, but those processes can be time-consuming and expensive.
While the Coronavirus Aid, Relief, and Economic Security (CARES) Act offers forgivable loans to help small businesses cover their expenses, millions of businesses may not survive the time it takes for those loans to arrive. A customizable, one-page lease addendum, drafted by the authors with legal and business input, provides a simple tool for appending to and modifying any commercial lease. The authors recommend that tenants pay only 10 percent of their usual rent during the relief period, with a further recommendation that 90 percent be deferred and 10 percent permanently forgiven by the landlord.
For more information visit: https://centerforrisc.org/lease
- Characteristics of Workers Who Generally Cannot Work from Home
Absent a vaccine or widespread testing, “social distancing,” which requires employees in many jobs to work from home, is the best policy option to reduce the spread of COVID-19. This suggests that returning to work will likely occur more slowly for jobs that require a large degree of proximity to other individuals, such as jobs performed in closely arranged cubicles. So, who are the workers who do not have the opportunity to work from home and, therefore, are at greater risk of infection?
Building on recent work that describes the types of jobs that allow for working at home and merging multiple datasets, the authors of this new research compare the characteristics of individuals in occupations that cannot be done from home with those of individuals in occupations that can. Individuals in occupations that cannot be done from home are:
- more economically vulnerable,
- less likely to have a college degree,
- less likely to have health insurance,
- more likely to be nonwhite,
- more likely to work at a small firm,
- more likely to rent, rather than own, their home,
- and more likely to have been born outside the United States.
An understanding of how individuals vary across occupations, and the likely impact of such strategies as social distancing, is important for policymakers considering how to best target economic policies designed to assist workers.
 See related fact on this page and BFI White Paper in this series, “An SEIR and Infectious Disease Model with Testing and Conditional Quarantine”
 See related fact on this page and BFI White Paper in this series, “How Many Jobs Can Be Done At Home?”
- Average Daily Household Spending in 2020
In a new study, the authors use de-identified data from a non-profit Fintech to study how US household spending responded to the COVID-19 crisis. Households dramatically changed their spending as COVID-19 spread. As cases began to spread in late February, spending increased sharply, indicative of households stockpiling goods in anticipation of a higher level of home-production, an inability to visit retailers, or shortages. Total spending rose by approximately half between February 26 and March 11, when a national emergency was declared and as cases grew throughout the country. There is also an increase in credit card spending, which could indicate borrowing to stockpile goods. Between the imposition of a national emergency and many states and cities issuing shelter-in-place orders starting on March 17, there are elevated levels of grocery spending. These patterns continue through the month of March.
The authors use the rich dataset to characterize heterogeneity across spending categories, demographics, income groups and partisan affiliation. There are very sharp drops in restaurants, retail, air travel, and public transport in mid to late March. The decrease in spending was not consistent across all categories, e.g., grocery spending increased, as did food deliveries. Despite increases in some categories, total spending dropped by approximately 50%.
Men stockpile slightly less, and families with children stockpile more than other households. Younger households stockpile later than other households. There is little heterogeneity across income, although the sample is skewed toward lower-income individuals. Cell phone records indicate differences in social distancing between political groups: individuals in states with more Trump voters were much more likely to move around in mid and late March. Republicans stockpiled more than Democrats, spending more on groceries in late February and early March. Republicans were also spending more in retail shops and at restaurants in late March, which may reflect differences in beliefs about the epidemic’s threat or differential risk exposure to the virus.
- Welfare Effects of Closing Non-essential Businesses
Government officials around the world have ordered businesses shut and families to stay in their homes except for essential activities. This fact estimates the opportunity costs of lockdown relative to a normally functioning economy.
National income accountants have found that adding a nonwork day to the year reduces the year’s real GDP by about 0.1 percent. Adding a nonwork day to a quarter would therefore reduce the quarter’s unadjusted real GDP by about 0.4 percent. Extrapolating from this finding, removing all 62 or 63 working days from a quarter would reduce its GDP by roughly 25 percent. In other words, if seasonally adjusted GDP for 2020-Q2 would otherwise have been $5.5 trillion at a quarterly rate (see Table), then changing all of that quarter’s working days to the functional equivalent of a weekend or holiday would reduce the quarter’s GDP to $4.2 trillion. Applying the same approach to 2020-Q1, with a lockdown in place for one-eighth of the quarter, 2020-Q1 real GDP (in 2020-Q2 prices) would be $5.4 trillion. The quarter-over-quarter growth rates of seasonally adjusted real GDP, expressed at annual rates, would therefore be -10 percent in Q1 and -63 percent in Q2.
Bottom line: Given these and other facts, even negative 50 percent would be an optimistic projection for the annualized growth rate of US GDP in 2020-Q2 (assuming nonessential businesses stay closed over that time), and even that large figure may understate the true effect, which could total nearly $10,000 per household per quarter.
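The back-of-envelope arithmetic above can be reproduced in a few lines. This is a sketch using the rounded figures from the text, so the results differ slightly from the quoted -10 and -63 percent.

```python
PER_NONWORK_DAY = 0.004        # one nonwork day cuts quarterly unadjusted real GDP by ~0.4%
WORKING_DAYS_PER_QUARTER = 62.5
BASELINE_Q2 = 5.5              # counterfactual 2020-Q2 GDP, $ trillions at a quarterly rate

full_lockdown_loss = PER_NONWORK_DAY * WORKING_DAYS_PER_QUARTER   # ~25% of the quarter
q2_gdp = BASELINE_Q2 * (1 - full_lockdown_loss)                   # ~$4.1T (text rounds to $4.2T)
q1_gdp = BASELINE_Q2 * (1 - full_lockdown_loss / 8)               # lockdown covers 1/8 of Q1

# Quarter-over-quarter growth, expressed at an annual rate
q2_annualized = (q2_gdp / q1_gdp) ** 4 - 1                        # roughly -64%
print(full_lockdown_loss, round(q2_gdp, 2), round(q2_annualized, 2))
```

The small gaps between these values and the quoted figures come from rounding at intermediate steps in the original calculation.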
- Physicians & Surgeons — Surge Clinician-Shifts (Per Week Per 100k)
Epidemiological models predict that COVID-19 will generate extraordinary demand for medical care, raising questions about whether the US healthcare system has sufficient capital (ventilators and ICU beds) and labor (doctors, nurses and other healthcare workers) to provide needed care. To gauge the surge capacity of the US healthcare workforce, the authors calculate how much additional care could be provided if clinicians increased their workloads to 60 hours per week. They use data from the 2015-2017 American Community Survey, which surveys 1% of the US population each year, and records workers’ occupation and weekly hours.
The table below shows national-level statistics, with a focus on three occupations: physicians, registered nurses, and respiratory therapists, who provide intubation and ventilation management for COVID-19 patients with breathing difficulties. The US has 237 physicians per 100,000 people, who work the equivalent of 4.3 12-hour shifts per week, and thus provide 1,022 clinician-shifts per 100,000 people per week. If physicians increased their capacity to 60 hours, or five 12-hour shifts, per week, they could provide an additional 163 clinician-shifts, or 16% more care. Registered nurses provide a baseline of 2,111 clinician-shifts per 100,000 people per week. Because they work fewer hours at baseline, they could increase their capacity by an additional 1,276 clinician-shifts per 100,000 people or 60% by working five shifts per week. Respiratory therapists’ surge capacity is proportionally similar.
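The physician numbers above follow from a simple calculation, sketched below. The helper function is our own formulation, and small differences from the table’s 163 extra shifts come from the rounding of the 4.3-shift baseline.

```python
def surge_capacity(workers_per_100k, baseline_shifts, surge_shifts=5):
    """Extra clinician-shifts per 100k people per week, and the % increase.

    Assumes a surge workload of five 12-hour shifts (60 hours) per week.
    """
    baseline = workers_per_100k * baseline_shifts
    extra = workers_per_100k * surge_shifts - baseline
    return extra, extra / baseline

# Physicians: 237 per 100,000 people, averaging ~4.3 twelve-hour shifts per week
extra, pct = surge_capacity(237, 4.3)
print(round(extra), f"{pct:.0%}")   # close to the table's 163 extra shifts, 16% more care
```

The same function applied to registered nurses, who work fewer baseline shifts, yields the much larger proportional surge (about 60%) reported above.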
Surge capacity varies substantially by region. Physician surge capacity, measured in clinician-shifts per 100,000 people per week, is nearly twice as large in the Northeast as in the Midwest or Deep South. Surge capacity for registered nurses is highest in the Midwest and lowest in the Southwest. Respiratory therapist surge capacity is highest in the Great Plains and the South. The Southwest has relatively low surge capacity for all three occupations.
Some clinicians have the training to care for COVID-19 patients. Others could be cross-trained to provide this care. Even clinicians who are not appropriate for cross-training can fill in for coworkers who have been shifted to COVID-19 care, as could retired workers who have training and experience but have higher COVID-19 mortality risk. As some states have already started doing, easing licensing restrictions can give hospitals the flexibility to better cope with this unprecedented spike in demand.
 Ferguson, Neil M., et al. March 16, 2020. “Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand.” London: Imperial College COVID19 Response Team.
 The authors choose 60 hours because this is the average amount that physicians report working per week during the ages when they are in training. This training is notorious for requiring long hours, but these hours are apparently manageable for a period of months or a few years.
 The authors restrict their analysis to those working in hospitals and physicians’ offices, as these industries are most relevant for COVID-19 care.
 Data on additional occupations are shown in the Appendix.
 E.g., https://malegislature.gov/Bills/191/S2615 and http://www.op.nysed.gov/COVID-19Volunteers.html
- Share of Jobs That Can Be Done from Home
To evaluate the economic impact of “social distancing,” one must determine how many jobs can be performed at home, what share of total wages are paid to such jobs, and how the scope of working from home varies across cities and industries. By analyzing surveys about the nature of people’s jobs, the authors classified whether that work could be performed at home. The authors then merged these job classifications with information from the Bureau of Labor Statistics on the prevalence of each occupation in the aggregate, as well as in particular metropolitan areas and industries.
This analysis reveals that 37% of US jobs can plausibly be performed at home. Assuming all occupations involve the same hours of work, these jobs account for 46% of all wages (occupations that can be performed at home generally earn more). As the accompanying map indicates, there is significant variation across cities. For example, 40% or more of jobs in San Francisco, San Jose, and Washington, DC, can be performed at home, compared with fewer than 30% in Fort Myers, Grand Rapids, and Las Vegas. There are also large differences across industries. A large majority of jobs in finance, corporate management, and professional and scientific services can be performed at home, whereas very few jobs in agriculture, hotels, or restaurants can do so.
 Feasibility of working from home was based on two surveys from the Occupational Information Network (O*NET).
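The aggregation behind these shares can be sketched as follows. The occupation records below are hypothetical stand-ins for the merged O*NET/BLS data, so the resulting shares are illustrative rather than the 37% and 46% reported above.

```python
# Each record: (can be done from home?, employment, mean annual wage) -- hypothetical data
occupations = [
    (True,  100, 90_000),
    (False, 150, 35_000),
    (True,   50, 70_000),
    (False, 100, 45_000),
]

total_emp = sum(emp for _, emp, _ in occupations)
total_wages = sum(emp * wage for _, emp, wage in occupations)

# Employment-weighted share of jobs that can be done at home
job_share = sum(emp for wfh, emp, _ in occupations if wfh) / total_emp
# Wage-weighted share (assumes, as the text does, equal hours across occupations)
wage_share = sum(emp * wage for wfh, emp, wage in occupations if wfh) / total_wages

# Because work-from-home occupations pay more, the wage share exceeds the job share
print(f"{job_share:.0%} of jobs, {wage_share:.0%} of wages")
```

The same two weighted sums, computed within each metro area or industry instead of in aggregate, produce the cross-city and cross-industry variation the authors report.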
- What Share of Total US Employment is in Most Exposed Sectors?
A large number of businesses are mostly shut down for public health reasons; others face greatly diminished demand or are likely to shut down in the near future. Using data from the Bureau of Labor Statistics Current Employment Statistics by detailed NAICS industry codes, we can measure how many people work in these most exposed businesses. Six of the most directly exposed sectors are: Restaurants and Bars; Travel and Transportation; Entertainment (e.g., casinos and amusement parks); Personal Services (e.g., dentists, daycare providers, barbers); other sensitive Retail (e.g., department stores and car dealers); and sensitive Manufacturing (e.g., aircraft and car manufacturing). In total, these sectors account for just over 20% of all US payroll employment, so shutdowns of these sectors on their own will lead to massive declines in employment. These declines will be partially offset by increased hiring in grocery stores, package delivery, and the like, but that is unlikely to do much to dampen the blow.
This will likely get much worse if these shutdowns persist for multiple months and to the extent that they start to spill over substantially into other sectors like construction and broader manufacturing. Policy measures to reduce the depth and long-run effects of the recession should focus on 1) limiting the spread of the virus itself through direct health spending and allowing for effective social distancing, such as paid sick leave, expanded unemployment insurance and providing the tools for businesses with lots of in-person contact to idle; 2) providing liquidity so that households in shutdown industries can continue to shelter at home, eat, and not face devastating declines in their financial conditions. These policies will limit the long-run harm of the recession and also reduce the spillovers into less directly affected industries. Note that providing liquidity also helps with allowing for social distancing and the first policy goal.
Footnote: NAICS Classification: Restaurants and bars: 7223-7225. Travel and Transportation: 4811,4812, 4853, 4854, 4859, 4881,4883, 7211. Personal Services: 6212, 8121,8129. Entertainment: 7111, 7112, 7115, 7131, 7132, 7139. Other sensitive retail: 4411, 4412, 4421, 4422, 4481, 4482, 4483,4511,4512, 4522, 4531, 4532, 4539, 5322, 5323, 4243, 4413, 4543. Sensitive Manufacturing: 3352, 3361, 3362, 3363, 3364, 3366, 3371, 3372, 3379, 3399, 4231, 4232, 4239, 3132, 3141, 3149, 3152.
How can we understand today’s enormous increase in UI claims at the onset of the COVID-19 epidemic? Given how quickly the situation has moved, a large increase in UI claims was expected; in a slower-moving crisis, by contrast, the weekly flows into UI increase slowly as the stock of UI claimants balloons. To put things in perspective, we can go back to the Great Recession and accumulate UI claims in excess of what we would normally expect. The chart below shows that new UI claims in one week now correspond to all new UI claims during the first six months of the Great Recession.
These statistics reflect public health policy aimed at slowing the spread of the disease. In terms of the labor market, if they also represent workers on temporary layoff, with their jobs kept intact and income support, we may see a V-shaped recovery. If, on the other hand, they represent workers who have become truly unemployed, with their jobs terminated and little income support, this will be a painful, slow, L-shaped recovery. As Ganong and Noel note elsewhere in these facts, UI claims may even undershoot the fraction of workers who would be eligible to claim.
- Growth in Industrial Value-Added (NBS), Truck Flows among Provincial Capital Cities
On January 23, the Chinese government locked down the city of Wuhan (Hubei Province). In subsequent days, similar measures were taken in other cities in Hubei and throughout China. This note offers a preliminary gauge of the effect of the measures taken to protect public health on economic activity in China. We make use of three data sources. First, some official data on industrial output already exist. Second, we use data on trucking flows to measure the flow of goods across China. Third, Baidu Map data allow us to estimate the effect on services and worker movements within China.
We begin with official data provided by China’s National Bureau of Statistics (NBS). The most recent data (as of March 23, 2020) are from February 2020. Figure 1 shows that industrial value added fell by 4.3% and 25.9% in January and February of 2020, respectively, on a year-on-year basis. If the counterfactual growth in the absence of the epidemic is 5.7%, the average growth rate in 2019, the slump is even more dramatic.
An alternative source of data on industrial output is shipments of goods across Chinese cities. We have data from a private trucking company, G7, that provides logistical services to truck drivers. G7 has real-time GPS data from two million trucks, accounting for about 10 percent of all trucks operating in China. We aggregated the movement of trucks in and out of each provincial capital by day. Figure 2 plots the daily truck flows between provincial capital cities, with the beginning day of the year normalized to one. The decline in truck flows before the Wuhan lockdown captures the slowdown associated with the coming Chinese New Year. Strikingly, the truck data suggest that goods flows between Wuhan and the other provincial capital cities have remained at a very low level and have not recovered at all since the lockdown.
The next data we show are flows of people within and between cities. Here, we use indices of movements of people provided by Baidu, based on “location-based services” (LBS) in Baidu Map. Figure 3 plots within-city travel intensity, with the beginning day of the year normalized to one. Panels A and B plot the data for 2019 and 2020, respectively. The red bar in Panel A marks the 2019 Chinese New Year. The black bar in Panel B marks the Wuhan lockdown, which came two days before the 2020 Chinese New Year and immediately precedes the free fall of within-city travel in Hubei. The index dropped by more than half within a three-day window and remained low for six weeks, only picking up recently, in mid-March. The indices outside Hubei picked up more rapidly and have almost returned to their early-January levels.
The movement of people across Chinese cities was more severely affected, as Figure 4 shows. Travel to and from cities in Hubei was nearly frozen. Cross-city travel not involving Hubei cities also declined sharply, though to a lesser extent. By mid-March, cross-city travel outside Hubei had fully recovered to its early-January level.
In sum, the economic impact of the lockdown on China is large, severe, and perhaps still mounting, despite the massive economic and financial policies rolled out in a timely fashion by authorities in Beijing. China faces a daunting challenge for its economic recovery, especially because the deteriorating pandemic situation across the globe is bringing China’s export sector to an almost complete halt and could make it difficult for Chinese firms to access critical inputs provided by firms outside of China.
 View related white paper, “Dealing with a Liquidity Crisis: Economic and Financial Policies in China during the Coronavirus Outbreak.”
- Flight to T-Bills/Cash, Dollar is King
When the market is calm, the term structure of the Treasury yield curve tends to be upward sloping, as investors expect to be paid more for lending over longer horizons. But on March 9, when the first market-wide halt was triggered by the coronavirus outbreak, the term structure flattened dramatically as investors responded to stock market turmoil by turning to long-term government bonds. During the second and third market-wide halts on March 12 and March 16, with a liquidity crisis looming, investors started scrambling for cash, i.e., the government debt with the shortest maturity. As a result, short-term Treasury Bills (T-Bills) that can be quickly converted to cash became highly favored by investors, raising their prices relative to long-term Treasuries and bending the entire yield curve upward again. This flight to T-Bills also explains the recent striking simultaneous fall of stocks, commodities, and long-term bonds.
The situation worsened further on March 18, when the stock market halted for the fourth time in this sequence, steepening the upward-sloping yield curve. The upward slope in this dire situation, however, is driven by surging demand for US dollars from market participants, ranging from companies to funds to sovereigns, potentially to pay off their US-dollar-denominated debts and other contractual obligations. This dramatic increase in demand for US currency is reflected in Figure 2, which plots the soaring dollar index (DXY) against other major currencies. Note that USD/JPY rose too, even though Japan has been widely praised for its success in containing the virus during this time. This demand for dollars is behind the Federal Reserve’s recent aggressive expansion of its dollar swap lines with several major central banks.
 We have taken the 3-month OIS spread out of the entire yield curve to eliminate any mechanical level shift caused by the (expected or realized) federal funds rate movement on a given day. (Indeed, the federal funds rate was cut on March 15.) Also, the upward slope is not due to rising inflation expectations; during this period the breakeven inflation rate (a market-based measure of expected inflation: the spread between nominal bonds and inflation-linked bonds such as TIPS) declined slightly.
- The Current Pandemic and Policy Responses are Driving Market Volatility
As the novel coronavirus (COVID-19) spread around the world, equities plummeted and market volatility rocketed upwards. In the United States, recent volatility levels rival or surpass those last seen in October 1987 and December 2008 and during the Great Depression, raising two key questions: 1) what is the role of COVID-19 developments in driving market volatility, and 2) how does this episode compare with historical pandemics, including the devastating Spanish Flu of 1918-20?
Employing automated and human readings of newspaper articles dating to 1985, the authors find no other infectious disease outbreak that had more than a minimal effect on US stock market volatility. Reviewing newspapers back to 1900, the authors find no contemporary newspaper account that attributes a large daily market move to pandemic-related developments, including the devastating Spanish Flu pandemic, which killed an estimated 2% of the world’s population. In striking contrast, news related to COVID-19 developments is overwhelmingly the dominant driver of large daily US stock market moves since February 24, 2020.
While the severity of COVID-19 explains some of the market’s volatile response, the authors find this answer incomplete, especially since similar—or worse—fatality rates 100 years ago had comparatively modest effects on markets. The authors offer three additional explanations:
- Information about pandemics is richer and is relayed much more rapidly today.
- The modern economy is more interconnected, including the commonplace nature of long-distance travel, geographically expansive supply chains, and the ubiquity of just-in-time inventory systems that are highly vulnerable to supply disruptions.
- And behavioral and policy reactions meant to contain spread of the novel coronavirus, including adoption of social distancing, are more widespread and extensive than past efforts, and have a more potent effect on the economy.
- Approximate Overhead Costs by Industry for Private Firms
The graph displays an estimate of overhead costs ($1.16 trillion total) for all non-financial S-corporations based on aggregate data from tax returns. Overhead costs are meant to include required expenses for firms, like interest, rents, utilities, maintenance, and so on. They do not include payments to workers, nor profits for shareholders, nor new capital expenditures.
Three points deserve note. First, overhead costs are important for private firms (approximately 14% of total revenues or 38% of gross profits). Second, we can estimate such costs relatively easily using information from past tax returns, which points toward feasible policy solutions designed to help firms cover these costs quickly during the coronavirus crisis. Third, aggregate overhead costs are especially important in retail and wholesale trade. These industries have many small private firms likely to be hardest hit by the crisis.
Source data are aggregates from the SOI corporate sample for the tax year 2014, aged to 2018 using the growth of nominal GDP. The year 2018 is the latest year for which tax returns would be readily available to the IRS to implement a policy.
 S-corporations likely account for between 1/4 and 1/3 of all overhead among non-financial private business, which includes partnerships, sole proprietorships, and private C-corporations.
- Survey of Business Uncertainty (March 9 - 20, 2020)
While the effect of the COVID-19 virus on financial markets has been apparent for weeks—US equities fell 30% from February 21 to March 20—we are still months away from realizing the full economic effect. However, the recent Survey of Business Uncertainty (SBU) portends a sharp drop in business activity in 2020. Moreover, business pessimism grew from March 9 to March 20, while the survey was in the field.
When asked directly about the impact of coronavirus developments in mid-March, firms see a 6.5 percent negative hit to their sales revenues in 2020. Comparing what firms said about their overall sales outlook in March to what they said in February yields a very similar drop in expected sales revenue. Further, firms’ uncertainty about their own sales growth over the next year rose 44 percent from February to March.
In partnership with Steven Davis of the University of Chicago Booth School of Business and Nicholas Bloom of Stanford University, the Federal Reserve Bank of Atlanta has created the Atlanta Fed/Chicago Booth/Stanford Survey of Business Uncertainty (SBU). This innovative panel survey measures the one-year-ahead expectations and uncertainties that firms have about their own employment, capital investment, and sales. The sample covers all regions of the U.S. economy, every industry sector except agriculture and government, and a broad range of firm sizes.
- Receipt of Unemployment Insurance by Unemployed Workers
Most unemployed workers in the United States do not usually receive unemployment insurance (UI). In 2019, only 1 in 4 unemployed workers received UI benefits, because of eligibility rules and barriers to program participation. In normal times, receipt of UI benefits requires: 1) proof that the worker was laid off because of changes in labor demand (“good cause”), 2) proof that the worker is searching for a job, and 3) a sufficient work history. In addition, there are usually several administrative hurdles that laid-off workers need to clear to claim benefits.
Although these requirements lead to low UI recipiency throughout the US, some states’ UI systems are particularly ill-equipped to address the coming increase in layoffs. In North Carolina, for example, only 1 in 10 unemployed workers receives UI benefits. However, no state is well-prepared. Even in the states that are doing relatively well, like Pennsylvania, fewer than 1 in 2 unemployed workers receive UI benefits.
- Change in Electricity Consumption in Italy Since February 21
With Americans largely self-isolating amid concerns about COVID-19, some of the hardest-hit areas are already seeing electricity demand begin to weaken. It is useful to review what has happened to power demand in Italy, which some say is about 11 days ahead of the US trajectory of the virus. Compiling regional grid data and adjusting for weather changes reveals that power demand has plunged in Italy since the middle of February.
On Friday, February 21, life in Italy was going on largely as normal. The following day, the Italian government began to institute quarantine measures. By Monday, power demand began to slow. Since the national lockdown on March 10, national power demand has fallen more than 28% compared to demand just prior to the quarantine measures.
Power demand could be a real-time indicator of the more widespread impacts on the Italian economy. Also, what is happening in Italy could point to what the United States could expect in the coming weeks as states issue tighter restrictions on daily life. When there is a sharp shock to the economy, other indicators like employment may lag in reflecting the impact. This is because laying off workers is often seen as a last resort as companies start by taking other measures like ramping down production or adjusting maintenance schedules. Conversely, electricity demand shows the more immediate change and is a broad measure of economic activity. This was on display during the last recession in the United States. US power demand began to fall a month before the official start date of the recession according to the National Bureau of Economic Research—a date that was determined after an additional year of data had been collected. As policymakers today are considering which countermeasures may be in order to buffer the economic effects of coronavirus, a real-time indicator of the economy’s strength is of the utmost importance.
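The weather adjustment described above can be approximated with a simple regression sketch. This is illustrative only — the data below are synthetic and this is not the analysts' actual method or dataset; it merely shows why adjusting for temperature matters before attributing a demand drop to the lockdown.

```python
# Sketch: weather-adjust power demand by regressing pre-quarantine demand on
# temperature, then measure the post-quarantine shortfall relative to the
# weather-predicted baseline. All data here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily pre-lockdown data: temperature (C) and demand (GW).
temp_pre = rng.uniform(5, 15, 30)
demand_pre = 40 - 0.5 * temp_pre + rng.normal(0, 0.3, 30)  # colder -> more demand

# Fit demand = a + b * temperature on the pre-period.
X = np.column_stack([np.ones_like(temp_pre), temp_pre])
coef, *_ = np.linalg.lstsq(X, demand_pre, rcond=None)

# Post-lockdown observations: warmer weather AND a genuine activity shock
# (a ~28% shortfall is built into this synthetic series).
temp_post = rng.uniform(8, 18, 10)
demand_post = (40 - 0.5 * temp_post) * 0.72

predicted = coef[0] + coef[1] * temp_post
shortfall = 1 - demand_post.mean() / predicted.mean()
print(f"weather-adjusted demand shortfall: {shortfall:.0%}")
```

Without the temperature adjustment, a warm spell could masquerade as (or mask) an economic shock; the regression residual isolates the activity component.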
When trade costs are low, creative destruction among firms increases, jobs are reallocated accordingly, and productivity increases. Analyzing trade patterns between the United States and Canada before and after the Canada-US Free Trade Agreement (CUSFTA) of 1988, the authors describe key facts about the flow of jobs across firms, and how these flows are affected by trade policy. These facts can be distilled to two key points:
- Large job flows. Over five-year periods from 1973-2012, the average job creation and destruction rate in Canadian manufacturing is about 30 percent. The average job creation rate in US manufacturing is also about 30 percent, and the average US job destruction rate is about 5 percentage points higher. These large rates of job creation and destruction suggest that creative destruction—firms innovating on the products of other firms—is an important feature of the economy. Innovating firms gain jobs, while the firms whose products are displaced lose jobs.
- Trade is a big driver of creative destruction. This is particularly evident in Canada, which has a much smaller economy than the US, so shifts in trade policy have larger aggregate impacts there. For example, Table 1 shows job creation and destruction rates in Canada before and after CUSFTA. Job losses increased from about 25 percent to 32 percent, while job gains from exports doubled from about 9 percent to 18 percent. In the US, the job destruction rate increased about 6 percentage points after CUSFTA (Table 2), but the US was also affected by increased imports from China during that period, so these results cannot be attributed primarily to CUSFTA. Notably, all of the US job destruction was driven by large firms. Finally, productivity improves as trade grows and innovations occur.
One important implication of this research is that policymakers should consider the role of innovation—and the flow of cross-border ideas—when conducting trade policy.
- Figure 1: The Evolution of Average Employment Relative to VC Funding Date
Note: The observations are relative to average employment (normalized to 1) for VC-funded startups in the year of first VC funding, t=0.
Not every new business venture hits the big time; indeed, most begin small and stay small through their lifecycle. Critically, for the aggregate economy, most new firms also do not make breakthrough innovations that spur productivity growth beyond their business. What sets the game-changers apart from the pack? A number of factors can put a new venture on a path to growth, including the presence of a patent or a trademark, R&D activity, and initial firm size.
However, this research adds another factor to the mix: It turns out that venture capital (VC) backing during the early stages of a start-up is a key ingredient of firm success. More than that, the authors find that such firms are also key contributors to aggregate innovation and productivity growth; that is, these individual firms introduce technological advances that not only benefit the firms’ bottom line, but that also disperse into the broader economy.
Figure 2: The Evolution of the Average Quality-Adjusted Patent Stock Relative to VC Funding Date
Note: The observations are relative to the average patent stock (normalized to 1) for VC-funded firms in the year of VC funding, t=0.
Following are the key empirical observations of this research:
- Like all start-ups, VC-backed firms are subject to the slings and arrows confronting fledgling businesses—many fail and many remain small. However, VC-backed firms are much more likely to grow and attain “superstar” status than non-VC-backed firms. Further, such firms are increasingly dominating markets within their industries.
- The relationship between a VC and an entrepreneur matters—a lot. And those relationships do not begin randomly: VCs select the most promising startups to support. The authors’ empirical analysis shows strong evidence of assortative matching between entrepreneurs and financiers (including banks and others); firms with promising innovation and growth prospects are more typically funded by VCs.
- Further, not all VCs are created equal: those with more experience and with higher funding capabilities tend to ensure greater success for start-ups. Start-ups backed by more experienced VCs develop, on average, more innovative technologies. Moreover, as measured by patent citations, these technologies have the largest positive impact on the rest of the economy.
The authors’ empirical analysis revealed the following results:
- Employment at VC-funded firms grows, on average, by about 475 percent over the time the VC is involved with the firm, compared with employment growth of about 230 percent for non-VC-funded firms.
- Similarly, VC-funded firms experience much higher growth in patent stock: VC-funded firms’ patent stock grows by about 1,100 percent vs. about 440 percent for non-VC-funded firms.
The US looks very different now than it did in 1965, in ways that make a single Medicare program for everyone less efficient and less financially sustainable.
- First, medical technology has advanced by leaps, improving health and extending lives, but at mounting cost. For example, a heart attack that would have killed a patient in 1965 can now be successfully treated, but with an average hospital stay costing $20,000. While rich and poor could thus afford similar health care in 1965 because treatment was simpler and less expensive, the cost of providing everyone with all available treatment has skyrocketed as medical technology has evolved.
- Second, while top tax rates have fallen since the 1960s, average overall marginal tax rates have increased. These higher marginal tax rates come at a cost that goes beyond the actual revenues raised: they change the decisions and investments made throughout the economy, exerting a drag on economic activity (dubbed “deadweight loss”). This means the economic toll of financing new health benefits has become much larger.
- Finally, income inequality has risen substantially, and people with different incomes may want to devote a different share of resources to health care. Higher income households might opt for a generous, comprehensive benefit—but that would eat up an enormous share of overall resources available to lower income households. This raises the social cost of having a single, uniform plan. Forcing a generous plan on low-income households would make them worse off than a combination of a less generous plan with more generous other social insurance programs, while forcing a more basic plan on high-income households would prevent them from spending resources on health care that they value, and might in fact slow the development of new, life-saving medical technologies.
With these trends likely to continue, covering everyone with a uniform generous insurance plan will be increasingly challenging. An alternative would be to provide a more basic benefit to everyone—one with good financial protection and coverage for services with substantial health benefits, but with limited or no coverage for expensive, lower-value services. Higher income people could pay to add on coverage of those lower-value services.
In a healthy market economy, new businesses form every year and others fail. This business dynamism ensures that resources, including labor, are allocated to their most efficient use. Since 1980, though, and especially since 2000, business dynamism in the US has been declining.
There is currently a heated debate about the impact of market concentration and declining business dynamism on the US economy, and whether the two are related. This research finds that the key consideration in resolving this question is the degree of competition within markets, and the relative position of leading and following firms. Of the various factors that shape those competitive relationships and influence the level of business dynamism, the one with the greatest impact is knowledge diffusion, or the degree to which following firms learn from leaders. This phenomenon accounts for at least half of the decrease in business dynamism.
While the authors refrain from offering explicit policy guidance and make a case for further research, they do discuss the strategic use of patents since 2000, which may be a restraining influence on knowledge diffusion. For example, in 1980, 35 percent of patents were produced by the largest 1 percent of firms; by the 2010s, that share had risen to 60 percent. Also, the secondary market for patents has evolved to favor large firms, with the top 1 percent buying 65 percent of patents in the resale market, as opposed to 30 percent in 1980. Some of those transactions fit the description of killer acquisitions, whereby large firms buy a patent not to incorporate its new technology, but to put the patent on a shelf, thus squelching the patent’s competitive benefits. If there is a role for policymakers in addressing the decline in business dynamism, it likely does not entail traditional issues like tax rates and subsidies, but necessitates a close examination of the secondary market for patents.
- Figure 1: The Spatial Distribution of Employment at Foreign Firms
Notes: The two figures display spatial variation in employment at foreign-owned firms observed in the tax data for the workers sample of interest. In the first figure, the share of workers employed at foreign-owned firms is plotted in 2001 for each commuting zone. In the second figure, changes from 2001 to 2015 in the share of employment at foreign-owned firms are plotted by commuting zone.
Local governments often try to lure foreign multinationals to their cities and counties. Employing a novel dataset, the authors investigate the direct effects that foreign multinationals have on their own employees, as well as the indirect effects that these firms have on local businesses and their employees.
Direct Effects: In total, the wages paid by foreign multinationals are 25 percent greater than those paid by domestic firms in the same industry and region. However, this difference may reflect the fact that foreign multinationals tend to hire high-skilled workers. Studying workers who move across firms in their data, the authors show that the same worker earns 7 percent more in wages when moving from a domestic firm to a foreign multinational. In the aggregate, this 7 percent premium is not trivial: roughly $34 billion annually in US wages, or about 0.6 percent of all private-sector wages, is paid as a premium by foreign multinationals.
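As a rough consistency check, the headline numbers above imply some aggregate magnitudes. This is a back-of-envelope sketch: the premium rate, the $34 billion total, and the 0.6 percent share are the reported figures; the two dollar totals computed below are implied by them, not reported by the authors.

```python
# Back-of-envelope check of the reported direct-effect magnitudes.
premium_rate = 0.07            # same-worker wage premium at foreign multinationals
total_premium = 34e9           # aggregate premium, USD per year (reported)
premium_share_private = 0.006  # premium as a share of all private-sector wages (reported)

# Implied base wage bill at foreign multinationals (wages absent the premium):
base_foreign_wages = total_premium / premium_rate

# Implied total US private-sector wage bill:
total_private_wages = total_premium / premium_share_private

print(f"implied foreign-multinational base wage bill: ${base_foreign_wages / 1e9:.0f}B")
print(f"implied total private-sector wages: ${total_private_wages / 1e12:.1f}T")
```

The implied private-sector wage bill of roughly $5.7 trillion is in the right ballpark for the study period, which suggests the reported figures hang together.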
Indirect Effects: What happens to domestic workers and firms when foreign multinationals enter a commuting zone? The broad answer is that employment and wages increase, and there is overall value added (sales minus the cost of goods sold) for private firms. These positive effects are highest for firms in the tradable goods sector and among those domestic firms with more than 100 employees.
The accompanying maps in this Economic Fact show the distribution of foreign multinationals in the US in 2001 and where their employment grew over time (Figure 1). An accompanying figure presents the wage gains when moving from the average domestic firm to the average foreign multinational by country of foreign ownership (Figure 2).
Since 1970, lower-income households have been more likely than middle-income households to live downtown, while middle-income households typically live in the suburbs. At the same time, at the top of the income distribution, households are again more inclined to live downtown. This has long been the case and is illustrated by the U-shaped curve in Figure 1. However, as the blue line in Figure 1 reveals, something new has made that U shape more pronounced over time: There are more wealthy households, and they are moving downtown.
What are the effects of this phenomenon? To answer that question, the authors built a model that explains spatial sorting (or where people choose to live) within a city. This model has two key features: households make residential choices based on their income, and neighborhoods change as those households rearrange themselves. For example, wealthier households not only make choices based on public amenities (like parks and schools), but also on proximity to private amenities (like restaurants and entertainment venues).
Figure 1: Downtown Residential Income Propensity
Note: Figure uses Census data on family income.
If wealthier people move to a particular neighborhood in growing numbers, the quality of both types of amenities is likely to improve—public amenities because of increased property tax revenue, and private amenities because households have more to spend. Regarding private amenities, developers in the authors’ model build neighborhoods based on household demands, which results in differentiated neighborhoods from which households can choose. Households weigh the costs of living (housing, taxes, and commuting, for example) against the benefits, including public and private amenities. If households value high-end restaurants and proximity to the opera hall, and they can afford the relatively higher cost of living, then in this model they will choose to live in those neighborhoods. Likewise, households that value such amenities differently will choose to live elsewhere.
When it comes to the question of whether and how rising incomes of the rich can explain neighborhood change in downtowns between 1990 and 2014, the model offers a clear answer: The rising rich are primarily responsible for the changing sorting patterns by income. An influx of high-income households increases the relative demand for high-quality neighborhoods, which puts upward pressure on housing prices. This upward pressure on housing prices affects other downtown neighborhoods, presenting poorer households (who are mostly renters) with a choice: Stay in their neighborhoods and pay higher rent for amenities that they don’t necessarily value, or move to the suburbs. The rich not only get richer, but there are more of them and they enjoy a better lifestyle, while the poor, who find their income stretched by rising housing costs or who are forced to move, experience a drop in well-being.
The promise of the American dream is about the possibility of upward mobility; namely, that anyone, regardless of where they were born and what class they were born into, can achieve success on their own terms. Together with the recent dramatic rise of income inequality, US cities have experienced a steady increase in residential segregation by income that challenges this ideal.
This research focuses on the interconnection between inequality and residential segregation and the work of Raj Chetty and Nathaniel Hendren, who have estimated the effects of exposure to better neighborhoods on children’s future earnings. Fogli and Guerrieri use these micro estimates to study the macroeconomic implications of these neighborhood effects. They show that residential segregation significantly amplifies the increase in inequality in an economy where technological progress increases the skill premium.
Figure 1: Inequality and Segregation across US Metros
To determine this result, the authors calibrate their model using salient features of the US economy in 1980 and the micro estimates of neighborhood exposure effects. Then they study the response of the economy to a “skill-biased technical change” shock—that is, a change in technology that increases the productivity of high-skilled jobs and, hence, the earnings of more educated workers. This is what happened in the US economy during the 1980s, and it is considered one of the primary reasons for the widening income gap in the following decades.
Figure 2: Inequality—Counterfactual with Random Relocation
Note: This figure compares the response of inequality to the skill premium shock in the baseline model (yellow) to the response of the economy when families are randomly re-located between the two neighborhoods every period after the shock (light yellow). The figure shows that segregation accounts for 18% of the increase in inequality one period after the shock, that is, between 1980 and 1990, and for 28% of the increase in inequality over the whole period between 1980 and 2010.
The main contribution of this research is to quantify how much of the subsequent increase in inequality is due to the presence of neighborhood effects and the resulting residential segregation. To this end, the authors compare the response of the benchmark model to the response that would arise if families were randomly re-located across neighborhoods and the segregation channel was muted. The authors find sizeable results: segregation accounts for 28 percent of the increase in inequality in response to skill-biased technical change between 1980 and 2010. The more that skill premia drive disparity in wages, the more certain neighborhoods will continue to gain an advantage, as children in those neighborhoods are better positioned to learn, adapt, and earn more than children in poorer neighborhoods. Skill premia act like a widening wedge, driving future generations of rich and poor children further apart.
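The counterfactual comparison behind that 28 percent figure reduces to simple arithmetic on the two model runs. In this sketch the two inequality increases are hypothetical placeholders (the paper's model outputs are not reported here); only the resulting 28 percent share matches the reported decomposition.

```python
# Sketch of the segregation-contribution decomposition: compare the rise in
# inequality in the benchmark model to the rise when families are randomly
# relocated (segregation muted). The increase values below are hypothetical.
baseline_increase = 0.25        # rise in inequality, benchmark model (hypothetical)
counterfactual_increase = 0.18  # rise with random relocation (hypothetical)

# Share of the inequality increase attributable to segregation:
segregation_share = 1 - counterfactual_increase / baseline_increase
print(f"share of inequality increase due to segregation: {segregation_share:.0%}")
```

The logic is standard for model-based decompositions: whatever increase survives with the channel shut off is attributed to other forces, and the remainder to the channel itself.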
- Figure 1: Extensive vs. Intensive Margin Growth of Top Firms
Note: The left panel shows non-parametric regression of ∆ log # of MSAs, Counties or Establishments of top 10% firms relative to all firms on ∆ log employment share of top 10% firms, both from 1977-2013. The thin solid line is a 45° line. The right panel shows non-parametric regression of ∆ log employment per MSA, County or Establishment of top 10% firms relative to all firms on ∆ log employment share of top 10% firms, both from 1977-2013.
In recent decades, spurred in part by developments in information and communication technologies (ICT), along with important advances in management practices, the efficiencies long present in manufacturing have emerged within wholesale, retail, and service (or non-traded) industries. Firms in these sectors have adopted new methods that allow them to deliver similar products across space. However, as this research reveals, while some of these firms have grown to dominate particular sectors in terms of employment, their share of employment in the overall economy has remained stable.
The authors document the following five facts:
- Rising concentration within sectors is evident only among top firms in three industries—services, wholesale, and retail—where the employment share of the top 14 percent of firms increased from 67 percent to 73 percent between 1977 and 2013, and not in sectors such as manufacturing, where concentration has actually decreased.
- Concentration is driven by expansion into new local markets, and leads to decreasing employment per establishment among top firms.
- While employment per establishment may fall, total employment rises substantially in industries with rising concentration, even among smaller firms. Technological and managerial advances, in other words, are not preventing competition but are rather intensifying its effects.
- This new industrial revolution has driven increasing specialization among the top firms in non-traded sectors, meaning that while these firms are focusing on certain industries, they are also leaving others.
- Finally, while the growth of such firms is increasing concentration in terms of employment within sectors, it is not resulting in similar concentration across the aggregate economy.
This last fact is key, especially given the recent focus and concern about the rise of so-called “superstar” firms. Many fear that these firms, which have achieved relative dominance in certain sectors, also have an outsized influence on the total economy. However, this work rebuts that view. Essentially, while this growth has led to increased concentration within certain sectors, there is no change in concentration among the broader economy’s top firms.
To address the questions of how to better inform potential aid recipients of program benefits, and whether assistance programs are effective, the authors examined the impact of various interventions on the number and type of eligible elderly individuals in Pennsylvania who enroll in SNAP, the only social safety net program that is virtually universally available to low-income households.
The authors randomly placed 60,000 individuals aged 60 and over into three equally sized groups—an information-only treatment, an information-plus-assistance treatment, and a status quo control group—and found the following:
1. Information alone increases enrollment, while information plus assistance is even more successful, but at a higher cost per enrollee. The information-only group applied at a rate of 11 percent (at a cost of $20 per enrollee) and the information-plus-assistance group at 18 percent ($60 per enrollee), while the status quo control group applied at just 6 percent.
2. Information decreases targeting. Marginal applicants and enrollees from either intervention are less needy than the average enrollees in the control group. The average monthly SNAP benefit (which declines with net income) is 20% to 30% lower among enrollees in either intervention arm relative to enrollees in the control group. Additionally, relative to the control group, applicants and enrollees in either intervention group are in better health, more likely to be white, and more likely to have English as their primary language. Importantly, the 70 percent of individuals who did not respond to the interventions and remained largely unenrolled were likely more needy, suggesting the necessity for new and differently targeted interventions.
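The take-up figures in point 1 translate into a quick back-of-envelope count of marginal applicants. This is a sketch: the application rates are the reported ones, and the per-arm group size simply follows from splitting the 60,000-person sample into three equal arms.

```python
# Back-of-envelope: extra applicants generated by each intervention arm,
# relative to the status quo control group.
control_rate, info_rate, assist_rate = 0.06, 0.11, 0.18
group_size = 20_000  # 60,000 individuals split across three equal arms

marginal_info = (info_rate - control_rate) * group_size      # info alone
marginal_assist = (assist_rate - control_rate) * group_size  # info + assistance

print(f"information only: {marginal_info:.0f} extra applicants")
print(f"information + assistance: {marginal_assist:.0f} extra applicants")
```

The assistance arm generates more than twice as many marginal applicants as information alone, which is why it can remain attractive despite the higher per-enrollee cost.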
For policymakers, the main lesson is simple yet profound: information matters. Individuals’ willingness to apply for benefit programs depends in large part on whether they have accurate beliefs about expected benefits; also, different types of people may have varying sets of misperceptions. Getting information to possible recipients increases program take-up significantly, but information plus assistance with applications is even more effective. While these results reflect intervention programs for SNAP recipients among elderly Pennsylvanians, they likely hold for other programs and among other possible recipients throughout the country.
Following the late-2017 announcement of tariffs on all washers imported to the United States, prices increased by about 12 percent in the first half of 2018 compared to a control group of other appliances. In addition, prices for dryers—often purchased in tandem with washing machines—also rose by about 12 percent, even though dryers were not subject to a tariff. On the one hand, these price increases were unsurprising given the tariff announcement. On the other hand, washers had been the subject of multiple import restrictions since 2012 and the price of this ubiquitous household appliance had actually declined over the ensuing years.
The authors’ careful analysis of the washer and dryer markets since 2012, including descriptions of “country-hopping” by manufacturers to avoid tariffs, offers insights for other sectors. Tariffs increase the cost of doing business, which often leads to increased prices for intermediate goods (those used in production) and final goods (those purchased by consumers and businesses). However, tracing the impact of a tariff through the production and delivery of a particular good is difficult; the effort is often inhibited by incomplete or private data that companies hold close. The case of washing machines, though, offers a clear view on the impact of global tariffs for a particular product: consumers are the losers. Indeed, as this research reveals, complementary goods—in this case, dryers—can also be affected. However, when single-product tariffs are applied to individual countries, production may shift to another country and could actually lower production costs and, thus, prices for consumers.
More than 6 percent of working age adults receive SSDI or SSI disability payments; that aggregate number has grown steadily over the last 30 years, roughly tripling to 10 million individuals. Given the size and growth of this demographic, it is imperative that policymakers understand the impact of disability programs and the degree to which they influence recipients’ financial standing and quality of life. For example, the authors cite anecdotal evidence that shows how some landlords prefer SSDI/SSI recipients because of their steady source of income, and how such income proves more reliable for some recipients than most available jobs.
But anecdotes and theoretical assumptions are not enough to assess whether these programs are actually operating as intended. To address this evidence gap, the authors constructed what they believe is the first quasi-experimental study of the effects of US disability programs on outcomes that look beyond labor supply and mortality data. The authors built a new dataset that links administrative records from the SSDI and SSI programs to records on bankruptcy, foreclosure, eviction, home purchases, and home sales. In doing so, they present a first look at recipients’ financial well-being. These disruptive financial events occur irregularly but they have an outsized negative impact on recipients’ financial status and give key insights into fluctuations in recipients’ consumption.
Analysis of their dataset reveals three key facts:
- Applicants for disability programs experience bankruptcy, foreclosure, and eviction at rates higher than the general population. From this fact, the authors surmise that applicants likely experience higher rates of financial distress than others.
- Adverse financial events become more frequent in the period leading up to the application date, when they peak. This finding suggests that applicants apply for disability benefits when they are in a state of financial distress.
- Relatedly, negative financial events occur less frequently after application, even for those who are rejected for disability payments, suggesting that such applicants find other means to address their financial needs.
The second and third facts show the importance of application dates and what they reveal about the state of financial distress faced by applicants. What are the causal effects of disability application on the financial outcomes of recipients? According to the authors’ analysis of the data, applicants who are allowed into the program are 30 percent less likely to experience bankruptcy over the following three years, 30 percent less likely to experience home foreclosure, and 20 percent less likely to have to sell their home. Finally, as further evidence for the positive effects of disability application, the authors reveal that allowance into disability programs results in a 20 percent increase in home purchases.
The Great Recession of 2007-09 raised several issues about the relationship of housing markets to economic activity. One issue concerns the impact of housing prices on the development of new and young firms. Another involves how housing market ups and downs affect local economies. The share of employment at young US firms (those less than 60 months from their first paid employee) has declined steadily since 1987, when it stood at 17.9 percent, falling to just 9.1 percent in 2014.
While their activity is consistently marked by strong cyclical fluctuations, young firms experienced an especially sharp contraction during the Great Recession and a slow recovery afterwards. What is the role of housing market conditions—boom or bust—in shaping the fortunes of young firms? What is the role of credit markets? How are labor markets affected? This new research finds that the great housing bust after 2006 largely drove an historic collapse in the employment shares of young firms.
Housing cycles do not affect all MSAs equally, and the authors use this insight to isolate locally exogenous shifts in housing prices. They then estimate and quantify the effects of housing price swings on young firms and local economies. The authors conclude that the great housing bust after 2006 largely drove the historic collapse in the young firm share of employment during the Great Recession. The pullback in bank lending to younger firms played a secondary role.
The authors’ rich dataset spans a long time period, which is especially useful when estimating the impact of bank lending on young firms. Banks are not all the same. They serve different markets, have different lending practices when it comes to small and young businesses, and are hit differently by financial crises and national business cycles. While almost all national banks retrenched their lending practices in response to the financial crisis of 2007-09, some were in better shape than others and better able to weather the storm and participate in the nascent recovery.
MSAs served by national banks that were particularly hard hit by the crisis saw bigger drops in loan volume, and this reduction in credit carried over into lending to young businesses. Indeed, the authors find that young firms suffer more than the average small business when banks scale back lending to small firms. Similarly, the authors find that the negative effects of falling housing prices are felt more strongly by young businesses than by small ones.
This research also offers insights into implications for the employees of those businesses. That young firms tend to hire younger employees is perhaps unsurprising, but the authors also find that young firms tend to hire less-educated workers, as shown in Table 1. As an example, in 2010 young firms employed 10 percent of female workers who did not finish high school but only 7 percent of female workers with a college degree. As a result, the fortunes of young firms have an outsized impact on younger and less-educated workers. Thus, the housing bust and financial crisis hurt younger and less-educated workers through their particular effects on the fortunes of young firms in addition to their broader effects on the overall level of economic activity.
Figure 1: Relative Wages
Motherhood is the primary cause of the gender pay gap: the gap in pay largely reflects a gap in workplace productivity between men and women with children, and this productivity gap explains about two-thirds of the wage gap between men and women. However, women with no children, especially younger women, outperform men yet still earn lower wages.
Researchers have long studied the gender pay gap, attributing this divergence to such factors as educational and career choices by women, psychological differences between men and women regarding risk and reward, demand for work flexibility, and, of course, discrimination, among other issues.
Figure 2: Relative Productivity
However, this research reveals that two-thirds of the pay gap can be explained by a productivity gap between men and women that is driven by motherhood. About 8 percentage points of the 12 percent residual pay gap between men and women can be explained by the lower workplace productivity of mothers. Prior to motherhood, women are actually more productive than men, and their productivity climbs again as their children age, eventually equaling that of men. However, for women who choose to have children, the fall-off in workplace productivity is enough to affect their pay and, possibly, the earnings of younger women whom employers assume will have children.
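The arithmetic behind the two-thirds figure is simple: of the 12 percent residual pay gap, roughly 8 percentage points are attributed to the productivity difference, so

```latex
\frac{\text{productivity-explained portion}}{\text{residual pay gap}}
  = \frac{8\ \text{pp}}{12\ \text{pp}} \approx \frac{2}{3}.
```

The remaining third of the residual gap is left unexplained by measured productivity.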
Researchers have come to call the decrease in wages for mothers a “child penalty.” The contribution of this work is to determine how much of that penalty is explained by productivity differences in the workplace. This productivity difference may arise from differences in effort, in extra (undocumented) hours worked, or in effectiveness between men and women. While on average the pay gap closely tracks the productivity gap, this is not true over the whole lifecycle. In particular, women without children are estimated to be at least as productive as men without children, yet they are still paid less than these men. Mothers, on the other hand, are substantially less productive than fathers and are paid commensurately with this productivity gap.
In roughly every birth year since 1955, women born in America have been more likely than men to earn a college degree. At first the gap between men and women was narrow, but by the 1970 birth cohort it had widened considerably, and it has continued to expand over time. Indeed, the share of women attaining a college degree continued to increase through the 1985 birth cohort, while the share of men holding a degree has actually declined since the 1970 cohort. (See Figure 1.) What this means is that of all the women born in America in 1985, roughly 40 percent hold college degrees today, while just under 30 percent of men born that year have a college degree.
Learning this, one might naively assume that the pay gap between men and women has closed and that, perhaps, women might be earning even more than men given their relative educational success. Of course, that is not the case, and Figure 2 shows that while the pay gap has narrowed since 1950, the downward trend in that gap has largely stalled since 1970. Women born in 1985 can still expect to earn upwards of 10 percent less than their male counterparts, regardless of how much schooling they have attained.
These facts, in particular the underrepresentation of women in the upper part of the earnings distribution, describe what is known as the glass ceiling, and Bertrand reviews the extensive literature surrounding the glass ceiling, stressing that an examination of the phenomenon must extend beyond discrimination and sexism to include other quantitative factors such as education, psychological attributes, work flexibility, childcare, and nonmarket work. She also reviews policies meant to help crack the glass ceiling and to get those lines in Figure 2 to start moving toward zero.
What are the policy options that might crack, if not break, the glass ceiling?
Family-friendly policies. Workplace policies such as longer and paid maternity leaves, optional part-time or shorter work hours, and the opportunity to work remotely address the demand for greater flexibility, but they do not close earnings gaps as long as that flexibility comes at a wage cost and the policies are used mainly by women.
Gender-neutralizing childcare. Sweden, Norway, and Quebec have introduced dedicated paternity leave into their parental leave policies, meaning that the time is lost if not taken by the father. Known as “daddy quotas” or “daddy months,” these policies address a core problem for women who otherwise take a long absence from work and whose career and pay suffer.
Quotas. Following the lead of quotas in political representation, some European countries have introduced quotas into corporate leadership. Norway, for example, mandated in 2003 that women hold 40 percent of board seats at public limited liability companies. Seven Eurozone countries followed suit, and in 2013 the European Parliament approved a draft law that would require 40 percent female board membership at about 5,000 listed companies in the European Union by 2020.
In December 2017 the unemployment rate was 4.1 percent, far below its peak of 10 percent in October 2009 in the depths of the Great Recession, and nearly equaling the 3.9 percent rate of December 2000. By this reading of the data, the labor market had made tremendous gains and returned to its pre-crisis strength. However, those headline unemployment numbers mask a precipitous decline in employment among prime-age working men linked to the decline in manufacturing, with negative effects that extend beyond the health of labor markets to the well-being of communities and their citizens.
Between 2000 and 2017, employment rates for men aged 21 to 55 fell by 4.6 percentage points, and hours worked per year fell by over 180 hours (employment effects for women are also negative but less dramatic). These declines in employment began prior to the Great Recession while the economy was growing, and only worsened after 2007.
To put this decrease in perspective, the secular (or long-term) decline in annual hours worked for prime-age men from 2000 to 2017 is as large as the cyclical decline in annual hours worked during the 1982 recession. In other words, while the economy cycled through ups and downs between 2000 and 2017, prime-age working men endured a sort of shadow downturn, a 17-year decline in employment.
Using a variety of data sources and empirical approaches, the authors reveal the connection between this decrease in hours worked and the decline in manufacturing. Perhaps most sobering is the authors’ conclusion that those manufacturing jobs are not coming back. The accelerating decline in manufacturing employment since 2000, a period over which manufacturing output actually increased by about 5 percent, reveals that improvements in productivity are driving the decline in employment. Fewer workers are needed to produce more, and this won’t change. Therefore, the authors show, efforts to rescue jobs through trade policy are misdirected.
Beyond the labor market, the authors find further negative effects stemming from the decline in manufacturing employment. The authors’ novel research supports the emerging view that labor market conditions can affect different dimensions of health: in this case, the loss of manufacturing jobs is associated with higher rates of prescription opioid abuse and overdose deaths. Further, these negative social effects can impede the economic recovery of affected regions, as prospective employers may be reluctant to locate where a large number of potential workers frequently fail drug tests.
Finally, the authors investigate why these sectoral changes seem so intractable. Industries have evolved for decades, and workers have historically moved, taken new jobs, or otherwise adapted. Today, however, many workers in these communities seem trapped in place, opting to drop out of the workforce and make ends meet in other ways.