Modeling Financial Sector Linkages to the Macroeconomy

A meeting of the institute's Macro Financial Modeling (MFM) Working Group

September 13–14, 2012


Kimmel Center, New York University

The second meeting of the Macro Financial Modeling (MFM) Working Group gathered 55 scholars in New York for close review and discussion of research examining how financial sector shocks affect the broader economy.

The goal, according to project co-director Lars Peter Hansen, was “to think hard and critically about incorporating financial frictions into macroeconomic models and identify productive avenues for future exploration.”


Empirical Evidence Connecting Financial Markets and Macroeconomic Performance


Some work suggests that credit spreads are predictive of economic distress because they reflect disruptions in the supply of credit, whether through deterioration in the quality of borrowers’ balance sheets or in the health of financial intermediaries. Simon Gilchrist of Boston University examined the evidence relating credit spreads to economic activity. In work with Egon Zakrajšek, he constructed a credit spread index and decomposed it to identify the source of its predictive power. They isolated a component, which they called the excess bond premium (EBP), that represents the price of bearing default risk over and above compensation for expected defaults. The EBP offered large gains in power to predict GDP growth, payroll employment, and unemployment, at least over medium-term horizons.
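The flavor of that decomposition can be conveyed in a few lines. Below is a minimal sketch in Python, assuming a hypothetical bond-level DataFrame with columns `log_spread`, `distance_to_default`, and `month`; the actual Gilchrist–Zakrajšek implementation uses a richer set of default-risk measures and bond characteristics.

```python
# A minimal sketch of an EBP-style decomposition (illustrative column names,
# not the actual Gilchrist-Zakrajsek dataset or specification).
import pandas as pd
import statsmodels.formula.api as smf

def excess_bond_premium(bonds: pd.DataFrame) -> pd.Series:
    """Regress log credit spreads on a measure of expected default risk;
    the average residual per month is the excess bond premium (EBP)."""
    # Predicted spreads capture compensation for expected defaults.
    model = smf.ols("log_spread ~ distance_to_default", data=bonds).fit()
    # The residual is spread variation not explained by default risk.
    bonds = bonds.assign(residual=model.resid)
    return bonds.groupby("month")["residual"].mean()
```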

Mark Watson of Princeton presented related work that addressed two questions: How was the recent Great Recession qualitatively different from earlier recessions? What were the important shocks that contributed to it? Watson summarized efforts to use a “dynamic factor model” fitted to a large cross section of time-series data to answer these questions. He found that financial and uncertainty shocks were the main drivers.
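As a rough illustration of the machinery, the first step of such an exercise — extracting common factors from a large panel by principal components — can be sketched as follows. The panel and variable names are illustrative assumptions, not Watson’s actual dataset.

```python
# A minimal sketch of the principal-components step of a dynamic factor
# model; `panel` is an illustrative (T x N) DataFrame of stationary series.
import numpy as np
import pandas as pd

def estimate_factors(panel: pd.DataFrame, n_factors: int = 5) -> pd.DataFrame:
    # Standardize each series (assumes a balanced panel, no missing values).
    X = (panel - panel.mean()) / panel.std()
    # Principal components of the cross section proxy for the common factors.
    u, s, _ = np.linalg.svd(X.values, full_matrices=False)
    factors = u[:, :n_factors] * s[:n_factors]
    return pd.DataFrame(factors, index=panel.index,
                        columns=[f"F{i + 1}" for i in range(n_factors)])
```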

Taking a substantially different econometric approach, Frank Schorfheide of the University of Pennsylvania, in work with Marco Del Negro, used dynamic stochastic general equilibrium (DSGE) models to identify the “structural” shocks needed to explain the macroeconomic time series observed during the recent recession.

In discussion, Nina Boyarchenko of the Federal Reserve Bank of New York and Nobuhiro Kiyotaki of Princeton speculated on which economic models might best explain, and thereby sharpen the interpretation of, the time-series evidence. Lars Peter Hansen, Christopher Sims, and others pushed for a better understanding of the “shocks” identified as drivers of the recent recession.


Contingent Claims Analysis Applied to the Financial and Government Sectors


One reason traditional risk models do not adequately capture systemic risk is that they are based mostly on accounting flows and are therefore backward-looking, while risk assessment is inherently forward-looking. Contingent claims analysis (CCA), an approach presented by Dale Gray of the International Monetary Fund, addresses this weakness by valuing liabilities and future obligations using forward-looking market data, with the associated adjustments for risk.
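At the core of CCA is the Merton insight that equity can be valued as a call option on the firm’s assets, so market prices can be inverted to recover an implied, risk-adjusted asset value. A minimal sketch, with hypothetical inputs rather than figures from Gray’s presentation:

```python
# A minimal sketch of the Merton model at the heart of CCA; the inputs below
# are hypothetical, not figures from the presentation.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def implied_assets(E, sigma_E, D, r=0.02, T=1.0):
    """Back out asset value A and asset volatility sigma_A from observed
    equity value E, equity volatility sigma_E, and default barrier D."""
    def equations(x):
        A, sigma_A = x
        d1 = (np.log(A / D) + (r + 0.5 * sigma_A ** 2) * T) / (sigma_A * np.sqrt(T))
        d2 = d1 - sigma_A * np.sqrt(T)
        # Equity as a call option on assets, plus the volatility linkage.
        call = A * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)
        return [call - E, A * norm.cdf(d1) * sigma_A - E * sigma_E]
    return fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])

A, sigma_A = implied_assets(E=40.0, sigma_E=0.60, D=80.0)  # hypothetical bank
```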

In discussion, participants expressed support for the basic idea but stressed the need for a better understanding and assessment of the inputs into the analysis. They also wondered how far such an approach can go in conducting policy analysis without a better understanding of the determinants of risk prices.


Comparing Systemic Risk Measures


With no fewer than 31 methods proposed to measure systemic risk, it’s unclear which methods work best under what circumstances and why. Much of Friday’s discussion focused on that question.

Bryan Kelly of the University of Chicago Booth School of Business presented work, inspired by the first MFM meeting, that attempted to understand how the many different models for analyzing systemic risk perform collectively. With Stefano Giglio and Xiao Qiao, he compiled more than 20 available measures and applied them to data going back as far as 1926. “The only thing that is clear is that they all jump in 2008, during the financial crisis, but that’s what they were constructed to look at,” Kelly said.

Kelly categorized and then compared the various models by the key factor they analyze: volatility, comovement, contagion, or illiquidity. His team found that covariation and volatility were driving the most highly correlated measures. They used partial multiple quantile regression to examine whether the collective measures carried information that could not be obtained from any single measure. In discussion, several participants noted that macroeconomic disturbances have different causes, which call for different methods of analysis.
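The flavor of that exercise can be conveyed with a simplified, single-measure version: a quantile regression that asks whether a risk measure predicts the lower tail of future macro outcomes rather than just the mean. The column names below are illustrative placeholders; the actual partial quantile regression aggregates many measures at once.

```python
# A simplified, single-measure version of the tail-prediction exercise;
# column names are illustrative placeholders, not the paper's variables.
import statsmodels.formula.api as smf

def tail_predictive_power(df, tau=0.05):
    """Fit the tau-th conditional quantile of future GDP growth on one
    candidate systemic risk measure; a significant negative slope means the
    measure carries information about downside macro risk."""
    fit = smf.quantreg("gdp_growth_ahead ~ risk_measure", df).fit(q=tau)
    return fit.params["risk_measure"], fit.pvalues["risk_measure"]
```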

Future work in this area might expand the analysis beyond asset prices to include commodities and currency fluctuations, and might formally split some of the measures into groups according to their more precise aims.

Kelly and MFM project co-director Andrew Lo of MIT both pointed out that the lack of a clear definition of systemic risk explains the profusion of approaches to measuring it. Both referenced a recent essay by Lars Peter Hansen, “Challenges in Identifying and Measuring Systemic Risk,” as a valuable overview on this point.

Lo and graduate student Amy Zhou ran a comparison of the 31 measures to see which best predicted financial events generally viewed as systemic. Measures like marginal expected shortfall were fairly predictive, but evidence of stress in hedge funds was more strongly correlated with such events. Consumer credit markets also showed early signs of a future credit crunch.
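For concreteness, marginal expected shortfall — one of the measures in the comparison — has a very simple empirical form: a firm’s average return on the market’s worst days. A minimal sketch, assuming aligned pandas Series of daily returns:

```python
# A minimal sketch of marginal expected shortfall (MES); `firm_ret` and
# `mkt_ret` are assumed to be aligned pandas Series of daily returns.
def marginal_expected_shortfall(firm_ret, mkt_ret, alpha=0.05):
    cutoff = mkt_ret.quantile(alpha)           # threshold for the worst days
    return firm_ret[mkt_ret <= cutoff].mean()  # firm's mean return then
```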

With 31 systemic risk measures proposed and thousands of papers published so far, one participant observed, “We should all pull together on this. There is plenty of work to go around.”

One specific model that offers an opportunity for input from expert macroeconomists is the Complex Research Initiative for Systemic Instabilities (CRISIS), outlined by Doyne Farmer of Oxford University. Eleven groups from seven nations are working on an ambitious computer simulation of a system of heterogeneous interacting agents that could be used to perform conditional forecasts and policy analysis in any state of the economy.

Robert Engle of NYU described a macro-financial approach called SRISK that tries to pin down the macroeconomic impact of a crisis based on firms’ available capital. SRISK is a computed estimate of the amount of capital a financial institution would need to raise in order to function normally if a financial crisis occurs.

Constructed from a firm’s size, leverage ratio, and risk, the SRISK measure predicts econometrically how much equity would fall in a crisis and how much capital would need to be raised to compensate for the drop in valuation. In a financial crisis, all firms will need to raise capital simultaneously and credit will dry up, leaving taxpayers as the only source of capital. “The bigger the SRISK, the bigger the threat to macro stability is,” Engle noted.
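The arithmetic behind SRISK is straightforward. A minimal sketch of the standard published form, with a hypothetical firm’s numbers: SRISK = k·Debt − (1 − k)·(1 − LRMES)·Equity, where k is a prudential capital ratio (often 8 percent) and LRMES is the long-run marginal expected shortfall, the expected equity drawdown in a crisis.

```python
# A minimal sketch of the SRISK arithmetic, assuming the standard published
# form with a prudential capital ratio k (often 8 percent).
def srisk(debt, equity, lrmes, k=0.08):
    """Expected capital shortfall in a crisis:
    SRISK = k * Debt - (1 - k) * (1 - LRMES) * Equity."""
    return k * debt - (1 - k) * (1 - lrmes) * equity

# Hypothetical firm: $900bn debt, $100bn equity, 60% expected crisis drawdown.
shortfall = srisk(debt=900.0, equity=100.0, lrmes=0.60)  # 35.2 ($bn)
```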

Anil Kashyap of the University of Chicago Booth School of Business raised concerns about risk measures based chiefly on asset prices, which can be misleading. For example, CDS spreads gave little sign that Lehman Brothers and Bear Stearns were near collapse. Conversely, there are clear examples of collapsing equity prices, as in the 1987 market crash and the Long-Term Capital Management episode, that had no major effect on the macroeconomy, Kashyap noted.

In an executive session, the group determined priorities for future work, agreed to expand dissertation support, and set the agenda for the next meeting, to be held May 2–3, 2013, in Chicago.
