This research project is supported by a generous grant from the Alfred P. Sloan Foundation, CME Group Foundation, and Fidelity Management & Research Company.

All sessions took place in Conference Room 1NE-08S-A (Multi-Purpose Room), 8th Floor.

 

Agenda

SESSION I: New Approaches to Systemic Risk Measures

The Great Recession made it very clear that financial institutions are linked through many parts of their balance sheets. This web of linkages between institutions played a key role in exposing banks and broker-dealers to fundamental risks, as well as in propagating shocks that may have arisen at only one institution.

Along these lines, work by Rama Cont and Eric Schaaning focuses on understanding the amplification channel that affects bank balance sheets when assets are liquidated in a fire sale by another institution. The focus of the paper is novel: much prior research has examined financial network amplification when an individual firm suffers an asset devaluation shock, or when that same firm responds to an apparent (or potential) bank run. In contrast, Cont and Schaaning focus on the feedback effects that result from other institutions’ activities.

The work is particularly useful when we consider macroprudential policy. For example, when designing central bank stress tests, policymakers should not only account for how banks respond to their own stress but also measure how institutions react to one another.

To measure these effects, the authors design a network model in which institutions hold both more liquid and less liquid assets. When an institution in the network suffers an asset devaluation shock, it must de-lever by selling assets of either type in order to obey leverage limits. The resulting fire sale pushes prices down further, which then affects neighboring banks and causes them to de-lever as well. This spiraling process, which is bounded in the model, generates an amplification mechanism that is both realistic and new to the literature.
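To make the mechanism concrete, the following is a minimal sketch of such a deleveraging spiral. All parameters (holdings, the leverage cap, the price-impact coefficient) are hypothetical placeholders rather than the authors' calibration, and the linear price-impact rule is a deliberately simple stand-in for the model's fire-sale elasticity.

```python
import numpy as np

# Minimal sketch of a fire-sale deleveraging spiral. All numbers are
# hypothetical placeholders, not the Cont-Schaaning calibration.
liquid = np.array([50.0, 40.0, 60.0])     # liquid holdings, one entry per bank
illiquid = np.array([50.0, 60.0, 40.0])   # illiquid holdings
equity = np.array([10.0, 10.0, 10.0])
leverage_cap = 10.0   # maximum assets / equity
price = 1.0           # common price of the illiquid asset
impact = 0.0005       # assumed linear price impact per unit sold

# Initial shock: bank 0 writes down 10% of its illiquid holdings.
equity[0] -= 0.10 * illiquid[0] * price
illiquid[0] *= 0.90

for round_ in range(50):
    assets = liquid + illiquid * price
    # Each bank sells illiquid assets (using proceeds to repay debt)
    # until its leverage constraint is restored.
    excess = np.maximum(assets - leverage_cap * np.maximum(equity, 0.0), 0.0)
    sales = np.minimum(excess / price, illiquid)
    if sales.sum() < 1e-6:
        break  # the spiral has converged
    illiquid -= sales
    new_price = price * (1.0 - impact * sales.sum())
    # Mark-to-market losses on remaining holdings hit every bank's equity,
    # which is what transmits the shock to neighbors.
    equity += illiquid * (new_price - price)
    price = new_price

print(f"rounds: {round_}, final price: {price:.4f}, equity: {equity.round(2)}")
```

Running the loop shows the initial writedown at one bank depressing the common price and eroding the other banks' equity until the process converges, which is exactly the bounded spiral described above.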

While the research is groundbreaking, open questions remain. The authors mainly consider shocks to asset valuations (e.g., market liquidity). What if there are shocks to the funding side of the balance sheet (e.g., funding liquidity)? Furthermore, a key building block of the model is the sensitivity of asset valuations to fire sales. Empirically, how can we properly measure this elasticity without more granular data?

The above paper uses a model-based approach to understand the balance sheet-based linkages of firms. Other research utilizes an empirically driven, returns-based approach. Gerard Hoberg presented work with Kathleen Hanley that focuses on the key drivers of stock return covariances across financial institutions.

In particular, they examine systemic risks that are disclosed in 10-K filings. Using advanced textual analysis tools to parse these documents, the authors first identify 18 categories of risk and assign each financial institution a score for each category. When a pair of institutions has elevated scores in the same category, this suggests that both firms are jointly exposed to a common underlying risk.
Hanley and Hoberg then examine whether this commonality of risks across firms manifests itself in the covariance of stock returns, while accounting for other characteristics of these firms.
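As a rough illustration of the empirical design, the sketch below builds a pairwise common-risk score from per-firm category scores and relates it to pairwise return covariance. The data, the overlap measure, and the variable names are all invented for illustration; the paper's actual scores come from textual analysis of 10-K filings, and its tests include firm-level controls omitted here.

```python
import numpy as np
import pandas as pd

# Illustrative sketch only: random placeholder data stand in for the
# paper's text-based risk scores and realized stock returns.
rng = np.random.default_rng(0)
n_firms, n_categories, n_days = 5, 18, 250
scores = pd.DataFrame(rng.random((n_firms, n_categories)))     # firm x category
returns = pd.DataFrame(rng.normal(0.0, 0.01, (n_days, n_firms)))

rows = []
for i in range(n_firms):
    for j in range(i + 1, n_firms):
        # One simple commonality measure: the overlap (elementwise minimum)
        # of the two firms' category scores, summed across categories.
        common = float(np.minimum(scores.iloc[i], scores.iloc[j]).sum())
        cov_ij = float(returns.iloc[:, i].cov(returns.iloc[:, j]))
        rows.append({"firm_i": i, "firm_j": j,
                     "common_risk": common, "ret_cov": cov_ij})
pairs = pd.DataFrame(rows)

# The test, roughly: does pairwise return covariance load on common
# risk exposure? (The paper adds firm-characteristic controls.)
slope = np.polyfit(pairs["common_risk"], pairs["ret_cov"], 1)[0]
print(f"slope of return covariance on common-risk score: {slope:.2e}")
```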

The research is particularly novel on two fronts: first, it is among the first efforts to parse and interpret the risks found in financial disclosure forms; second, it advances textual analysis methods for classifying firm risks into multiple semantic categories. A key finding of the paper is that the share of return covariance driven by common underlying risks is a significant and forward-looking risk measure for financial returns.

Through the exercise, other questions arise. For example, how should we interpret the output of the themes? Many of the themes are connected and are not orthogonal to one another. Also, do the managers running these institutions change how they interpret these themes over time? Interest rate risk, for example, might be interpreted differently today relative to the early 2000s, as the sensitivity of fixed income markets to central bank policy has increased dramatically in recent years.

— By Ram Yamarthy

LUNCH FEATURING YOUNG RESEARCHERS

Three former MFM fellowship awardees will speak at lunchtime.

Dejanir Silva, UIUC, “The Risk Channel of Unconventional Monetary Policy”

Divya Kirti, IMF, “When Gambling for Resurrection is Too Risky”

Matteo Crosignani, Federal Reserve Board, “The Effect of Central Bank Liquidity Injections on Bank Credit Supply”

 

SESSION II: Stress Testing

Luc Laeven, “Stress Testing and Systemic Risk.” Discussant: Matthew Pritsker

Rohan Churm, “Stress Test Modeling at the Bank of England: Present and Future.” Discussant: Nobuhiro Kiyotaki

Stress tests have become an important regulatory tool that helps policymakers assess whether banks have sufficient capital to weather an economic crisis. Both regulators and academics are working to achieve the best design for stress tests.

Thomas Philippon (NYU) and his coauthors are the first to provide a comprehensive evaluation of the quality of stress tests. They ask an important question: how well were European banks able to assess their future risks at the end of 2013 under different stress scenarios?

By comparing the realized losses of banks with the predicted loss rates, they conclude that stress tests are indeed informative and provide useful guidance on the resilience of banks to severe economic conditions.

Discussant Martin Schneider (Stanford) emphasized the difficulty of such an analysis, noting that the exact theoretical scenario of a stress test is never realized in practice. He suggested introducing a full spectrum of macroeconomic scenarios when stress testing banks rather than restricting attention to only a few.

In the same vein as Schneider’s suggestion, Harald Uhlig (University of Chicago) proposed having an automated transmission system in place, where individual banks submit all required data to regulators electronically and the regulators themselves perform the stress analysis under numerous macroeconomic scenarios. A challenge in implementing such an approach would be standardizing the format of data submission, as the data would come from multiple countries.

The next presenter, Luc Laeven (European Central Bank), highlighted the macroprudential function of stress tests and called for a greater focus on the role of bank capital as a risk mitigant, not only as a buffer against losses resulting from adverse economic conditions.

The recent financial crisis has reinforced the need to complement existing diagnostic tools with timely measures of systemic risk. Laeven and coauthors propose a new, easy-to-implement way to measure the systemic risk of financial institutions, building on the Merton model. They find that the key drivers of systemic risk are bank size, leverage, and asset risk.
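For readers unfamiliar with the building block, the sketch below backs out unobserved asset value and asset volatility from observed equity data in the textbook Merton model, the standard first step for measures of this kind. The inputs are invented, and the paper's actual measure, which aggregates across institutions, differs in its details.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

# Minimal textbook Merton sketch with illustrative inputs; the paper's
# systemic risk measure builds on this but is not reproduced here.
# Equity is a call option on bank assets with strike = face value of debt.
E, sigma_E = 40.0, 0.30    # observed equity value and equity volatility
D, T, r = 60.0, 1.0, 0.02  # debt face value, horizon (years), risk-free rate

def merton_equations(x):
    V, sigma_V = x  # unobserved asset value and asset volatility
    d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    eq_value = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)
    eq_vol = (V / E) * norm.cdf(d1) * sigma_V
    return [eq_value - E, eq_vol - sigma_E]

V, sigma_V = fsolve(merton_equations, x0=[E + D, sigma_E * E / (E + D)])
d2 = (np.log(V / D) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
print(f"asset value {V:.1f}, asset vol {sigma_V:.3f}, "
      f"distance to default {d2:.2f}, default prob {norm.cdf(-d2):.3%}")
```

The backed-out distance to default falls with leverage (higher debt relative to assets) and with asset risk, consistent with the drivers the authors identify.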

Discussant Matthew Pritsker (Federal Reserve Bank of Boston) pointed out that a Merton-style framework may not capture network interconnectedness until it is too late; as such, it is dangerous to rely on a single systemic risk measure.

In his presentation, Rohan Churm (Bank of England) discussed the stress test approach implemented by the Bank of England. There, stress tests are conducted both by individual banks and by the regulator: each bank uses its own model to create loss projections under macroeconomic stress scenarios.

These projections are then subject to peer comparisons, as well as to a review by the regulator according to its own challenger model. The regulator’s model aims not only to account for network effects, but also to incorporate feedback and amplification mechanisms.

Discussant Nobuhiro Kiyotaki (Princeton) further recommended that designers of stress test scenarios include the crucial scenario of a persistent fall in growth rates.

Andrew Lo (MIT) concluded the session by raising a fundamental question: do regulators use the information gained through stress testing to achieve macroprudential financial regulation goals? For example, will regulators impose discretionary capital charges on banks based on stress test results?

This session thus provided useful insight not only into the challenges of designing stress tests and ways to overcome them, but also into the practical question of how regulators can best use the knowledge gained through stress testing.

— By Tetiana Davydiuk

SESSION III: Accounting and Financial Regulation

Christian Leuz, “Accounting and Financial Stability: Real (and Not So Real) Problems during the Crisis and Challenges Going Forward.” Discussant: Tano Santos

This session addressed the question of how accounting and disclosure rules facilitate the provision of accurate information by financial institutions. The session’s participants stressed the importance of timely and reliable disclosure of financial information to market participants and regulators.

Henry Hu (University of Texas) argued that financial innovations require us to rethink disclosure requirements for financial institutions. He argued in favor of a transfer mode of disclosure, which involves transferring more of the underlying raw financial data so that it can be assessed directly by regulators and market participants.

According to Hu, there are several pitfalls to the current descriptive disclosure process, in which banks instead report their own assessment and interpretation of the underlying data. These pitfalls include potentially incorrect assessments of risk, arising either from a bank’s limited understanding of the risk-return characteristics of new financial assets or from limited information-sharing within the bank. The advantages of shifting toward banks reporting more of the underlying raw data were echoed by discussant Zhiguo He (University of Chicago) as well as in comments from the audience.

Christian Leuz (University of Chicago) challenged the common view that fair-value accounting paired with market price volatility magnifies financial crises. He argued that mark-to-market assets not only represent less than 10 percent of assets on banks’ balance sheets, but that these exposures are also partly insulated from the negative impact of fire sales by existing accounting rules.

Instead, Leuz and his discussant Tano Santos (Columbia University) flagged (a) the funding risk that arises when fluctuations in asset values reduce a bank’s ability to roll over short-term funding; (b) adverse incentives induced by the different regulatory treatment of trading and banking book exposures; and (c) slow corrective action resulting from the lack of impairment recognition.

Stephen Ryan (New York University) addressed how accounting rules affect real economic outcomes. He presented evidence from a recent paper that a tightening of the US accounting rules governing securitizations and their consolidation led to a reduction in credit originations but increased loan sales at affected banks.

In her discussion, Anne Beatty (Ohio State University) addressed an important debate on how accounting rules on loan loss provisions can affect banks’ regulatory capital and credit supply. Ryan cautioned that some existing research, including research on the impact of accounting rules governing loan loss provisions, may overstate the impact that accounting rules can have on banks’ regulatory capital adequacy.

— By Laura Blattner

LUNCH AND YOUNG SCHOLAR PRESENTATIONS

Pablo Daniel Azar, MIT, “Social Learning in Financial Networks”

Matteo Benetton, LSE, “Lenders’ Competition and Macro-prudential Regulation: A Model of the UK Mortgage Supermarket”

Ram Yamarthy, UPenn, “Corporate Debt Maturity and the Real Economy”

SESSION IV: Examining the Housing and Credit Market

Greg Kaplan; Discussant: Christopher Sims

Daniel Greenwald; Discussant: Monika Piazzesi

US mortgage markets represent about 70 percent of total household credit, amounting to more than half of annual GDP. Hence, understanding the dynamics of these markets is key to understanding the boom and bust that surrounded the Great Recession, which was the subject of Session IV of the MFM 2017 winter meeting.

The session started with Greg Kaplan (University of Chicago), whose project aims to uncover, through the lens of the mortgage markets, the nature of the economic shocks responsible for the observed economic dynamics before and after the Great Recession. These shocks must generate movements consistent with aggregate variables such as consumption, as well as with developments in the housing and credit markets.

Using a model that features both leverage constraints and an explicitly modelled housing market, the researchers concluded that changes in expectations about housing prices must have been a key source of the observed boom-bust cycle. The observed facts, specifically the boom in house prices and consumption, could not be explained by changes in the leverage constraints faced by borrowing households alone, Kaplan found. Rather, changes in credit conditions are important for amplifying shocks and explaining foreclosures. This has an important policy implication: tougher credit limits and debt forgiveness could not have been expected to prevent the boom-bust in aggregate consumption and output.

Discussant Christopher Sims (Princeton University) praised the interesting perspective given on the causes of the Great Recession. He noted that if the true drivers of the boom-bust cycle were beliefs, as false as such beliefs might have been, then policy could have done little to prevent what was, after all, a cycle driven by rational expectations. He emphasized, however, that understanding how beliefs about housing prices evolve will be an important agenda for future work, and that financial frictions could play a larger role if beliefs were allowed to differ across market participants. Another participant wondered whether the level of optimism implied by the beliefs necessary to generate the boom-bust cycle was reasonable.

Daniel Greenwald (MIT Sloan) then presented work that dissected how the structure of the mortgage market affects the transmission of shocks to the economy, particularly the transmission of monetary policy shocks. He emphasized two ingredients of his modelling approach: the difference between loan-to-value (LTV) and payment-to-income (PTI) leverage constraints, and the possibility for mortgage holders to prepay their loans following an easing of borrowing conditions.
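A back-of-the-envelope sketch of how the two constraints interact may help (the parameter values below are hypothetical, not the paper's calibration): the loan a household can take is the smaller of the LTV limit and the PTI limit, and a rate cut relaxes only the latter.

```python
# Hypothetical illustration of LTV vs. PTI caps; all numbers are placeholders.
def max_loan(house_price, income, rate, ltv_cap=0.80, pti_cap=0.28, years=30):
    """Largest loan satisfying both loan-to-value and payment-to-income caps."""
    ltv_limit = ltv_cap * house_price
    m, n = rate / 12.0, years * 12                     # monthly rate, payments
    annuity = m * (1 + m) ** n / ((1 + m) ** n - 1)    # payment per $1 of debt
    pti_limit = (pti_cap * income / 12.0) / annuity
    return min(ltv_limit, pti_limit)

# A rate cut shrinks the required payment per dollar of debt, relaxing the
# PTI constraint while leaving the LTV constraint untouched:
for r in (0.06, 0.04):
    print(f"rate {r:.0%}: max loan = ${max_loan(300_000, 50_000, r):,.0f}")
# At 6% the PTI cap binds (about $195k); at 4% the borrower instead
# hits the LTV cap ($240k).
```

Because a rate cut directly loosens the PTI margin, this simple arithmetic already hints at why PTI constraints amplify the pass-through from interest rates to credit and house prices.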

The presenter’s primary finding is that these two ingredients amplify the transmission from interest rates to house prices and economic activity. Consequently, monetary policy has a larger influence on prices as well as credit. The second main finding is that the relaxation of PTI constraints, rather than LTV constraints, appears essential for explaining the dynamics of the boom-bust correctly, reaffirming the previous presentation’s result that PTI constraints help amplify the effect of changes in housing price expectations. Finally, from a macroprudential policy perspective, he finds that a cap on PTI is quantitatively more effective at limiting credit cycles than an LTV cap.

Monika Piazzesi (Stanford University) reinterpreted the findings in the context of a simplified macroeconomic model, showing that the assumption of a housing market segmented between borrowers and savers was important for the results. She provided evidence from actual housing market data that this assumption did not appear to be strongly violated. She raised concerns that the simulated paths from the model used to derive the main findings ignored important developments during the pre-boom period, such as low interest rates, and could be inconsistent with the low observed volatility of interest rates. Another participant remarked that PTI and LTV constraints should be determined jointly by loan issuers, and wondered what the consequences of that would be in the model.

— By Yann Koby

SESSION V: Microeconomic Evidence on the Impact of Financing Restrictions and Government Policy

While the economy continues its slow but steady recovery from the Great Recession, it remains unclear which forces turned a macro-financial shock into the sharp decrease in employment observed between late 2007 and early 2010.

Prominent literature in this field has offered empirical evidence in support of different channels. Mian and Sufi (2014) argue in favor of a household demand channel triggered by deleveraging in response to the drop in house prices. Alternatively, Chodorow-Reich (2014) emphasizes the role of the firm credit channel, whereby disruptions in the US financial system raised the cost of external financing for credit-constrained firms, leading these firms to shed workers. Other studies highlight the role of uncertainty and the heightened risk aversion of firms (e.g., Baker, Bloom and Davis, 2015).

Neil Mehrotra (Brown University) and Dmitriy Sergeyev (Bocconi University) bring new evidence into this debate by analyzing the channels of transmission through the lens of a quantitative model that brings several of these strands together. The authors propose a model of firm dynamics in which heterogeneous producers are subject to three kinds of shocks: a collateral shock that captures the effect of supply-driven credit constraints; a productivity shock that mimics a reduction in consumer demand; and a discount factor shock that captures changes in risk premia triggered by increased pessimism and economic uncertainty.

Calibrating the model to US data, Mehrotra and Sergeyev find that 18 percent of the decline in US employment during the Great Recession can be attributed to the firm credit channel, while the demand channel can explain up to 26 percent of the decline. Notably, they find that the discount factor shock played a major role in explaining the link between the financial crisis and labor market outcomes: more than 50 percent, according to their calculations.

In his discussion, John Cochrane (Stanford University) highlighted that the change in risk aversion matters a great deal for understanding the dynamics that led to the Great Recession, despite being often overlooked by the recent empirical literature on this topic.

While the forces driving the significant employment losses of the Great Recession remain a subject of debate, it is even less clear how government intervention could effectively attenuate the real effects of adverse macro-financial shocks. Brian Melzer (Northwestern University) and coauthors take up this important question with the goal of providing guidance for the design of future policy interventions.

Their research starts from the empirical observation that previous studies have found quite low responses to fiscal policies aimed at stimulating consumption. In their paper, they argue that financial constraints are one potential impediment to program participation. Even generous stimulus programs that subsidize the purchase of goods and services may see limited take-up if agents lack the liquidity to make a down payment or copayment, if buyers’ debt capacity is insufficient to secure a loan covering the nonsubsidized portion of the purchase, or if buyers are unwilling to increase their leverage.

To cast light on these issues, the authors study the Car Allowance Rebate System (CARS) program, colloquially known as “Cash for Clunkers.” Under the program, the US government provided rebates of $3,500 to $4,500 to consumers who traded in and scrapped old, fuel-inefficient automobiles (“clunkers”) and purchased new, more efficient ones during July and August of 2009.

Unlike other stimulus incentives, such as the contemporaneous first-time home purchase program, CARS rebates were paid at the time of the transaction rather than as credits on households’ future tax returns. CARS rebates therefore provided liquidity in addition to a price subsidy, a feature that allowed the authors to separate the effect of liquidity from that of the economic subsidy.

Exploiting a quasi-experimental design that compares the car purchases of owners of vehicles that qualify as “clunkers” with those of individuals whose cars are “almost clunkers” but do not qualify, Melzer and coauthors find that the liquidity provision embedded in the program’s design was a crucial driver of its successful take-up rate.
— By Simone Lenzu