The 2013 Macro Financial Modeling Spring Meeting was sponsored by the Alfred P. Sloan Foundation, the CME Group Foundation, and the Institute for New Economic Thinking.

Agenda

Thursday, May 2, 2013

A Macroeconomic Framework for Quantifying Systemic Risk

Presentation

The authors derive a macroeconomic model in which an occasionally-binding financial constraint leads to endogenous financial crises. They solve the model globally in order to capture all of its non-linearities fully; to accomplish this, the model must be smaller and more stylized than typical macroeconomic models, which feature many more shocks but approximate the solution around a non-stochastic steady state.
In the model, households hold wealth in an intermediary sector that holds all productive assets and makes all investment decisions on their behalf; households are assumed to require that a minimum amount of their wealth be held in the form of debt claims. Capital structure matters in this economy because intermediaries maximize a quadratic objective in their expected return on equity; this implies that they become more risk averse at higher levels of leverage.
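To make concrete why a quadratic objective links leverage to effective risk aversion, here is a stylized mean-variance illustration in generic notation (not the paper's own objective or notation): with leverage L, the return on equity is a levered claim on the asset return, so the variance penalty scales with the square of leverage.

```latex
% Stylized illustration only; generic notation, not the authors' model.
% With leverage L, the return on equity is a levered claim on the asset return R_a:
R_e \;=\; L\,R_a \;-\; (L-1)\,R_f .

% Under a quadratic (mean-variance) objective in the return on equity,
\mathbb{E}[R_e] \;-\; \tfrac{\gamma}{2}\,\mathrm{Var}(R_e)
\;=\; R_f \;+\; L\,\bigl(\mathbb{E}[R_a]-R_f\bigr) \;-\; \tfrac{\gamma}{2}\,L^{2}\,\mathrm{Var}(R_a),

% so the variance penalty grows with L^2: a more highly levered intermediary
% behaves as if it were more risk averse per unit of asset exposure.
```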

Crises arise in the model because of a financial constraint: intermediaries have a time-varying limit 
(which the authors call their “reputation”) on the amount of capital they can raise, which depends on their 
past returns. When intermediaries are far from the limit, negative shocks have small real effects because they can re-capitalize easily. However, when the representative intermediary is close to its capital limit, it can reach its desired capital structure only by de-leveraging. In general equilibrium this has large effects on asset prices and investment.

The authors calibrate the model and show how investment, Sharpe ratios, and asset prices react to 
shocks both when the constraint is binding and when it is not. To compare the model to the data, they 
match the model’s “distress periods,” in which intermediary reputation is low (bottom third of its stationary distribution), to similar times in history using Gilchrist and Zakrajsek’s “excess bond premium” (which was the subject of the first session of the September 2012 MFM meeting). They capture the model’s non-linearity by comparing variances and covariances of key variables in distress periods and in normal times, and find that, as in the model, everything is more volatile and covariances are higher in distress periods. The exercise is informative because without the capital constraint, which is the model’s central feature, variances and covariances would be the same in all states.
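As a minimal sketch of this kind of regime-split moment comparison (using synthetic placeholder data and an arbitrary distress proxy standing in for the excess bond premium; this is not the authors' code), one can flag periods in which the proxy sits in the top third of its distribution and compare covariance matrices across regimes:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic quarterly data: a distress proxy (e.g., a bond-spread measure)
# plus two macro/financial series. Purely illustrative placeholders.
T = 200
distress_proxy = rng.normal(size=T).cumsum() * 0.1 + rng.normal(size=T)
df = pd.DataFrame({
    "proxy": distress_proxy,
    "investment_growth": rng.normal(size=T),
    "excess_return": rng.normal(size=T),
})

# Define "distress" as the top third of the proxy's sample distribution,
# mirroring the paper's bottom-third-of-reputation definition in the model.
cutoff = df["proxy"].quantile(2 / 3)
distress = df["proxy"] >= cutoff

# Compare second moments across regimes: the model predicts higher
# variances and covariances in distress periods than in normal times.
cols = ["investment_growth", "excess_return"]
print("Distress-period covariance:\n", df.loc[distress, cols].cov())
print("Normal-period covariance:\n", df.loc[~distress, cols].cov())
```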

Finally, the authors simulate a “stress test” inside the model, investigating the probability of a financial crisis given a low value of intermediary reputation. With reputation calibrated to its data-matched value in the second quarter of 2007, they find a relatively low probability of a crisis in response to a bad shock over the next five years. However, if the intermediaries’ leverage ratio were actually 4.5, instead of the fixed level of 3 that agents expect, the probability of a crisis after a modest negative shock jumps dramatically. The authors interpret this as the effect of “hidden leverage,” such as structured investment vehicles (SIVs) and repo financing, which affects outcomes but may not be known and understood by all market participants.

Discussion

Christopher Sims, Princeton University, outlined three broad things for which a macroeconomic model could be used: (1) to forecast, (2) to predict the effects of policy changes, and (3) to tell a believable story about what shocks matter and why policy works (or doesn’t). The third use is especially important for conducting welfare analysis. He drew an analogy to an older macroeconomic literature that, in an effort to overcome monetary neutrality, relied on the New Keynesian Phillips curve and price stickiness, both of which were “micro-founded” with a story about rational agents setting prices according to Calvo constraints. Likewise, this model overcomes financial neutrality—the Modigliani & Miller (1958) theorem—by imposing an equity-issuance constraint that can also be “micro-founded.” But as with Calvo pricing, Sims questioned whether this story is really believable as the main source of non-neutrality.

He continued the analogy by arguing that what these types of models need most is a descriptive empirical 
framework against which their predictions can be judged, much like structural macroeconomic models are 
judged against vector autoregressions (VARs). He pointed out that this model features a single state variable with which everything is perfectly correlated, which complicates this task. In addition, the model predicts that some variables become important only in certain states of the world; the associated non-linearity or state-dependence means that traditional linear-Gaussian empirical methods, such as structural VARs, are not feasible candidates.

Andrew Lo noted that most of the action in the model comes from the equity issuance limit; he wondered 
why that limit should be a function of past performance as assumed in the model. After all, Lehman 
Brothers had had very good past performance, but was still not able to raise equity when the crisis hit. 
He also wondered whether financial and macro-economists agree on what facts from the crisis we should be trying to explain, or even on a common narrative outline of the crisis. He theorized that financial economists tend to think in terms of the housing market, illiquidity, over-leverage, and certain government subsidies, while macroeconomists might think of the crisis in terms of monetary policy, interest rates, and the fact that standard macroeconomic models have no financial sector.

Stephen Cecchetti pointed out that academic work on financial crises has been too focused on a single 
data point, the last financial crisis in the United States. He argued that including other crisis experiences, 
which are actually quite common in many countries around the world, could help us find commonalities, narrow down the list of facts that need to be explained, and assess how helpful any particular model is in addressing them.

Frank Smets suggested that the crucial ingredient in this model is the risk-taking capacity of the financial 
sector, and that getting robust data or variables that capture this feature is especially important. He urged modelers to include this data in both structural and reduced-form models. He also argued that this model is driven by capital constraints, but that bank runs and illiquidity are also important elements of the financial crisis, and that perhaps the two interact. Adding these to the model represents a significant challenge, however, because it involves multiple equilibria and expectations coordination.

Nellie Liang observed that risk-taking in the model is really bank insolvency; she wondered whether the crisis shows that current capital ratios are not only inadequate but also a poor indicator of the potential for crisis. She speculated that stress tests based on underlying or forward-looking measures of capital, rather than balance-sheet capital, might serve as better indicators.

Getting at Systemic Risk Via an Agent-Based Model of the Housing Market

Presentation

The authors build an “agent-based” model of the housing market in metropolitan Washington, D.C. over the 
1997-2009 period. The goal of the present paper is to build a model that can reproduce many of the facts of the housing boom and bust during the most recent episode, and especially how they relate to interest rates and leverage.

The strategy is an “agent-based” one; although all economic models feature “agents,” Howitt observes that the term comes from computer science and multiple-agent systems, in which an agent is an autonomous piece of software that interacts with others in ex ante unforeseeable ways. The advantage of such an approach is that it need not assume the existence of an equilibrium; one might emerge endogenously, in which case it can be inspected, or it might not. In this sense, the first “agent-based” models in economics were the simple location models used by Thomas Schelling to show how very slight preferences for living near others of the same race can lead to drastically segregated communities. Schelling’s model was meant to be qualitative, but agent-based models have also been applied quantitatively, to model traffic behavior in cities or to predict mortgage refinancing.
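As a minimal sketch of the Schelling-style dynamics Howitt referred to (a toy grid model with arbitrary parameters, not the authors' housing simulation), agents with even a mild preference for same-type neighbors relocate until, in many runs, the grid ends up sharply segregated:

```python
import random

# Toy Schelling segregation model: two types of agents on a wrapped grid move
# to a random empty cell whenever fewer than THRESHOLD of their occupied
# neighbors share their type. Parameters are arbitrary and purely illustrative.
SIZE, THRESHOLD, STEPS = 20, 0.3, 50
random.seed(1)

cells = [1] * int(SIZE * SIZE * 0.45) + [2] * int(SIZE * SIZE * 0.45)
cells += [0] * (SIZE * SIZE - len(cells))          # 0 marks an empty cell
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if under THRESHOLD of its occupied neighbors match it."""
    me, same, total = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
            if grid[nr][nc] != 0:
                total += 1
                same += grid[nr][nc] == me
    return total > 0 and same / total < THRESHOLD

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] != 0 and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == 0]
    for r, c in movers:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0
        empties.append((r, c))

# Print the final layout: '.' empty, 'X' and 'O' the two agent types.
print("\n".join("".join(".XO"[cell] for cell in row) for row in grid))
```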

An advantage of the agent-based approach, other than the obvious one of being an additional tool with 
which to tackle the problem, is that it can directly account for heterogeneity among agents in ways that 
equilibrium-based models frequently cannot for computational or tractability reasons. This is especially 
valuable in the data-rich setting of the current project; although the authors lack complete individual-level 
panel data of all variables of interest, they are able to include household-level transaction data, loan-level 
performance data, data on individual house listings, anonymized tax-return data, Census data from the Panel Study of Income Dynamics and the American Community Survey, as well as standard aggregate variables.

The model simulates income, wealth, housing, and financial decisions of 22,000 households (a 100:1 
scale for Washington, D.C.) as well as the distribution of housing quality (defined as the sale price relative to the Case-Shiller index) and the foreclosure and sale decisions of a representative aggregate bank over the years 1997-2009, taking as given demographics, the income process, the housing stock (there is no construction in the model), and bank behavior (including interest rates, types of loans, and credit constraints). Rather than picking all parameters to fit target data moments as in the usual calibration exercise, the authors calibrate individual model components separately using micro-data, and then throw them all together in the simulation and see what happens.

In summary, after briefly describing households’ desired housing expenditure, one of the many components of the model, Howitt presented a comprehensive “dashboard” of the model’s predictions for the 1997-2009 period after running a 100-year “burn-in” simulation period to allow the model to reach a steady state. The model faithfully reproduces the boom and bust in housing prices, as well as the ratio of sold price to initial list price, but fails to match home-ownership rates or the number of house listings. The real strength of the approach, however, is the ability to run counterfactuals; the same model in which leverage constraints are drawn from their 1997 distribution, rather than relaxing as they did in the data, eliminates most of the run-up in house prices in the simulation. A comparable exercise fixing interest rates at their 1997 distribution eliminates some of the bubble, but not nearly as much of it.
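The counterfactual logic can be illustrated with a deliberately tiny toy (not the authors' model; every function and parameter below is a hypothetical placeholder): simulate the same price-feedback loop twice, once feeding in a loosening path for the maximum loan-to-value (LTV) limit and once holding that input at its initial value, and compare the resulting price paths.

```python
import numpy as np

def simulate_prices(max_ltv_path, income=1.0, rate=0.06, seed=0):
    """Toy price-feedback loop: housing demand rises with the purchase size a
    household can afford (income relative to the required down payment and
    borrowing cost), and prices adjust sluggishly toward demand.
    Purely illustrative; this is not the authors' model."""
    rng = np.random.default_rng(seed)
    price, path = 1.0, []
    for ltv in max_ltv_path:
        affordable = income / ((1.0 - ltv) + rate)   # looser LTV limit -> bigger budgets
        demand = affordable * np.exp(rng.normal(0.0, 0.01))
        price += 0.3 * (demand - price)              # partial adjustment of prices
        path.append(price)
    return np.round(path, 2)

loosening = np.linspace(0.80, 0.97, 12)   # LTV limits relax over time, as in the boom
frozen = np.full(12, 0.80)                # counterfactual: limits held at their initial level

print("Loosening LTV limits: ", simulate_prices(loosening))
print("LTV limits held fixed:", simulate_prices(frozen))
```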

Discussion

Amir Sufi interpreted the paper as a novel approach to using micro-data to answer macroeconomic questions. He was also excited by the quantitative nature of the model’s counterfactuals, which can be difficult to estimate using more reduced-form estimation strategies. One challenge of the agent-based modeling strategy, however, is communicating clearly which variables are assumed to be exogenous vs. endogenous. He also worried that the authors approach the problem assuming that loan-to-value (LTV) constraints are the primary non-price variable affecting housing decisions, rather than (for example) debt-to-income ratios.

The manner in which the model allows for the direct modeling of heterogeneity is useful, Sufi continued, 
since standard macroeconomic models rely heavily on the assumption of a representative agent, and it is 
difficult in that setting to even understand what debt is, let alone non-price debt terms such as LTV or 
collateral constraints. The agent-based approach models debt directly, but is weaker along other dimensions. For example, he urged the authors to think carefully about general equilibrium and how to tie it into their analysis; without careful analysis of how model results will fare in general equilibrium, macroeconomists will never take the method or agent heterogeneity seriously.

Echoing Sufi’s comments, Christopher Sims noted that the distinction between exogeneity and endogeneity is absolutely critical in determining the worth of the model. After all, simple linear regressions can produce a fit as good as or better than the one Howitt showed, if enough exogenous variables are included. He added that one way to analyze the model along these lines is to see how well an alternative model can fit the data using the same exogenous variables.

John Cochrane noted that the approach allows for a wide variety of dimensions along which agents make decisions; however, this comes at the cost of not solving for the optimal decision rule in each situation. He questioned the robustness of the results to errors in the decision rules, and noted that earlier economic analysis relying on empirically-sound but economically-flawed decision rules had led to bad conclusions in the past.

Lars Hansen pointed out that another big advantage of the agent-based model is that it models heterogeneity in a computationally-simple way. He would have liked to see some of the cross-sectional implications of the model, including how distributions of variables of interest evolve and react to shocks over time, rather than just the aggregate implications. He also said that transparency of the inputs to a model is critical; although the size of the model makes this a difficult task, the authors need to make clear how parameters are chosen, including whether or not there are crucial tuning parameters that drive the model’s fit.

Serguei Maliar and Frank Smets both wondered how expectations fit into the analysis. Maliar argued 
that expectations play an important part in all financial decisions, especially housing ones, because the 
purchase decision depends crucially on the expected path of future house prices. He said that it should be straightforward to add some forward-looking variables to the decision rules that agents use in the model. 
Smets added that heterogeneity in expectations could be very important for explaining the housing bubble, 
including potentially how agents interact with each other to form their expectations.

Anil Kashyap wondered if the model could be used to address whether stock variables, such as 
debt to income, or flow variables are more important in creating bubbles. He mentioned an intense 
policy debate going on right now in Sweden, in which there has been a large run-up in house prices but 
policymakers disagree as to whether it represents a bubble about to burst—suggesting a need to tighten 
monetary policy—or whether the current low debt-service costs and high unemployment call for looser monetary policy.

Overborrowing, Financial Crises and 'Macro-prudential' Policy

Presentation

The authors construct a quantitative equilibrium model of business cycles and asset prices to understand how “overborrowing” can contribute to systemic risk and make the economy more prone to crisis. The model features an occasionally-binding collateral constraint tied to the market price of capital and can generate episodes that look very much like financial crises: when the constraint binds, fire sales in asset markets constrain the ability of firms to produce and create a recession, which then feeds back into asset prices and tightens the constraint, as in the classic “financial accelerator” mechanism.

The contribution of the paper is to characterize the constrained efficient allocations under both commitment and discretion and compare these to outcomes possible with macro-prudential policy using various instruments. In particular, the authors show that the optimal policy is a state-contingent tax on debt that is roughly 1% on average; with this policy, a macro-prudential regulator can lower the probability of a financial crisis by a factor of 3, as well as stabilize asset prices and lower Sharpe ratios. Simpler tax schemes than the one that delivers the constrained efficient allocations can also deliver significant benefits.

Bianchi presented a simplified version of the model in which interest rates are exogenous and capital (land) is in fixed supply. He analyzed the decentralized equilibrium and compared it to the central planner’s problem, illustrating the central externality in the model: borrowing less today can avoid a sharp drop in asset prices tomorrow (if the constraint binds), but because the benefit of higher asset prices tomorrow accrues to all agents while the cost of lower consumption today is borne individually, markets do not coordinate this activity.
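In stylized form (generic notation with simplified timing, not the paper's exact setup), the constraint and the wedge between the private and planner optimality conditions can be written as follows.

```latex
% Stylized rendering of the pecuniary externality; notation and timing simplified.
% Collateral constraint: borrowing is limited by the market value of the asset,
% with multiplier mu_t when it binds.
b_{t+1} \;\ge\; -\,\kappa\, q_t\, k_{t+1}

% Private Euler equation for debt (agents take the price q as given):
u'(c_t) \;=\; \beta R\,\mathbb{E}_t\!\left[u'(c_{t+1})\right] \;+\; \mu_t

% Planner's Euler equation: an extra term values how aggregate borrowing moves
% tomorrow's collateral price and thus relaxes or tightens tomorrow's constraint;
% decentralized agents ignore this term.
u'(c_t) \;=\; \beta R\,\mathbb{E}_t\!\left[u'(c_{t+1})
  \;+\; \mu_{t+1}\,\kappa\,k_{t+2}\,\frac{\partial q_{t+1}}{\partial b_{t+1}}\right] \;+\; \mu_t
```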

The model also features a time-consistency problem on the part of the planner: she would always like to promise lower consumption tomorrow to avoid the crisis, but when tomorrow arrives she has an incentive to renege on that promise. The authors address this problem by assuming that the planner today cannot directly control the choices of the planner tomorrow, and must instead exert indirect control through her borrowing choices. Under this assumption, the planner can improve welfare by reducing leverage today, which reduces the probability of crises in the future. It also seems as if the planner can increase welfare when the constraint is binding today by promising to lower future consumption, which raises current asset prices, but this policy is not time-consistent.

The expanded model includes firms, labor supply, and working capital (also subject to the collateral constraint). The authors calibrate this model using data from industrialized countries to generate a 3% annual unconditional probability of crisis. Bianchi then presented plots from the calibrated model showing the planner’s additional precautionary saving, which is small in good times but can differ substantially from the decentralized equilibrium when a bad shock hits. In addition, prices, output, and consumption are much more stable under the social planner than in the decentralized equilibrium. Risk premia (excess returns) are also lower.

Discussion

John Cochrane compared this model to that presented by Krishnamurthy in the first session, noting that both feature collateral constraints but that the other paper focuses on constraints faced by intermediaries and frictions in financial markets, while this paper still assumes away the financial sector and places the constraint between households needing to borrow to consume and foreign financiers who want to lend to them. He questioned whether the central aspect of the financial crisis and subsequent recession was a collateral constraint impeding the smoothing of consumption via financing from abroad. Instead he suggested that an entirely different model, that of a run on short-term debt that leads to a fall in aggregate demand, might be closer to the real story.

He went on to note that it is becoming common in macroeconomic models of the crisis to simply add constraints to standard neoclassical models and explore the implications. This is problematic because real people, when faced with costly constraints, figure out ways around them: for example, they save to avoid hitting the constraint, or they issue equity or long-term bonds, or they even create new financial instruments. He remarked that the central problem in the model is not the “overborrowing” done by agents relative to the central planner, but the existence of the collateral constraint in the first place. Under this interpretation, the central planner should be working to get rid of the collateral constraint, rather than limiting borrowing.

Cochrane also added that the authors assume that the central planner is not only subject to the same collateral constraint as the agents (this is standard), but also to the pricing function that arises in the decentralized equilibrium; this is what allows them to model the planner’s choice of debt taking into account future bond prices, which agents in the decentralized economy ignore (the pecuniary externality). This theoretical aspect of the model could be better motivated, since it is not clear why the central planner faces any prices at all.

Arvind Krishnamurthy advanced the idea that the model could be re-interpreted to be about firms, rather than consumers. In this re-interpreted model the state-contingent debt tax advocated by the authors could take the form of an equity requirement; Krishnamurthy wondered what the calibrated model says the “correct” equity requirement should be.

Serguei Maliar questioned whether the first-order conditions to the planner’s problem really give the correct solution. He noted that in some models with hyperbolic consumers the planner’s problem has multiple equilibria and the “smooth” solution which the authors use does not yield the highest utility.

Lars Hansen noted that the model relies on a pecuniary externality, and is perhaps best suited to measuring the size of that externality; this is the task for which solving the planner’s problem is the appropriate tool. However, he doubted the value of the welfare and macro-prudential policy analysis that comes out of the planner’s problem in this model, echoing Cochrane’s comment that the best policy in this model is one that relaxes or eliminates the collateral constraint. A model designed to identify the proper macro-prudential policy must have a believable and easily interpreted friction.

Jaroslav Borovicka observed that average leverage ratios under the central planner are only about one percentage point lower than in the decentralized equilibrium; he questioned whether such a small change in aggregate leverage ratios could really achieve such large changes in the probability of financial crisis and welfare.

Friday, May 3, 2013

Financial Crises, Bank Risk Exposure and Government Financial Policy

Presentation

Kiyotaki and coauthors build a model to analyze how a negative shock can propagate through the balance sheets of banks and create a financial crisis. They also investigate why banks choose risky portfolios in the first place, and whether this choice may be due to anticipation of credit policy actions by the government. Finally, they use the model to ask whether credit policy can mitigate the crisis and how to reduce the moral hazard banks face in choosing their risky portfolios.

Unlike many other models of the crisis, there are no productivity shocks: the main source of uncertainty is a capital quality shock, which in the model operates as if part of the capital stock could be wiped out in any period. In addition, there is an investment adjustment cost, and labor markets are competitive. A key point is that the authors assume that investment is financed through equity, which is sold to bankers, and there are no frictions between banks and firms. Bankers in turn issue pass-through securities to households. Finally, through leverage, the banker can try to keep some of the production for consumption purposes.
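A capital quality shock is often written in this literature (for example, in Gertler-Kiyotaki-style models; the paper's exact specification may differ) as a multiplicative disturbance to the capital stock.

```latex
% Illustrative form of a capital quality shock; not necessarily the paper's notation.
K_{t+1} \;=\; \psi_{t+1}\,\bigl[(1-\delta)\,K_t + I_t\bigr],
\qquad \mathbb{E}_t[\psi_{t+1}] = 1
```

A realization with the shock below one acts as if part of the capital stock were wiped out, and because banks hold claims on that capital with leverage, the same shock destroys a proportionally larger fraction of bank net worth.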

The authors consider two policies: first, credit policy in the form of government financing, and second, macroprudential policy in the form of an implicit tax on the risky asset holdings of the bankers. With these policies in mind they proceed to compare the evolution of a low-risk (low leverage) economy and a high-risk (high leverage) economy, with and without the government policies, following a capital quality shock.

They find that credit policy alone can reduce the impact of the shock in a low-risk economy but not in a high-risk economy. The reason is that financial intermediaries take on too much risk in anticipation of the policy, eliminating the policy effect. To mitigate this, the government can also implement macroprudential policies. The authors show a better evolution of the high-risk economy when both policies are in place.

The conclusion is that credit policy by itself is not enough if government intervention is inefficient and wastes resources. By contrast, macroprudential policy provides welfare gains even when implemented on its own, and the gains are larger when both policies are applied jointly.

Discussion

Franklin Allen pointed to the importance of interrelating financial stability and macroprudential policy. He said the paper makes important contributions because it incorporates financial frictions into a state-of-the-art quantitative macro model. A key feature of the model, Allen noted, is that it can explain why banks choose high leverage. The challenge, however, is that the financial friction in the model is too simplistic; this is necessary to keep the macro model tractable. However, it is questionable whether such a simple financial friction as the one in the model is what actually goes wrong during a financial crisis.

The authors attempt to study an issue that has been at the heart of the financial crisis discussion: Is borrowing/lending lower than what the economy needs to be in good health? Allen remarked that there is very little evidence that allows us to understand the source of this low credit activity—is it a supply (banks) problem or a demand (firms) problem?

Allen went on to list a set of factors that are sources of systemic risk. He suggested we focus on two: the fall in the price of bank assets due to sharp drops in real estate prices and the fall in the price of bank assets due to increases in interest rates. Regarding real estate prices, he showed large differences in behavior across countries even when such countries have a lot in common. He argued that to understand financial crises, we first need to understand the behavior of real estate markets, as they may play a crucial role in banks’ balance sheets.

Lars Hansen wondered about the identification of shocks and their importance for the evolution of the economy. He found it hard to grasp the capital quality shock and its relevance for analyzing the crisis. Kiyotaki responded that the shock is a reduced-form shock, but that it generates some of the co-movement among macro variables in the right direction.

 

Merging Simulation and Projection Approaches to Solve High-dimensional Problems

Presentation

Serguei Maliar and his co-authors put forward a new method, the epsilon-distinguishable set (EDS) technique, for solving dynamic economic models with high dimensionality (many state variables), strong non-linearities, and inequality constraints.

The idea is to combine the EDS technique with standard projection methods to solve the problem more efficiently. The first step is a stochastic simulation that creates a grid on which to approximate the solution; the main contribution of the EDS technique is determining this grid. The second step is to solve the model on the grid using the standard projection approach.
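As a minimal sketch of the grid-selection idea as described here (a greedy selection of simulated points that are at least epsilon apart after standardization; the published algorithm includes refinements, such as normalization via principal components, that are omitted), the construction looks roughly as follows.

```python
import numpy as np

def eds_grid(points, eps):
    """Greedy epsilon-distinguishable set: keep a simulated point only if it is
    at least `eps` away (Euclidean distance, on standardized data) from every
    point already kept. Sketch of the grid-selection idea only."""
    # Standardize each state variable so distances are comparable across dimensions.
    z = (points - points.mean(axis=0)) / points.std(axis=0)
    kept = []
    for i, p in enumerate(z):
        if all(np.linalg.norm(p - z[j]) >= eps for j in kept):
            kept.append(i)
    return points[kept]

# Example: a simulated cloud from an AR(1)-style two-dimensional state process.
rng = np.random.default_rng(0)
T, rho = 5_000, 0.95
sim = np.zeros((T, 2))
for t in range(1, T):
    sim[t] = rho * sim[t - 1] + rng.normal(scale=0.1, size=2)

grid = eds_grid(sim, eps=0.5)
print(f"{T} simulated points reduced to a grid of {len(grid)} points")
```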

To illustrate how the method works, the authors solve the canonical neoclassical growth model. They show how the projection method by itself becomes intractable once there are more than three shocks or more than three state variables, whereas the simulation method is powerful enough to find the grid on which to solve the model. The main advantage is that there are large gains in computational efficiency as dimensionality grows. The drawback of simulation, however, is that the accuracy of the solution is limited by the low accuracy of Monte Carlo integration methods.

The solution is essentially to use each method where it is most efficient: the simulation method is used to find the relevant domain for the solution, while the projection method is used to compute accurate solutions on it. An additional advantage is that the combined method corrects itself in finding the grid, even if the initial simulation starts far away from the correct region for the solution.

The conclusion is that the proposed EDS method allows researchers to solve problems that were previously infeasible. It is superior to alternative methods in terms of computational efficiency, accuracy, and the handling of non-linearities, and it also provides global solutions. For example, the authors have used the method to find a global solution to a multi-country model with more than 200 state variables in about two hours on a laptop computer.

Discussion

John Birge highlighted the potential of the method to deliver effective solutions for high-dimensional problems in a computationally-efficient manner. He did have some concerns, however. One is that there is no convergence proof for the method, so we do not have a good sense of when it works. Another potential problem is the failure of fixed-point iterations, though there are alternative ways to get around it.

He also pointed to some cases in which alternative methods, such as Smolyak grids, may be better suited, and suggested some ways of improving the method, including the use of properties of the problem to improve approximations and Bayesian methods to reduce variance. He also suggested exploring the possibility of measuring and providing error bounds.

One question that arose during the presentation was whether the method could produce more accurate results if the grid were more densely populated. Serguei Maliar responded that the solution should be accurate in the relevant states, which the method can achieve.

Regarding the convergence problem, Maliar argued that some of those questions are not specific to the EDS method; they reflect a more general issue with dynamic models for which we do not yet have answers. With respect to cases in which the method may not work, he said that it is not hard to construct examples where it fails, for example if one starts with a guess that is vastly wrong. The method is therefore not fully automatic. However, there are properties of the problems that we care about, such as non-linearities and inequality constraints, that the method can handle automatically.

 

Student Poster Session

Sovereign Default Risk and Real Economic Activity
Luigi Bocola

In this paper we propose a framework to quantify the output losses of a rise in the probability of a domestic default. We consider a Real Business Cycle model with two major ingredients: a financial sector and sovereign default risk. In the model, financial intermediaries are subject to occasionally binding debt constraints that affect their ability to obtain funds from households. The government can default on its debt obligations, and we model the probability of this event as an unrestricted functional of the model’s state variables.

An unexpected increase in the probability of a sovereign default can generate deep and persistent output losses. Indeed, as the economy approaches states of the world where a government default is more likely, intermediaries that are exposed to sovereign debt face increasing difficulties in obtaining funds. The resulting credit tightening lowers output. This initial impulse propagates as depressed economic activity and increasing public debt feed back into the probability of a default, thus generating a vicious circle. We develop an algorithm for the global numerical solution of the model and we estimate its structural parameters using quarterly Italian data. We use the estimated model to assess the macroeconomic implications of the rise in Italian bond yields during the summer of 2011 and to measure the size of the government spending multipliers.

 

Macroeconomic Models for Monetary Policy Makers: a Critical Review
Winston Dou

We examine the core models employed by central banks to forecast economic activity and to analyze their monetary policies. We focus on the US Federal Reserve (FRB/US, SIGMA, EDO), the European Central Bank (AWM, NAWM, CMR), the Bank of England (BEQM, MTMM), and the Bank of Canada (ToTEM, QPM, RDXF), as well as their relatively new financial stability models (e.g. the RAMSI model). We summarize the advantages and disadvantages of each model and the common issues across all models. We believe this critical review will shed light on improving macroeconomic models for monetary policy makers.

 

Unregulated Financing I: Implicit Guarantees
Ji Huang

Implicit guarantees are to shadow banking as deposit insurance is to traditional banking. However, the enforceability problem embedded in implicit guarantees, as a financial friction, yields a credit constraint for shadow banking. By introducing this friction, we capture both continuous and discontinuous financial instability caused by shadow banking in a continuous-time general equilibrium setup. The interaction between the re-intermediation, the market risk, and the credit constraint for shadow banking forms an amplification mechanism unique to economies with shadow banking, which intensifies the continuous financial instability.

The adaptation of the baseline model obtains the discontinuous instability. The instability is discontinuous in the sense that a modest shock can cause the collapse of the shadow banking system during upturns, at which time the expansion of shadow banking exposes the economy to risks of systemic runs.

 

Optimal Macroprudential Policy
Andrea Prestipino

I study the properties of optimal financial regulation in an environment in which limited commitment of financial intermediaries makes the competitive equilibrium constrained inefficient. The paper sheds light on the properties of banks’ capital requirements that implement the second best and on the welfare gains associated with it.

 

Financial Crisis and Systematic Bank Runs in a Dynamic Model of Banking
Roberto Robatto

I present a model of financial crises and systemic bank runs in an infinite-horizon, monetary model of banking, where deposit contracts are specified in nominal terms. During a financial crisis, the model features a deflationary spiral that amplifies the crisis and can generate a multiplicity of equilibria. The fear of runs and bank failures creates a motive for precautionary money holding, reducing deposits at banks; since banks collect fewer deposits, they want to reduce their asset holdings, depressing the demand for assets: in equilibrium, the price of assets decreases to clear the asset market. The low price of assets amplifies the insolvency of the banking sector, exacerbating the crisis.

The model generates a flight to liquid and safe assets, a drop in the velocity of money, and a run on some banks, similar to some events of the 2008 US financial crisis. The model can be extended to analyze monetary policy (money injections by the Fed), fiscal policy (equity infusions in banks), the supply of loans to the real sector during a crisis, and 19th-century banking panics, and can be used for quantitative analysis.

 

The Pricing of Disaster Risk and the Macroeconomy
Emil N. Siriwardane

I analyze a model economy with rare disasters that yields a theoretically-grounded firm-level measure of disaster risk that can be extracted from option prices. Specifically, I develop an option strategy that reflects only the risk inherent in the disaster states of the economy. My disaster risk (DR) measure proxies for the ex-ante disaster risk of a firm’s stock, thereby circumventing the difficult task of using historical equity returns to estimate disaster exposure.

Empirically, I find that a monthly equity portfolio that purchases firms with high DR earns excess annualized returns of 15.75%, even after controlling for standard Fama-French, momentum, liquidity, and volatility risk-factors. Moreover, the model suggests how to use my DR measure to infer the risk-neutral probability of a consumption disaster.  Cross-sectionally, assets with higher DR demonstrate higher price sensitivity to changes in the probability of a consumption disaster. In addition, the theoretical foundation for disaster risk implies a broader relationship between disasters and the macroeconomy.  I confirm this intuition via a set of forecasting regressions in which an increase in the probability of disaster predicts an increase in unemployment and a decrease in investment.

 

Panel Discussion, Spotting Weaknesses, Building Strengths: Economic Policymakers Share Their Challenges

The world’s central banks work every day to identify and manage the risks that can wreak havoc in the global economy. Their research arms are taking that a step further, seeking tools and structures to build economic resiliency to prevent future financial crises.

Leading researchers from the International Monetary Fund, the Bank for International Settlements, the European Central Bank, and the Federal Reserve shared the questions and needs they face with scholars at the meeting of the Becker Friedman Institute’s Macro Financial Modeling Group in Chicago May 2-3, 2013. The MFM group is working to develop macroeconomic models that explain how financial sector disruptions are transmitted to the economy—and help develop the next generation of policy tools to deal with such crises.

Stephen Cecchetti, economic adviser and head of the Monetary and Economic Department at the Bank for International Settlements, noted that the models presented throughout the meeting incorporate a shock and try to understand how that shock propagates throughout the economy. “We now know that there is much more to it than that,” he said. “We need models that help us understand what it is that creates economic fragility. The question is, what metrics and tools do we need to reduce the frequency and severity of crises?”

Historically there was a “division of labor” across policy realms, Cecchetti said. Regulatory policy focused mostly on stemming contagion from the failure of individual institutions. Fiscal policy focused on long-term economic growth and the allocation of government resources. Monetary policy aimed to achieve short-term stabilization.

But the world was never that simple, and each of these realms influences the others. “It’s all connected,” he said. “How can we integrate all these into a new policy framework that captures all these interactions?”

At the Federal Reserve Board, financial regulatory policy is focused on identifying vulnerabilities across the economy. “The focus is not so much shocks because those are hard to predict, varied, and inevitable. The concern is on vulnerabilities that amplify the shocks,” said Nellie Liang, director of the Fed’s Office of Financial Stability Policy and Research.

Rather than identifying bubbles, the policy focus is on the spread of losses if a bubble bursts. “That raises the question of what tools we use. What’s the purpose? To stop bubbles from building, or to build resiliency?” Liang noted.

Laura Kodres, Division Chief for the Global Financial Stability Division at the International Monetary Fund, also noted that policy research was now focusing beyond vulnerabilities to the next stage: “What are we going to do about them? Which models are best for early warning? Which are best for identifying when we are in a crisis? Which are best for clean-up?”

Improving resilience across all sectors of the economy is one key perspective for macroprudential policy, said Frank Smets, Director General of the Directorate General Research at the European Central Bank. The second important perspective is managing the procyclical nature of finance. The challenge, he said, is linking the two.

“Why and when do credit booms go hand in hand with greater leverage, liquidity mismatch, and interconnectedness? Why and under what circumstances are credit booms associated with more risk-taking?” he asked. “Once we understand these questions, we will have a better idea of how to manage it and lean against the cycle.”

Smets said the ECB has created a research network, called MaRs, to support financial stability analysis and policies. One stream of its efforts mirrors the MFM project, working to incorporate widespread financial instability into macroeconomic models. The other streams of work focus on indicators that provide early warnings of financial threats, and on the potential for contagion across markets and countries.

Kodres pointed out that early-warning indicators are not really early. Most are price-based, and thus reveal a problem when it’s already a problem. Policymakers typically need weeks to evaluate a situation and respond, so signals that appear concurrent with the issue are not all that helpful.

Liang stressed that policies should be preemptive, but also targeted and limited. “The whole point of macroprudential policy is to raise the cost of financial intermediation. You have to decide how much you are willing to raise it when times are good and shocks are low,” she said. “You have to make some judgments about what’s acceptable, and how costly should we make this. What are we willing to pay now for the potential of reducing crises in the future?”

Access to data across countries and institutions is a major challenge for policymakers; another is standardizing and protecting confidentiality of the data that is available. Smets said the ECB is working toward a single supervisory mechanism, bringing macro- and microprudential policy for euro area nations under one roof. This will help create information synergies and ease the data burdens.

The policymakers offered a few suggestions for academic researchers developing models:

  • Cecchetti urged scholars to be more globally focused and build models that look beyond U.S. experience.
  • Kodres called for incorporating policymakers’ preferences into models. “There are very different preferences around the globe for when a policymaker decides to act. Alan Greenspan didn’t want to take the punch bowl away too early. The Australians have a much more hands-on approach to controlling bank risk taking. None of the models [presented at the meeting] had the preferences of policymakers incorporated,” she said.
  • More work is needed to calibrate credit growth with asset price bubbles and to incorporate the liability structure of financial intermediaries in macro models.
  • We need a better understanding of debt. “Why is debt such a prevalent form of financing?” Cecchetti asked. “As John Cochrane pointed out today, debt is heavily subsidized. But if we reduced the subsidy, would the level of debt really fall?”
  • We also need to understand how debt and default ripple through the economy. Financial frictions in most models are smooth. Default is not; it’s a shock and we don’t understand how default affects behavior.

The panelists noted that they’d learned from the academic research presented throughout the meeting, and looked forward to continued engagement and collaboration. “Academics are doing us a service by trying to think through issues: what are the statistics we need; what models could we use to analyze counterparty exposures; what are the elasticities we need to know?” Kodres said. She encouraged graduate students to work on these problems, and reminded scholars, “We give away our data for free.”

Cecchetti said he hoped his comments gave the MFM researchers a sense of the questions that are on the minds of policymakers today. “It’s valuable to have people from macroeconomics and finance coming together to figure out answers to these questions. This is a terrific project, and I hope it continues. You are all very much on the right track.”