When we read about how government agencies spend their money, it often seems absurd that entities with millions and billions at their fingertips can’t figure out how much it costs to get their work done. But on closer inspection, it is clear that these organizations try to get it right; it’s just that it might not be possible.
“Regulators like the EPA use cost-benefit analysis, but they don’t do a very good job,” noted Eric Posner of the University of Chicago Law School. “They say, ‘here are a bunch of benefits that this regulation will produce; we can quantify one or two, but we can’t quantify the rest of them. Nonetheless, we think the other ones are quite significant, and therefore we are going to go through with the regulation.’”
In cases where benefits haven’t been quantified, the agency will often say that the regulation has passed the cost-benefit analysis (CBA). Unfortunately, without proper quantification, it may turn out that the benefit is too weak, or that the agency is overregulating and quashing any benefits.
At “Federal Agency Decision Making Under Deep Uncertainty,” Posner, the Kirkland and Ellis Distinguished Service Professor of Law, and Jonathan Masur discussed their research on how agencies deal with cost-benefit analyses and presented a common-sense approach for better analysis in the future.
As a rule, agencies are very rigorous about costs, but attaching a price to benefits is far more complicated. According to Posner and Masur, agencies offer one of two explanations for why they don’t quantify gains. The first is that agencies usually have no idea what the causal effect of the regulation will be or the monetary value of the benefit. For example, in the case of school lunches, we know it is healthier for children to eat apples rather than potato chips, but we don’t know how much better or how to value that.
The other explanation is that quantification is simply not possible in principle. Such is the case with the expanded definition of disability, which created benefits for people of various types and was passed to confer dignity. But dignity is not something that can be measured.
“The solution to such cases is that we should think of CBA in Bayesian terms or in terms of subjective probability. Rather than just throwing up their hands when not much empirical evidence of the effect on human well-being is available, they should just guess,” Posner explained.
He and Masur argue that agencies should simply write down their best educated guesses and admit that these are estimates rather than genuine cost-benefit analyses. This will allow other people to give thoughts and opinions on what goes into the CBA. They also recommend that agencies revisit their guesses and do studies with hindsight to update the information they are working with. “If the regulator has to guess, with limited information, they should provide retroactive analysis of the effect of the regulation,” Posner concluded.
In response to their research and presentation, Dan Farber, Sho Sato Professor of Law at the University of California, Berkeley, told the audience that Posner and Masur inspired him to find out how expensive using CBA is. He discovered a 1997 CBO report that puts the cost of drafting a regulatory impact analysis (RIA) at $720,000 in 2015 dollars. He also found that at a 5 percent discount rate, the average social cost of a one-year delay is $200 million, while most RIAs take three years to complete. “But that does not include overlaps with other reviews and informal interactions; a half year is plausible, and formal delay seems to be 90 days. On the other hand, it could be a year or it could be three months.” Overall, Farber said, costs of the CBA process are “$100 million, not to mention lost social benefits.”
Farber added that these delays also create opportunity costs, because money spent on one thing can’t be spent on something else. If the process results in even one fewer regulation per year across the government, he said, it could add up to $4 billion in social costs. Despite this expenditure, there is little evidence that the CBA process has reduced actual regulatory costs. “But that doesn’t mean it couldn’t be if administered properly,” Farber added.
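The discounting logic behind these delay-cost figures can be sketched in a few lines. The function and numbers below are illustrative assumptions for readers unfamiliar with discounting, not Farber’s actual calculation: delaying a rule that would deliver a steady annual social benefit forgoes roughly the present value of the benefits during the delay.

```python
import math

def forgone_benefit(annual_benefit: float, delay_years: float, rate: float) -> float:
    """Present value of benefits lost while a rule is delayed.

    Uses a continuous-discounting approximation:
    B * (1 - e^(-r*d)) / r, where B is the annual benefit,
    d the delay in years, and r the discount rate.
    """
    return annual_benefit * (1 - math.exp(-rate * delay_years)) / rate

# Assumed inputs, purely for illustration: a rule worth $200M per year,
# delayed three years, discounted at 5 percent.
lost = forgone_benefit(200e6, 3.0, 0.05)
print(f"Forgone social benefit: ${lost / 1e6:.0f} million")
```

Under these assumed inputs the forgone benefit is on the order of half a billion dollars, which is why even modest procedural delays can dominate the direct drafting cost of an analysis.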
Victor Gillinsky, a former commissioner of the Nuclear Regulatory Commission, spoke next and clearly expressed his sympathy for Posner and Masur’s predilection for quantification. He also stated his belief that cost-benefit analyses are important because they provide a framework for everyone to participate with a set of agreed-upon rules and effective opportunities for criticism of agency decisions.
He illustrated his point with thoughts about large nuclear accidents. The NRC, he noted, makes decisions under extreme uncertainty and embraces CBA as a way to rationalize its actions. But the most important nuclear accidents are the large events that are generally regarded as highly improbable. Consequently, he noted, regulators come up with probabilities using elaborate computer programs and make many assumptions about systems. Moreover, these calculations leave out things like sabotage because they are difficult to calculate.
“The thing I find most troubling is the metric of probability vs. consequences. It is one thing to use this when you have a lot of statistics, like at a casino. But when you are looking at a situation that could have devastating results, most people take a risk averse approach,” Gillinsky explained. “I find it interesting that the nuclear suppliers like GE will not supply equipment to any plant in the world unless they have total immunity to accident consequences.”
All the panelists agreed that cost-benefit analyses are valuable and should be used, but when numbers are soft and can be fudged to get a certain result, both the agency and the public need to be aware of that. Still, if agencies take the time to revisit their numbers and adjust them as regulation is enacted, regulation may become more efficient over time.
-by Robin Mordfin