BFI News | Oct 12, 2016

Computation and the Classroom

Economists reflect on how best to prepare their students for a computationally driven future.

Economics students likely hear it a thousand times from as many voices in the economics profession, from Alvin Roth and Susan Athey to Felix Kübler and Richard Evans: learn to code, and the earlier the better. There’s no question that computation could fundamentally change the way many economic questions are considered in the decades to come, and students would be wise to learn how to write code that takes advantage of that changing landscape. But what, specifically, will change about what they learn and how they learn it? We asked economists how they’re preparing their students to thrive in the age of economic computing.

Economists may be a little behind the other sciences when it comes to working computationally. But that’s an opportunity, not a problem.

In broad strokes, understanding how to harness computation to solve models that can’t be solved on paper allows for models that are more complex, capturing more heterogeneous parameters. “Right now, most of the work is advancing our data capabilities. It’s allowing us to chew through bigger data sets, manipulate better data sets,” says Richard Evans, a fellow at the institute.

Because economics as a discipline has been more careful in adopting computational methods, many see an opportunity to benefit from the experience of other fields to see what works and what doesn’t. “It’s useful to keep your eye on what computer scientists are doing,” says Ben Moll, an assistant professor of economics at Princeton University. Economics has a similar relationship with fields like psychology, says Moll—new approaches for considering economic questions are often borrowed from experts in other fields.

Greg Kaplan, a professor in the UChicago Economics Department, puts it in blunter terms: Computer science, like all fields, has a different “weapons cabinet” for attacking problems. He advises his students to borrow from different professions, but to stay focused on the economic problem at hand, “getting your hands dirty with new methods along the way.”

You don’t have to know how the hardware works, but you will need to know how it’s changing.

“Computation” covers a broad spectrum of tools and the methods that use them, ranging from powerful multi-core personal computers to entire clusters of servers in the cloud. The next generation of economists won’t necessarily need to design next-generation computer hardware, but they will have to follow trends in hardware development more closely in order to stay on the leading edge of what their models can do. “We’re done with increases in clock speed,” says Felix Kübler, Professor of Financial Economics at the University of Zurich. As desktops approach 50 or more cores, everyone, even those building models for a single computer, will need to know which methods work better in parallel—that is, split across many less-powerful processors—than others. “Parallel computing will seriously change the way we think about solving certain problems.” Tracing the thinking behind those trends—why certain methods work in parallel and others do not, for instance—will be a key skill for young economists to embrace.
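To make that distinction concrete, here is a minimal sketch, in Python with a toy stand-in for a real model, of why some computations parallelize cleanly and others do not: solving the same model at many independent parameter values splits neatly across cores, while an iterative solve, where each step depends on the last, cannot.

```python
# Toy illustration (not from any real project): solving a model at many
# parameter values is "embarrassingly parallel" because each solve is
# independent of the others.
from multiprocessing import Pool

def solve_model(beta):
    """Stand-in for an expensive model solve at discount factor beta."""
    v = 0.0
    for _ in range(1000):       # this inner loop is inherently serial:
        v = 1.0 + beta * v      # each iterate depends on the previous one
    return v                    # converges to 1 / (1 - beta)

if __name__ == "__main__":
    betas = [0.90, 0.95, 0.96, 0.99]
    with Pool() as pool:                       # one worker per core by default
        values = pool.map(solve_model, betas)  # the four solves run concurrently
    print(values)
```

The parameter sweep scales with however many cores the machine offers, while the loop inside `solve_model` gains nothing from extra cores — which is exactly Kübler’s point about needing to know which methods parallelize.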

It means there’s a steeper learning curve ahead, because computational tools don’t always come with instructions written for economists. MATLAB may give way to Python. Students may have to spend some time learning the basics of Git and Unix system commands. This kind of overhead gets harder to absorb as you get older, says Kaplan, but the investment now will be well worth it.
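For readers who have never touched version control, here is a minimal, hypothetical session showing what “the basics of Git and Unix system commands” look like in practice. File names like `solve.py` and the committer identity are illustrative, not from any real project.

```shell
mkdir model-code && cd model-code    # a fresh project directory
git init                             # start tracking it with Git
printf 'beta = 0.96\n' > solve.py    # a stand-in for model code
echo 'results/' > .gitignore         # keep generated output untracked
git add solve.py .gitignore          # stage the files...
git -c user.name="Student" -c user.email="student@example.com" \
    commit -m "Initial model solver" # ...and record a snapshot
grep -n 'beta' solve.py              # everyday Unix: search the code
```

Each commit is a recoverable snapshot, so experiments on a model can be rolled back cleanly rather than managed through `solver_final_v2`-style file copies.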

That said, economists in training have other basics to learn, and some of the most important work will come in developing friendlier tools to harness computation. Karl Schmedders, Professor of Quantitative Business Administration at the University of Zurich, says it will be important to make certain tasks easier for economists in the future. How can we reduce setup costs, making it easier to jump from running a model on a laptop to scaling it across a cloud-based cluster?

It’s possible that entire careers could be made on verifying that the approximations within computational models are good ones.

The Becker Friedman Institute’s recent machine learning conference focused on probing the underlying assumptions within one kind of computational model, and economists will continue to provide immense value by testing the statistical rigor of similar models as they arise from different disciplines. “This is a valid research question in and of itself,” says Kübler. “I’ve spent much of my [own] career on it.”

Kübler says that the key to getting economists to adopt computational techniques will be the work of vetting those methods in an economic context. Those concrete applications will require a marriage of traditional economic theory work and the know-how to understand why computational models are specified a certain way. Someone needs to be asking “what is the essence of this model?”, a question Schmedders thinks economists might be uniquely patient enough to ask, to their own benefit. “We can adapt [CS] methods to enhance our own.”

“There’s this beautiful back and forth between theory and computation,” says Evans. “One thing that keeps coming up, both in my own work and anecdotally from these other economists that I interact with, is that computation can actually help us create better theoretical models or learn what things we can prove analytically about our theory models, as well as the theory indicating how to compute the solutions to our models.”

Sharing code nicely with others could be more important in the future.

Posting your source code online carries some risk in economics, since the incentives for putting all the code powering your model online are unclear: it won’t help you get published, cited, hired, or tenured, according to Kübler and Schmedders. But by publishing source code on platforms like GitHub, economists force themselves to write code that is more readable to others, something with long-term potential for increasing collaboration on common problems and speeding the adoption of new methods.

“We’re incredibly inefficient as a profession,” says Schmedders. “Too much reinventing the wheel.”

Ben Moll acknowledges the risks of open-sourcing your work, but maintains that the practice has real value for his colleagues and students. “I think it’s a good idea to share everything you can,” he says. You get little credit for it, but, he says, working to make methods user-friendly and sharing them is the only way to get people to use them.

Evans sees the issue of credit for work changing as economists become more comfortable with collaboration workflows. “As an academic, I get a lot of credit for the papers that I publish. I think it would be really valuable if we also got credit for the code that we contribute to and improve, and GitHub is a really strong tool for doing that,” says Evans.

Moreover, if trends in research publications continue, publishing your code for replicability could become increasingly important to getting work published. That’s a compelling reason for economics students to consider a more open source ethos when it comes to the code running their models—just to keep everyone honest.

“In the worst-case scenario, you don’t want people faking their results so you want to be able to replicate it,” says Evans. “But in the best-case scenario, we make honest mistakes. There can be errors in our code that we don’t know about and we do our best through the referee process to check everything.”

Likewise, thinking about the best way to communicate and visualize your work beyond how it shows up in the paper will be increasingly important.

Economics students aren’t expected to become front-end web developers, but, particularly for those hoping to make a policy impact with their work, insights from a paper will need to be visualized in ways that can be grasped by a non-academic audience. That will mean collaborating with communicators and user-focused web developers to translate model output into new forms of data visualization.

Greg Kaplan has a simple suggestion—economics students should take a journalism class or two. Coding or writing, it’s all in service of communicating research effectively.

Evans turns that notion beyond academia to civic duty, courtesy of his work with the Open Source Policy Center, an American Enterprise Institute project focused on “making policy analysis more transparent, trustworthy, and collaborative by supporting open-source projects that build cutting-edge economic models.”

“One thing we’re working on is not only making our academic research open-source but creating open-source economic models for evaluating fiscal policy. People can look at all the assumptions. People can take those models and change the assumptions,” says Evans. The project he’s focused on is called TaxBrain. “We’re building web applications so that non-experts can manipulate these models and run their own tests of what would be the effect of a certain policy.”

Evans’ aspirations for the democratizing effects of open-source computing in economics are fitting, since the broader goal of integrating computation into economic study carries a similar unifying mission. “We’re trying to bring people from different fields together,” says Evans. “In my work, it’s mathematicians and computer scientists and policy analysts and economists but that circle can get a lot more broad.”

—Mark Riechers