Algorithms guide an increasingly large number of high-stakes decisions, including criminal risk assessment, resume screening, and medical testing. While such data-based decision-making may appear unbiased, there is growing concern that it can entrench or worsen discrimination against legally protected groups. In pretrial release decisions, for example, a risk assessment tool may be viewed as racially discriminatory if it recommends release before trial for white defendants at a higher rate than for Black defendants with equal risk of pretrial criminal misconduct.

How is it that discrimination can occur through logical, unfeeling algorithms? The answer lies in the data that feed the algorithms. Continuing with the pretrial release example, misconduct potential is only observed among the defendants whom a judge chooses to release before trial. Such selection can not only bias algorithmic predictions but also complicate the measurement of algorithmic discrimination, since the treatment of white and Black defendants cannot be compared conditional on an unobserved qualification.
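To see why this selection matters, consider a minimal simulation sketch with entirely hypothetical numbers (not the paper's data): two groups share the same distribution of latent misconduct risk, but judges release one group up to a higher risk cutoff, so the misconduct rates observed among released defendants differ even though the groups are equally risky.

```python
import numpy as np

# Hypothetical illustration of the selection problem: misconduct is only
# observed for defendants a judge releases, so naive comparisons of observed
# outcomes can mislead even when underlying risk is identical across groups.
rng = np.random.default_rng(1)
n = 100_000

# Two groups with the *same* distribution of latent misconduct risk.
risk_a = rng.uniform(0, 1, n)
risk_b = rng.uniform(0, 1, n)

# Suppose judges release group A up to a higher risk cutoff than group B.
released_a = risk_a < 0.8
released_b = risk_b < 0.6

# Misconduct would occur with probability equal to each defendant's risk,
# but it is only observed among the released.
misconduct_a = rng.random(n) < risk_a
misconduct_b = rng.random(n) < risk_b

# Observed misconduct rates among the released differ purely because of
# selection, not because the groups differ in underlying risk.
print("observed misconduct, group A:", misconduct_a[released_a].mean())  # ~0.40
print("observed misconduct, group B:", misconduct_b[released_b].mean())  # ~0.30
print("true average risk, both groups:", risk_a.mean(), risk_b.mean())   # ~0.50
```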

This paper develops new tools to overcome this selection challenge and measure algorithmic discrimination in New York City (NYC), home to one of the largest pretrial systems in the country. The method builds on techniques previously developed by the authors to measure racial discrimination in actual bail judge decisions, and it leverages randomness in the assignment of judges to white and Black defendants. Applying these methods, the authors find that a sophisticated machine learning algorithm (which does not train directly on defendant race or ethnicity) recommends the release of white defendants at a significantly higher rate than Black defendants with identical pretrial misconduct potential.
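The following stylized sketch, again with hypothetical numbers and not the authors' estimator, illustrates the logic of exploiting quasi-random judge assignment: because defendants assigned to strict and lenient judges are comparable, the difference in observed misconduct across judges, scaled by the difference in release rates, recovers the misconduct risk of the marginal defendants whom only the lenient judge releases.

```python
import numpy as np

# Stylized sketch of how random judge assignment can recover unobserved
# misconduct potential; cutoffs and risk distribution are assumptions.
rng = np.random.default_rng(0)
n = 200_000

# Latent misconduct potential, unobserved by the analyst.
risk = rng.uniform(0, 1, n)

# Defendants are randomly assigned to a strict or a lenient judge.
lenient_judge = rng.random(n) < 0.5

# Judges release defendants below a risk cutoff; lenient judges use a higher cutoff.
cutoff = np.where(lenient_judge, 0.8, 0.6)
released = risk < cutoff

# Misconduct is only observed for released defendants.
misconduct = released & (rng.random(n) < risk)

# Wald-style ratio: the change in observed misconduct across judges, divided by
# the change in release rates, equals the misconduct rate of the *marginal*
# defendants (released by lenient but not strict judges).
d_misconduct = misconduct[lenient_judge].mean() - misconduct[~lenient_judge].mean()
d_release = released[lenient_judge].mean() - released[~lenient_judge].mean()
print("estimated marginal misconduct rate:", d_misconduct / d_release)        # ~0.70
print("true marginal misconduct rate:", risk[(risk >= 0.6) & (risk < 0.8)].mean())
```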

Specifically, when calibrated to the average NYC release rate of 73 percent, the algorithm recommends an 8-percentage-point (11 percent) higher release rate for white defendants than for equally qualified Black defendants. This unwarranted disparity explains 77 percent of the observed racial disparity in release recommendations, grows as the algorithm becomes more lenient, and is driven by discrimination among individuals who would engage in pretrial misconduct if released.
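For readers checking the magnitudes, the relative figure follows from the absolute gap and the calibration rate reported in the brief, as in this quick arithmetic check:

```python
# Quick check of the reported magnitudes (figures taken from the brief).
base_release_rate = 0.73   # average NYC release rate the algorithm is calibrated to
unwarranted_gap_pp = 0.08  # white-Black gap in recommended release rates

relative_gap = unwarranted_gap_pp / base_release_rate
print(f"relative gap: {relative_gap:.0%}")  # ~11 percent
```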
