Key Takeaways
  • Families with college-bound children often go to great lengths to ensure admission into a “good” school.
  • The authors parse a unique and rich dataset to assess the contributions that schools make to student outcomes.
  • In the main, differences in average outcomes across colleges are largely driven by differences in incoming students, rather than large causal impacts of the colleges themselves—at least across the actual choices students face within their admission sets.
  • However, institutional inputs like instructional expenditures per student, the faculty-student ratio, and the number of tenure-track faculty are better predictors than selectivity of which colleges durably boost their students’ outcomes.

In their paper, “The Returns to College(s): Relative Value-Added and Match Effects in Higher Education,” Jack Mountjoy of UChicago’s Booth School of Business and Brent R. Hickman of Washington University’s Olin Business School parse a unique and rich dataset to address a central question: do colleges themselves improve student outcomes, or do their average outcomes simply reflect the students they enroll? Their answer is that while most of the differences in average outcomes across schools are driven by the pre-existing characteristics of their students, some schools really do boost student outcomes more than others. Further, the institutional features that predict such value-added, like instructional expenditures per student, can inform policymakers charged with setting budgets and establishing goals for higher education.

The cream will rise

Mountjoy and Hickman confront the problem of comparing different colleges—which attract student bodies with widely different academic preparation and socioeconomic backgrounds—by employing data from detailed administrative registries that span the state of Texas. In so doing, the authors address three important challenges: 1) the need for large amounts of data to provide precise college-level estimates; 2) the need for individual-linked data measuring key student outcomes like degree completion, major choices, and earnings; and 3) the need to address self-sorting by students in terms of their college choices, as well as admission decisions by colleges. 

Figure 1 • Baseline Value-Added Estimates and Comparison to Other Common Approaches
Notes: Each set of point estimates and robust 95% confidence intervals comes from regressions of individual student outcomes on college treatment indicators, omitting UT-Austin as the reference treatment (signified by the vertical line at zero). UT-Austin’s mean BA completion rate is 82%, and its mean earnings are $55,975. All specifications control for cohort fixed effects. The Raw Means specification controls for nothing else. The Typical Controls specification adds controls for demographics (gender, race, and free/reduced-price lunch status), high school academic preparation (10th grade test scores, advanced coursework, and an indicator for the top high school GPA decile), and behavioral measures of non-cognitive skills (high school attendance, disciplinary infractions, and an indicator for ever being at risk of dropping out). The Baseline Specification controls solely for college admission portfolio fixed effects (and cohort fixed effects). See Appendix Tables B.1 and B.2 for the corresponding numerical estimates.
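
In equation form (our notation, not taken from the paper), the Baseline Specification described in these notes is approximately

$$ Y_{it} = \sum_{j \neq \text{UT-Austin}} \beta_j \, \mathbf{1}\{\text{college}(i) = j\} + \gamma_{p(i)} + \delta_{t} + \varepsilon_{it}, $$

where $Y_{it}$ is a student outcome (BA completion or earnings), the $\beta_j$ are relative value-added estimates against the UT-Austin benchmark, $\gamma_{p(i)}$ is a fixed effect for student $i$’s application-and-admission portfolio, and $\delta_t$ is a cohort fixed effect.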

The authors employ a unique feature of the Texas administrative data—application and admission records at all Texas public universities—to compare the outcomes of students who applied to, and were admitted by, the same set of colleges. The insight here is that a student’s decisions about where to apply, and those colleges’ admission decisions, reveal important information about a student’s abilities, ambitions, and other unobserved advantages that may not be fully captured by more typically observed variables like test scores and demographics. If a student’s portfolio of applications and admissions is a valid proxy for these unobservable factors, such that the remaining variation in student potential is uncorrelated with where students ultimately enroll, then comparing student outcomes within application and admission portfolios identifies the relative value-added of attending different colleges. Indeed, the authors find that students admitted to the same set of schools, but who end up making different enrollment choices, are strikingly similar in terms of their high school academic preparation, demographics, family income, and other predictors of longer-run outcomes, facilitating apples-to-apples comparisons of students who attend different colleges.
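
To make this design concrete, below is a minimal simulation sketch of a within-portfolio value-added regression in Python with statsmodels. The data-generating process, variable names, and magnitudes are our own inventions for illustration; the authors’ actual estimation uses restricted Texas administrative records and a richer specification.

```python
# Toy within-portfolio value-added regression (our own sketch, not the
# authors' code): simulate students whose college choice is correlated
# with their admission portfolio, then recover relative college effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
portfolio = rng.integers(0, 40, n)   # stand-in for application/admission sets
# stronger portfolios tend to enroll at higher value-added colleges (selection)
college = np.clip(portfolio // 5 + rng.integers(-2, 3, n), 0, 7)
true_va = np.linspace(0.0, 0.07, 8)  # small true causal increments

df = pd.DataFrame({
    "portfolio": portfolio,
    "cohort": rng.integers(2005, 2010, n),
    "college": college,
    # outcome loads heavily on the portfolio (student quality) and only
    # modestly on the college actually attended
    "ba_complete": 0.3 + 0.01 * portfolio + true_va[college]
                   + rng.normal(0, 0.2, n),
})

# Baseline-style specification: college indicators plus portfolio and
# cohort fixed effects, with robust standard errors
fit = smf.ols("ba_complete ~ C(college) + C(portfolio) + C(cohort)",
              data=df).fit(cov_type="HC1")
print(fit.params.filter(like="college"))  # relative value-added vs. college 0
```

Because enrollment is correlated with the portfolio in this toy data, raw college means overstate the causal gaps; the portfolio fixed effects absorb that selection, so the college coefficients land near the small true increments, which is the same qualitative lesson as Figure 1.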

With that summary of the authors’ methodology, we now turn to four of their key findings:

  • In the main, differences in average outcomes across colleges are largely driven by differences in incoming students, rather than large causal impacts of the colleges themselves—at least across the actual choices students face within their admission sets.
  • Even so, causal differences in value-added across colleges are not zero, and some schools meaningfully outperform others. What accounts for these differences? Selectivity, as measured by the average SAT score of a college’s incoming students, is only a weak predictor of value-added, with a fleeting earnings premium to attending a more selective college that fades out after a few years in the labor market. Other institutional inputs under the control of colleges, like instructional expenditures per student, the faculty-student ratio, and the number of faculty on the tenure track, do better at predicting which colleges durably boost the outcomes of their students (a toy illustration of this check follows this list).
  • These causal differences among schools show up in several student outcomes. Ten years after entering college, students who attend a college with one standard deviation higher value-added earn, on average, about $1,300 more per year, roughly a 3% premium. These earnings-boosting colleges also tend to be the schools that boost BA completion relative to their competitors. A similar pattern holds for schools that produce more graduates in STEM fields, degrees that correlate with higher earnings.
  • Because of their rich dataset, the authors are also able to explore whether these causal differences across colleges vary across students from different backgrounds. Stratifying their sample by gender, race, family income, and high school ability measures reveals little difference across groups in the overall pattern of value-added across colleges.  
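
As a hypothetical companion to the simulation above (again our own invented numbers, not the authors’ data), the pattern in the second bullet corresponds to projecting estimated value-added onto institutional inputs:

```python
# Hypothetical second step (invented numbers, for illustration only):
# project estimated college value-added onto institutional inputs.
import pandas as pd
import statsmodels.formula.api as smf

colleges = pd.DataFrame({
    "value_added":     [0.00, 0.02, 0.05, 0.01, 0.04, 0.03, 0.06, 0.02],
    "instr_spend":     [9.0, 10.5, 13.0, 9.5, 12.0, 11.0, 14.0, 10.0],  # $000s per student
    "fac_per_student": [0.05, 0.06, 0.08, 0.05, 0.07, 0.06, 0.09, 0.06],
    "avg_sat":         [1100, 1150, 1230, 1120, 1180, 1160, 1250, 1140],
})

# The paper's qualitative pattern: instructional spending and faculty
# inputs predict value-added better than selectivity (average SAT) does.
fit = smf.ols("value_added ~ instr_spend + fac_per_student + avg_sat",
              data=colleges).fit()
print(fit.params)
```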

Conclusion

Student outcomes vary enormously across universities in the United States. These differences attenuate dramatically, however, when comparing the outcomes of students who applied to and gained admission to the same set of schools. This suggests that value-added differences across colleges play a smaller role than student sorting in producing the large outcome disparities highlighted in college guides, the popular press, and even in state funding formulas for higher education. 

CLOSING TAKEAWAY
Value-added differences across colleges play a smaller role than student sorting in producing the large outcome disparities highlighted in college guides, the popular press, and even in state funding formulas for higher education.

However, that does not mean that colleges do not—and cannot—distinguish themselves in terms of causal effectiveness. Importantly, Mountjoy and Hickman find that colleges with higher per-student instructional spending, lower student-faculty ratios, and more tenure-track faculty generate higher incomes for their students, on average. Those schools also retain and graduate a larger percentage of their students, including in challenging but lucrative STEM fields.

While the authors note that their analysis sample is ultimately limited to public universities in Texas, their incorporation of several large and unique administrative datasets reveals several notable patterns about college value-added across a wide diversity of institutions. Their finding that causal value-added can vary meaningfully across colleges suggests that policymakers who appropriate money to postsecondary institutions should pay attention to what high-value-added schools are doing, like spending more on per-student instruction and employing a greater share of full-time, tenure-track faculty. Measures of college “quality” like selectivity or raw graduation rates, by contrast, mostly reflect the quality of students a college happens to enroll, rather than institutional effectiveness at actually boosting those students’ outcomes.
