Key Takeaways
  • Families with college-bound children often go to great lengths to ensure admission into a “good” school.
  • The authors parse a unique and rich dataset to assess the contributions that schools make to student outcomes.
  • Differences in average outcomes across colleges are largely driven by differences in incoming students rather than by large causal impacts of the colleges themselves, at least across the actual choices students face within their admission sets.
  • However, institutional inputs like instructional expenditures per student, the faculty-student ratio, and the number of tenure-track faculty are stronger predictors than selectivity of which colleges durably boost their students’ outcomes.

In their paper, “The Returns to College(s): Relative Value-Added and Match Effects in Higher Education,” Jack Mountjoy of UChicago’s Booth School of Business and Brent R. Hickman of Washington University’s Olin School of Business parse a unique and rich dataset to ask how much colleges themselves contribute to student outcomes. Their answer is that while the vast majority of the variation in schools’ average outcomes is determined by the pre-existing characteristics of their students, some schools really do boost student outcomes more than others. Further, the institutional features that predict such value-added, like instructional expenditures per student, can inform policymakers charged with setting budgets and establishing goals for higher education.

The cream will rise

Mountjoy and Hickman confront the problem of comparing different colleges—which attract student bodies with widely different academic preparation and socioeconomic backgrounds—by employing data from detailed administrative registries that span the state of Texas. In so doing, the authors address three important challenges: 1) the need for large amounts of data to provide precise college-level estimates; 2) the need for individual-linked data measuring key student outcomes like degree completion, major choices, and earnings; and 3) the need to address self-sorting by students in terms of their college choices, as well as admission decisions by colleges. 

Figure 1 • Baseline Value-Added Estimates and Comparison to Other Common Approaches
Notes: Each set of point estimates and robust 95% confidence intervals comes from regressions of individual student outcomes on college treatment indicators, omitting UT-Austin as the reference treatment (signified by the vertical line at zero). UT-Austin’s outcome means are 82% for BA completion and $55,975 for earnings. All specifications control for cohort fixed effects. The Raw Means specification controls for nothing else. The Typical Controls specification adds controls for demographics (gender, race, FRPL), high school academic preparation (10th grade test scores, advanced coursework, and a top high school GPA decile indicator), and behavioral measures of non-cognitive skills (high school attendance, disciplinary infractions, and an indicator for ever being at risk of dropping out). The Baseline specification controls solely for college admission portfolio fixed effects (and cohort fixed effects). See Appendix Tables B.1 and B.2 for the corresponding numerical estimates.

The authors employ a unique feature of the Texas administrative data—application and admission records at all Texas public universities—to compare the outcomes of students who applied to, and are admitted by, the same set of colleges. The insight here is that a student’s decisions about where to apply, and those colleges’ admittance decisions, reveal important information about a student’s abilities, ambitions, and other unobserved advantages that may not be fully captured by more typically observed variables like test scores and demographics. If a student’s portfolio of applications and admissions is a valid proxy for these unobservable factors, such that the remaining variation in student potential is uncorrelated with where students ultimately enroll, then comparing student outcomes within application and admission portfolios identifies the relative value-added of attending different colleges. Indeed, the authors find that students admitted to the same set of schools, but who end up making different enrollment choices, are strikingly similar in terms of their high school academic preparation, demographics, family income, and other predictors of longer-run outcomes, facilitating apples-to-apples comparisons of students who attend different colleges.
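The identification logic above can be made concrete with a small sketch. The example below uses entirely invented data, college labels, and function names (it is not the authors’ code or estimator): students are compared only within the same admission portfolio, so the raw gap between colleges’ average earnings, which is inflated by student sorting, shrinks to the within-portfolio gap.

```python
# Minimal sketch of within-admission-portfolio comparisons.
# All data, college names, and numbers below are hypothetical.
from collections import defaultdict

# (admission_portfolio, enrolled_college, earnings)
students = [
    # Higher-ability students: admitted to both college A and college B
    ("AB", "A", 60000), ("AB", "A", 62000),
    ("AB", "B", 59000), ("AB", "B", 61000),
    # Lower-ability students: admitted only to college B
    ("B",  "B", 40000), ("B",  "B", 42000),
]

def raw_mean_gap(data, c1, c2):
    """Naive comparison: difference in unconditional college mean earnings."""
    def mean(college):
        xs = [earn for _, col, earn in data if col == college]
        return sum(xs) / len(xs)
    return mean(c1) - mean(c2)

def within_portfolio_gap(data, c1, c2):
    """Value-added-style comparison: average the c1-minus-c2 earnings gap
    across portfolios that contain enrollees of both colleges."""
    by_portfolio = defaultdict(lambda: defaultdict(list))
    for portfolio, college, earnings in data:
        by_portfolio[portfolio][college].append(earnings)
    gaps = []
    for colleges in by_portfolio.values():
        if c1 in colleges and c2 in colleges:
            gaps.append(sum(colleges[c1]) / len(colleges[c1])
                        - sum(colleges[c2]) / len(colleges[c2]))
    return sum(gaps) / len(gaps)

print(raw_mean_gap(students, "A", "B"))          # large gap, inflated by sorting
print(within_portfolio_gap(students, "A", "B"))  # small apples-to-apples gap
```

In this toy example, college B’s raw mean is dragged down by students who were only admitted to B, so the naive gap overstates A’s advantage; restricting to the shared portfolio recovers a much smaller gap, mirroring the attenuation the authors find when moving from raw means to portfolio fixed effects.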


With that summary of the authors’ methodology, we now turn to four of their key findings:

  • Differences in average outcomes across colleges are largely driven by differences in incoming students rather than by large causal impacts of the colleges themselves, at least across the actual choices students face within their admission sets.
  • Even so, causal differences in value-added across colleges are not zero; some schools meaningfully outperform others. What accounts for these causal differences? Selectivity, as measured by the average SAT score of a college’s incoming students, is only a weak predictor of value-added: the earnings premium to attending a more selective college fades out after a few years in the labor market. Other institutional inputs under colleges’ control, like instructional expenditures per student, the faculty-student ratio, and the number of tenure-track faculty, do better at predicting which colleges durably boost the outcomes of their students.
  • These causal differences among schools show up in several student outcomes. Ten years after entering college, students who attend a college with one-standard-deviation higher value-added earn, on average, about $1,300 more per year, roughly a 3% premium. These earnings-boosting colleges also tend to be the schools that boost BA completion relative to their competitors. A similar pattern holds for schools that produce more graduates in STEM fields, which correlates with higher earnings.
  • Because of their rich dataset, the authors are also able to explore whether these causal differences across colleges vary across students from different backgrounds. Stratifying their sample by gender, race, family income, and high school ability measures reveals little difference across groups in the overall pattern of value-added across colleges.  

Conclusion

Student outcomes vary enormously across universities in the United States. These differences attenuate dramatically, however, when comparing the outcomes of students who applied to and gained admission to the same set of schools. This suggests that value-added differences across colleges play a smaller role than student sorting in producing the large outcome disparities highlighted in college guides, the popular press, and even in state funding formulas for higher education. 

CLOSING TAKEAWAY
Value-added differences across colleges play a smaller role than student sorting in producing the large outcome disparities highlighted in college guides, the popular press, and even in state funding formulas for higher education.

However, that does not mean that colleges cannot distinguish themselves in terms of causal effectiveness. Importantly, Mountjoy and Hickman find that colleges with higher per-student instructional spending, lower student-faculty ratios, and more tenure-track faculty generate higher earnings for their students, on average. Those schools also retain and graduate a larger percentage of their students, including in challenging but lucrative STEM fields.

While the authors note that their analysis sample is ultimately limited to public universities in Texas, their incorporation of several large and unique administrative datasets reveals notable patterns about college value-added across a wide diversity of institutions. Their finding that causal value-added varies meaningfully across colleges suggests that policymakers who appropriate money to postsecondary institutions should pay attention to what high-value-added schools are doing, like spending more on per-student instruction and employing a greater share of full-time, tenure-track faculty. Measures of college “quality” like selectivity or raw graduation rates, by contrast, mostly reflect the quality of students a college happens to enroll, rather than institutional effectiveness at actually boosting those students’ outcomes.
