If you have ever been through the process of applying to colleges, you have almost certainly heard the term “selective colleges.” If you haven’t, the basic idea is that some colleges are harder to get into than others, as measured, for example, by the percentage of applicants they accept. The assumption of both applicants and schools is that a more selective college is “better” in some sense than a less selective one. But is it?
In a new working paper, Mountjoy and Hickman explore this question in great detail. The short version of their answer: selective colleges don’t seem to matter much, as measured by either completion rates or earnings in the labor market. That’s an interesting result in itself, but how they get there is also instructive, and an excellent example of how to do social science correctly.
Here’s the problem: when you just look at outcomes such as graduation rates or earnings, selective colleges seem to do better. But most college freshmen could immediately identify the problem with this comparison: that’s correlation, not causation (and importantly, they probably knew this before stepping onto a college campus). Students who go to more selective colleges have higher ability, whether measured by SAT scores or by other traits such as perseverance. It’s a classic selection bias problem. How much value is the college really adding?
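To see how selection bias can manufacture an earnings gap out of nothing, here is a minimal simulation. Every number in it is invented for illustration: the college has zero causal effect on earnings by construction, yet the raw comparison shows a large premium simply because higher-ability students sort into the selective school.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent ability drives both admission and earnings; the college itself adds nothing.
ability = rng.normal(0, 1, n)
selective = ability + rng.normal(0, 1, n) > 0.5   # higher-ability students sort into selective colleges
earnings = 50_000 + 8_000 * ability + rng.normal(0, 5_000, n)  # zero causal effect of college

gap = earnings[selective].mean() - earnings[~selective].mean()
print(f"Raw earnings gap: ${gap:,.0f}")  # a large positive gap, despite zero causal effect
```

The gap here comes entirely from who enrolls, not from what the college does, which is exactly the confound the paper has to untangle.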
Here’s how this paper addresses the problem: it looks only at students who apply to, and are accepted by, colleges at different selectivity levels, some of whom choose to attend the less selective option. What if we compare only these students (and, of course, control for measurable differences in ability)?
Now, this approach is not a perfect experiment. Students are not randomly assigned to different colleges; there is still some choice going on. Are the students who choose to attend a less selective college different in some way? The authors try to convince us in a number of ways that they are not. Here’s one thing they point out: “nearly half of the students in our identifying sample choose a less selective college than their most selective option, suggesting this identifying variation is not merely an odd choice confined to a small fraction of quirky students.”
Perhaps that alone doesn’t convince you, but let’s proceed to the results. The chart on post-college earnings nicely summarizes them (see Figure 3 in the paper, which also has a very similar chart for completion rates).
Here’s how to read the chart: the colleges are ordered from the most selective at the top (Texas A&M) to the least selective at the bottom (Texas Southern). UT-Austin is actually the most selective school in the data set, but it serves as the baseline: all values in the chart are relative to UT-Austin. Plotted in blue are the raw means, the results if you just look at outcomes. UT-Austin graduates have mean earnings of about $56,000 a decade or so after graduation, whereas many schools have earnings $20,000 or more below that. That’s a huge earnings gap!
What happens if we just “control” for observable characteristics of students, such as SAT scores? The gap shrinks, but is not eliminated. Those results are plotted in red. Here we see many schools where the earnings are about $10,000 less than UT-Austin. Again, this is still a big gap when you add it up over a lifetime! Some social scientists might stop here, but we haven’t actually solved the selection bias problem just by controlling for observables.
In the subsample of students who were accepted to more than one school but chose to attend a less selective one, the earnings differences almost completely vanish. These results are plotted in green, and they are astonishing (if you think that selectivity matters). Here’s how to interpret them: a student with similar measurable ability who was accepted to both UT-Austin (the most selective) and Texas Southern (the least selective) will have about the same earnings 10 years after graduation regardless of which school they choose. Earnings might even be slightly higher if they go to Texas Southern.
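The logic of the paper’s three comparisons can be sketched in a toy simulation. All parameters below are invented for illustration, and the key assumption mirrors the one the authors defend: among multiply-admitted students, the attendance choice is unrelated to ability (observed or unobserved). Under that assumption, the within-admitted comparison recovers the true (here, zero) effect, while the raw gap is large and the SAT-controlled gap shrinks but survives, because unobserved ability still differs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

sat  = rng.normal(0, 1, n)   # observed ability
grit = rng.normal(0, 1, n)   # unobserved ability
earnings = 50_000 + 5_000 * sat + 5_000 * grit + rng.normal(0, 5_000, n)  # college adds nothing

# Admission to the selective school depends on both observed and unobserved ability.
admitted = sat + grit + rng.normal(0, 1, n) > 1.0

# Key assumption: among the admitted, the attendance choice is unrelated to ability
# (e.g. driven by distance or cost), mirroring the paper's identifying variation.
attends_selective = admitted & (rng.random(n) < 0.5)

def gap(mask_a, mask_b):
    return earnings[mask_a].mean() - earnings[mask_b].mean()

# 1. Raw comparison: attendees vs. everyone else.
raw = gap(attends_selective, ~attends_selective)

# 2. "Controlling" for SAT: compare within a narrow SAT band.
band = np.abs(sat) < 0.1
controlled = gap(attends_selective & band, ~attends_selective & band)

# 3. Mountjoy-Hickman-style comparison: admitted students only, split by their choice.
within = gap(admitted & attends_selective, admitted & ~attends_selective)

print(f"raw: ${raw:,.0f}  SAT-controlled: ${controlled:,.0f}  within-admitted: ${within:,.0f}")
```

The pattern of estimates (large, smaller, roughly zero) is the same ordering as the blue, red, and green series in the paper’s chart, though the actual paper uses a far richer econometric model than this band-and-compare sketch.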
Does this mean that college doesn’t add any value at all? No. This paper does not address that question: it isn’t comparing students who go to college with those who don’t. There is likely still a college wage premium, but this result suggests that which college you attend (at least among public colleges in Texas) matters less than whether you attend and graduate. College majors matter too, and major offerings vary across colleges, but that’s one characteristic they control for.
Finally, we should mention how they measure how selective a college is. A standard measure is the acceptance rate, but colleges often try to “game” that number, because both applicants and college rankings use it as a measure of selectivity. Instead, the authors use the average SAT score of students at the college. This isn’t perfect, but it’s probably the best measure we have that can’t be “gamed” by schools.