How Colleges Can Admit Better Students

By Devin Pope (The New York Times).

As colleges nationwide prepare to announce this month which applicants they have decided to accept, it’s worth asking why so many admissions offices pass up easy opportunities to admit higher-quality students.

Nearly all colleges, for example, make use of two metrics to gauge student quality: cumulative high school grade point average and composite score on the ACT (the most widely taken college admissions exam). But research has shown that these metrics are imperfect: They are less predictive of student success than alternative measures that are equally simple to calculate and whose use would lead to a better incoming class.

Consider grade point average. Students whose overall G.P.A. is a result of doing better later in high school (say, junior and senior years) are much more likely to succeed in college than students with the same overall G.P.A. who did better early in high school (say, freshman and sophomore years).

A paper in The Journal of Public Economics by the economist George Bulman provides evidence for this claim, using data from Florida. He shows that an additional G.P.A. point in 11th grade makes a student 16 percentage points more likely to graduate from college, whereas an additional G.P.A. point in ninth grade makes a student only five percentage points more likely to do so. Later high school G.P.A. is also approximately five times more predictive of whether a student drops out of college within two years, and two times more predictive of eventual labor market earnings.
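To put those two numbers side by side, here is a rough back-of-the-envelope sketch in Python. The only figures it borrows from the research are the 16-point and five-point effects just cited; the function and the two example applicants are purely illustrative and are not Bulman’s actual model.

```python
# Back-of-the-envelope comparison of two applicants with the same cumulative
# G.P.A. but opposite trajectories. The two effect sizes come from the article;
# everything else in this sketch is illustrative, not Bulman's estimated model.

EFFECT_11TH_GRADE = 16.0  # percentage-point gain in graduation likelihood per G.P.A. point
EFFECT_9TH_GRADE = 5.0    # percentage-point gain in graduation likelihood per G.P.A. point

def graduation_edge(gpa_9th_a: float, gpa_11th_a: float,
                    gpa_9th_b: float, gpa_11th_b: float) -> float:
    """Approximate difference, in percentage points, between applicant A's and
    applicant B's likelihood of graduating, holding everything else equal."""
    return (EFFECT_11TH_GRADE * (gpa_11th_a - gpa_11th_b)
            + EFFECT_9TH_GRADE * (gpa_9th_a - gpa_9th_b))

# Applicant A improved (3.0 as a freshman, 4.0 as a junior);
# applicant B declined (4.0 as a freshman, 3.0 as a junior).
# Same cumulative G.P.A., but A comes out about 11 points more likely to graduate.
print(graduation_edge(3.0, 4.0, 4.0, 3.0))  # 11.0
```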

Something similar is true of ACT composite scores. A composite score is the rounded average of a student’s scores on the four individual sections of the ACT (math, English, reading and science), so by using it, college admissions offices give equal weight to each of the four subtests. But in a 2013 paper that I wrote with the education researchers Eric Bettinger and Brent Evans, using data on public college students in Ohio, we provided evidence that the math and English subtests are far more predictive of college success than the reading and science subtests.

For example, a student who achieves an ACT composite score of 24 by getting a 26 on the reading and science tests and a 22 on the math and English tests is 10.4 percentage points more likely to drop out of college by the third year than a student who achieved a composite score of 24 in the opposite manner.
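For readers who want to see the arithmetic, here is a short sketch. The composite calculation works exactly as described above; the reweighted index and its specific weights, by contrast, are hypothetical, chosen only to illustrate the idea of counting math and English more heavily, and are not the estimates from the 2013 paper.

```python
# The ACT composite is the rounded average of the four subtests, as described above.
# The reweighted index below is a hypothetical illustration: its weights are
# assumptions made for this example, not the estimates from the 2013 paper.

def act_composite(math: int, english: int, reading: int, science: int) -> int:
    """Standard ACT composite: rounded average of the four subtest scores."""
    return round((math + english + reading + science) / 4)

def reweighted_index(math: int, english: int, reading: int, science: int,
                     w_math: float = 0.35, w_english: float = 0.35,
                     w_reading: float = 0.15, w_science: float = 0.15) -> float:
    """Hypothetical index that weights math and English more heavily, since the
    research finds those subtests more predictive of college success."""
    return (w_math * math + w_english * english
            + w_reading * reading + w_science * science)

# Two students with the same composite of 24 but opposite strengths:
strong_reading = dict(math=22, english=22, reading=26, science=26)
strong_math = dict(math=26, english=26, reading=22, science=22)

print(act_composite(**strong_reading), act_composite(**strong_math))  # 24 24
print(round(reweighted_index(**strong_reading), 1),
      round(reweighted_index(**strong_math), 1))  # 23.2 24.8
```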

Don’t get me wrong: Cumulative high school G.P.A. and ACT composite scores do correlate with a student’s success in college, and there is no great harm in having admissions offices consider them. But there is also no good reason to rely on them when more predictive metrics are just as easy to calculate.

So why are colleges sticking to the old approach?

Admissions officers may be worried that reweighting high school G.P.A.s or ACT scores will affect the diversity of the student body they admit. Both papers discussed above, however, find that reweighting does not adversely affect minority students (if anything, it helps them).

Colleges may also be reluctant to adopt these more predictive metrics because popular college rankings, such as those produced by U.S. News & World Report, use the old metrics in their calculations. While it is possible that schools using the more predictive metrics would see a small initial drop in rank (because of the mistaken appearance of admitting a student body of lower quality), most schools would more than make up for this drop by improving their graduation rate four years later. (Ideally, U.S. News & World Report and others would adjust their methodology to reflect the most recent research on what predicts student success.)

Admissions officers may also lack the proper incentives or feedback necessary for change. Whether a student does well in college typically cannot be known until a few years after the admissions decision, so admissions officers may never feel blamed or rewarded for how the students they admit ultimately fare. University officials need to actively encourage admissions offices to take a long-term perspective.

The two examples above — G.P.A. and ACT — provide just a glimpse into the growing field of data and analysis relating to college admissions. Some colleges, such as West Virginia University and Houston Baptist University, are already using sophisticated statistical methods to predict which students are most likely, if accepted, to matriculate and, therefore, where recruiting efforts should be focused. Other colleges, such as Georgia State University and the University of Arizona, are trying to predict which of their current students are most at risk of dropping out, and how best to help these students with additional support.