The Baylor Law admissions office had a bit of an oopsy recently: they sent an email to every incoming student disclosing all of their admissions data: name, address, phone number, GPA, LSAT, admissions acceptance date, race, and scholarship money, a treasure trove of data rarely available to researchers. If you believe Elie Mystal at Above the Law, the data shows that affirmative action isn't such a big deal:
Eyeballing the numbers (and I haven't done a full statistical analysis on this data because I think it's kind of missing the point), I see about a three to four point bump for African-American or Hispanic students. By "bump," I mean to say that if you were a white student, you had a fighting chance to get into Baylor with a 161 or 162 LSAT score. If you were black or Latino, you were in the running with a 159 or 158. There are some outliers, of course -- a black kid with a 156, a white kid with a 158 -- but, in general, I'm eyeballing the mode for white students at 162, and the mode for blacks and Hispanics at 159 or 158.
This is wrong for a couple of reasons. First, there's an iceberg effect: the spreadsheet doesn't have the data of the people who were rejected for admission. If a 3.7 GPA/162 LSAT gives a white applicant a 30% chance of admission, but an African-American applicant a 90% chance (or vice versa), then there's racial bias with real adverse effects on the disfavored race, even if the averages in the admitted student body don't show a lot of disparity. Second, there's no reason to "eyeball." The data is already in a spreadsheet; do an hour of work and run the real numbers. David Lat was kind enough to forward the spreadsheet to me; I deleted the names, addresses, and phone numbers, and went to work with the quantitative information.
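The iceberg point can be made concrete with a small simulation: even when one group faces a much stricter cutoff than another, the average scores of the two *admitted* pools stay fairly close, because both pools are truncated from the same applicant distribution. The distribution parameters and cutoffs below are invented for illustration, not taken from the Baylor data.

```python
import random

random.seed(0)

def simulate(threshold, n=100_000):
    """Draw applicant LSATs ~ N(155, 8), admit those at or above the
    threshold, and return (admit rate, mean score of the admitted pool)."""
    scores = [random.gauss(155, 8) for _ in range(n)]
    admitted = [s for s in scores if s >= threshold]
    return len(admitted) / n, sum(admitted) / len(admitted)

rate_a, mean_a = simulate(162)   # group facing the stricter cutoff
rate_b, mean_b = simulate(159)   # group facing the looser cutoff

# Admit rates diverge sharply, but admitted-pool averages differ by
# only a couple of points.
print(f"admit rates: {rate_a:.2f} vs {rate_b:.2f}")
print(f"admitted-pool means: {mean_a:.1f} vs {mean_b:.1f}")
```

The gap in admission odds is large, but the gap in admitted averages is only a point or two, which is exactly why eyeballing the admitted class can't tell you much about the admissions standard.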
At first glance, the data suggests no real affirmative-action bump. (NB that the ATL post is incomplete, because it doesn't account for students who identified as multi-racial: thus, some students identified in the ATL post as Caucasian were counted as ethnic minorities in my analysis. See the footnote below for more information.) The sixty non-Asian minorities averaged a 3.58 GPA and 162.9 LSAT; the 42 Asians averaged a 3.44 GPA and 164.4 LSAT; the 329 whites and did-not-identifies averaged a 3.54 GPA and 164.5 LSAT. As Elie eyeballed, there's only a point or so LSAT difference on average. (But see the April 10 update, as this may be an artifact of the sample that is hiding a larger affirmative action bump.) Note that even these averages hide some shocking disparities in the admissions data.
The most obvious one is that only 14 students in the entire class of 441 (again, see the footnote after the jump) identify as African, African-American, or multi-racial with African ancestry. Baylor may not be giving much of an admissions bump to African-Americans, but the consequence of not reducing admissions standards for African-Americans is that the class is only 3% black. A different university trying to up that number is likely to see a larger disparity in GPA/LSAT scores. (See, however, the April 10 update, as a data sample artifact may be influencing this particular result.)
But Baylor itself does see a big disparity in another metric. I sorted the 431 students with "LSAT Index" scores. (An LSAT Index adds the LSAT to 10 times the GPA.) The top quartile is above 202 (e.g., 3.9/163 or 3.5/167); the median is 199 (e.g., 3.3/166 or 3.8/161); the bottom quartile is below 197 (e.g., 3.6/161 or 3.3/164). Baylor did not vary from the LSAT Index often: only 2% of the class was below 193, and the lowest index was 189. (Again, see the April 10 update.)
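The Index arithmetic is simple enough to check directly; the (GPA, LSAT) pairs below are the example combinations quoted above.

```python
def lsat_index(gpa, lsat):
    """LSAT Index as described in the post: LSAT plus 10 times GPA."""
    return lsat + 10 * gpa

print(lsat_index(3.9, 163))  # 202.0 -- the top-quartile cutoff
print(lsat_index(3.3, 166))  # 199.0 -- the median
print(lsat_index(3.6, 161))  # 197.0 -- the bottom-quartile cutoff
```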
In the top quartile (and stretching down to the top 128 admittees), there was a single African-American. So it's not accurate to say affirmative action makes little difference. The 4.0/170 white with a 210 Index gets a full scholarship to Baylor Law. The 4.0/170 black with a 210 Index might get the same offer, but doesn't accept the full scholarship to Baylor Law: she presumably has better options available to her. One would expect a 4.0/170 African-American to end up at a top-14 law school. Moreover, the 3.7/167 African-American generally isn't accepting the offers to attend Baylor Law, either. If we expect the top 10% of the class and the editorial board of the Baylor Law Review to be much more likely to come from the top quartile of applicants, African-Americans are going to be even more underrepresented than that 3%. If nothing else, larger bumps of affirmative action are having an effect on Baylor Law's diversity.
But the real difference was in the scholarship money. Though non-Asian minorities had slightly lower Index scores on average, they averaged $24,231 in scholarship money; whites and Asians averaged under $20,000. It's unclear to what extent Baylor Law considers financial need in scholarship money, but it's clear that merit makes a big difference. Over 90% of students with Index scores above 206 got full scholarships (the three who didn't were white); less than 3% of students with Index scores below 202 got full scholarships, and all seven were African-American or Hispanic.
Again, we don't have to eyeball; we can perform a basic regression on GPA, high LSAT score, and race.
| Variable | Coefficient | Std. Error | t Stat |
|---|---|---|---|
| GPA (per 1.0 point) | $26,790 | | |
| Highest LSAT (per point) | $2,190 | | |
| African-American | $9,575 | | |
| Hispanic | $7,023 | | |
Adjusted R-square = 53.7%. N = 431.
To translate into English, a bump of GPA of 0.1 is worth, on average, $2,679 in scholarship money; a single LSAT point is worth $2,190. But that does not compare to the scholarship money awarded for being a Star-Bellied Sneetch with the power to Bestow The Experience Of Diversity upon Lithuanians and Macedonians and Jews who would otherwise be deprived. Checking the African-American box on your Baylor Law application is worth $9,575/year, all else being equal; Hispanic heritage is worth $7,023. In other words, if you were a white applicant with your heart set on a scholarship to Baylor Law, and a magical genie offered you the choice of increasing your LSAT score by 4 points or changing your skin color, you'd be better off financially with the latter option. (Perhaps the disparity can be explained by financial-aid factors, if African-Americans are more likely to apply from families without household assets; if so, that data is absent from the spreadsheet. I strongly suspect these are just merit scholarships, as the only financial data reported in the spreadsheet is the scholarship amount.) All four variables are statistically significant well above the 99% level. Corporations have paid millions of dollars in disparate impact litigation settlements on far flimsier evidence. And hundreds of law students short-changed by tens of thousands of dollars of scholarship money each over three years because of their skin color? Sounds like a multi-million dollar class action to me.
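The regression itself is just an ordinary least-squares fit of scholarship dollars on GPA, highest LSAT, and two race dummies. Since the redacted spreadsheet isn't reproduced here, the sketch below generates synthetic rows from the coefficients reported above and confirms that a least-squares fit recovers them; the sample proportions, score ranges, intercept, and noise level are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 431

# Synthetic admittee rows -- NOT the Baylor data, just plausible ranges.
gpa = rng.uniform(3.0, 4.0, n)
lsat = rng.integers(155, 172, n).astype(float)
black = rng.random(n) < 0.05
hispanic = (~black) & (rng.random(n) < 0.08)

# Generate scholarships from the coefficients reported in the post,
# plus noise, so the fit below should recover them approximately.
coefs = {"gpa": 26790.0, "lsat": 2190.0, "black": 9575.0, "hispanic": 7023.0}
y = (coefs["gpa"] * gpa + coefs["lsat"] * lsat
     + coefs["black"] * black + coefs["hispanic"] * hispanic
     - 440_000 + rng.normal(0, 3000, n))

# OLS via least squares: columns are intercept, GPA, LSAT, two dummies.
X = np.column_stack([np.ones(n), gpa, lsat, black, hispanic])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[1:])  # should land near 26790, 2190, 9575, 7023
```

The genie comparison in the text is then just arithmetic: four LSAT points at $2,190 each is $8,760, less than the $9,575 African-American coefficient.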
I don't mean to single out Baylor; it's undoubtedly the case that virtually every other top-sixty school is engaging in the same shenanigans in competing for a limited pool of qualified African-Americans. (NB that I am not claiming that the pool is limited because African-Americans are not smart enough to go to law school; miserable urban public school systems and the disproportionate number of single-mother families surely do a lot to depress the number of African-Americans who get the education to do well at college and apply to law school.) As Richard Sander has noted, Baylor's plight is created by the better-ranked schools above it poaching the African-American students who would otherwise be at the top of the Baylor Law class; Baylor has to pony up extra scholarship money just to attract the handful of African-Americans it does have. But it really surprises me that a group as litigious as white law students hasn't done more to ask for the law to be evenly applied; this is a much easier case for plaintiffs than complaining about alleged consumer fraud in employment statistics.
Related. Discussion of statistical methodology (and an important update) after the jump.
*This footnote explains how I handled some odd quirks in the data. I excluded ten students from the sample who did not have an LSAT Index score; they seem to have been mostly international students who did not have a conforming GPA, and my sample used the remaining 431 students' data. The spreadsheet includes listings for every student's LSAT, but I only used the highest LSAT score, because that's what the admissions office seemed to be doing: if a student scored 159 and 171 in two LSATs, I counted them (as the spreadsheet seemed to do) as a 171. If the admissions office was doing something else, that might change the results slightly. (The fact that the LSAT score has slightly less predictive value suggests that the admissions office does mildly discount multiple LSATs, but I didn't try to model for that complication.) The Baylor spreadsheet contained two columns of racial information; dozens of students identified as multi-racial. If a student identified as both Caucasian and African-American, I counted them as African-American. If a student didn't identify their race, I assumed that they were not seeking an affirmative-action bump, and counted them as white, as the admissions office would have. I also tested for whether gender and Asian heritage made a difference in scholarship money; those figures were not statistically significant. Native American ethnicity was not statistically significant, either; I suspect that admissions officers are discounting all the lily-white students who claim the Cherokee great-grandfather. I will email my redacted spreadsheet and its calculations to academics and think-tank fellows/researchers upon request.
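The cleaning rules in this footnote reduce to a short function; the dictionary layout below is hypothetical, not the spreadsheet's actual columns.

```python
def clean(student):
    """Apply the footnote's rules to one hypothetical student record.
    Returns (highest LSAT, race bucket), or None if the student has
    no LSAT Index (e.g., international students) and is excluded."""
    if student.get("lsat_index") is None:
        return None
    lsat = max(student["lsat_scores"])       # count only the highest score
    races = student.get("races", [])
    if "African-American" in races:          # multi-racial -> minority bucket
        bucket = "African-American"
    elif "Hispanic" in races:
        bucket = "Hispanic"
    elif not races:                          # did not identify -> count as white
        bucket = "White"
    else:
        bucket = races[0]
    return lsat, bucket

print(clean({"lsat_index": 201, "lsat_scores": [159, 171],
             "races": ["Caucasian", "African-American"]}))
# -> (171, 'African-American')
```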
Update, April 10: Welcome Above the Law readers. I wanted to discuss a couple of emails.
One reader, Fred Var, took the data set and tested for whether quality of undergraduate institution (as ranked by Forbes) or geographic diversity (e.g., not being from Texas) made a difference in scholarship funding. He reports that the effect was not statistically significant; I haven't tried to replicate his results.
More importantly, an admissions officer writes me to say
So far as I could tell from the initial screw-up on Baylor's part, the spreadsheet only includes those admittees who had not yet either deposited or withdrawn (the email was about extending the deposit deadline). So the data does not include the full picture: presumably the easy-choice high end (people who got into much better schools or got better scholarships from equivalent schools) already withdrew, and the easy-choice low end (people for whom Baylor's offer was their best offer) already accepted. The group you're looking at should be only the indecisive people and the people who are weighing only offers from similar schools with similar scholarship offers (or slightly better schools with slightly worse offers or vice versa).
In other words, it is fairly likely that there are minorities with much lower numbers who have already taken Baylor's offer, and lots of non-minorities and minorities at the top end who have already rejected it.
If this is true (and it does appear to be so, given this interview with an affected student), it has a big effect on the analysis being done. Because of Mystal's confident proclamations that the spreadsheet could tell us about admissions policies, I had assumed that this was the full list of admittees. But if it's only a partial list of admittees, then there's another iceberg effect, as we're not seeing the students who got such good offers from Baylor relative to other schools that they already accepted, or those who have already rejected Baylor. We can't conclude that Baylor isn't giving a large affirmative action bump if minorities with low Index scores are disproportionately likely to accept pending admissions offers. That would affect the averages of admitted students, though as I previously noted, we can't tell much from those averages without also knowing the data for rejected students. But the fact that we don't have a complete universe of admitted students may well mean that the affirmative action bump is even larger than indicated here if the demographics of students who accept early differ from those who do not.
That said, with respect to the scholarship data, there shouldn't be any reason to think that the group of students who decide early, pro or con, were given scholarship offers via a different formula than those who have not yet decided. We don't know that, but I can't think of a particular reason to think that this sample of 431 undecided students was awarded scholarship money on a basis different than a randomly selected group of 431 Baylor Law admittees. The dataset is incomplete, but, to use the statistical term, I don't think it is biased. The scholarship regression produces such dramatically statistically significant results that I would be surprised to find that a more complete data set obliterated the statistical significance. I welcome correction on my thinking here, though, as there might be an effect of this selection process I'm not contemplating.
I've added parentheticals to the text of the post to reflect this update.