Excerpted From: Ian Ayres, Richard Brooks and Zachary Shelley, Affirmative Action Still Hasn't Been Shown to Reduce the Number of Black Lawyers: A Response to Sander, 69 International Review of Law & Economics 1 (March 2022) (37 Footnotes) (Full Document)

 

Writing in The Journal of Blacks in Higher Education in 1994, Stephan Thernstrom described in stark terms what he felt were the likely consequences of “an unfortunate ‘mismatch’ between students and institutions” caused by “racially preferential admissions policies in higher education.” He argued that, rather than helping their purported beneficiaries, these policies set minority students up for failure by promoting them to schools otherwise beyond their reach. “Unable to stand the competition, many will not make it to the end; those who do obtain a degree will be heavily overrepresented at the bottom of the class and conspicuously absent from the top.” He was not the first to make this claim. Thernstrom cited Thomas Sowell (1972) as “one of the first scholars to advance this criticism [then] more than two decades ago.” Sowell in turn has pointed to “the late Professor Clyde Summers of the Yale Law School [as] the first person to explain, back in 1968,” the hypothesis:

that admitting black students to top-tier institutions, when they had academic qualifications that were at a level that fit second-tier institutions, meant that second-tier institutions now had a reduced pool of suitable black applicants and would have to dip into the pool of black students whose qualifications fit the third tier--and so on down the line.

At all levels, then, black students would be less credentialed than their white counterparts, causing a discrepancy that, it is often claimed or implied, will more than offset any benefits from attending a higher-tier school due to affirmative action. We are recalling here an old hypothesis, one that has been circulating for more than a half century, yet throughout this history, as Thernstrom wrote in 1994, “[w]e do not have systematic evidence with which to test many aspects of the mismatch argument.”
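The cascade Summers described is mechanical enough to sketch. The following toy simulation is purely illustrative: every pool size and seat count is invented, and nothing here is drawn from the BPS or any other data. It shows how, under the hypothesis, preferences at the top tier propagate downward, so that each tier fills its seats largely from applicants whose credentials fit a lower tier.

```python
# Toy illustration of the cascade hypothesis. All numbers are hypothetical:
# pools[k] is the count of black applicants whose credentials "fit" tier k;
# seats[k] is the number of seats tier k fills under preferential admissions.
pools = {1: 20, 2: 40, 3: 60, 4: 80}
seats = {1: 30, 2: 50, 3: 70, 4: 90}

available = dict(pools)
filled = {}  # tier -> {credential tier: students taken}
for tier in sorted(seats):
    need, drawn = seats[tier], {}
    # Each tier takes its own credential pool first, then dips into the
    # pools that "fit" lower tiers -- shrinking what remains for them.
    for cred in sorted(available):
        if cred < tier or need == 0:
            continue
        take = min(need, available[cred])
        if take:
            drawn[cred] = take
            available[cred] -= take
            need -= take
    filled[tier] = drawn
    print(f"tier {tier} draws {drawn}"
          + (f"; {need} seats unfilled" if need else ""))
```

Under these assumed numbers, every tier ends up admitting mostly students whose credentials fit a lower tier, which is the pattern the hypothesis predicts; whether that pattern harms outcomes is, of course, the contested empirical question.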

Cue Richard Sander's (2004) “A Systematic Analysis of Affirmative Action in American Law Schools.” Based on a simple tier-index framework used to analyze data from the Law School Admissions Council (LSAC) Bar Passage Study (BPS), Sander (2004) argued that “a strong case can be made that in the legal education system as a whole, racial preferences end up producing fewer black lawyers each year than would be produced by a race-blind system.” That affirmative action suppresses the production of black lawyers (by 8% according to his estimates) was an extraordinary and admittedly remarkable claim, to which Ayres, Brooks and a number of other scholars were invited to reply (in the law review that published the original assertion) along with a response by Sander (2005). In an appropriate spirit of collaborative and rigorous exchange, Sander openly shared his data and program files with Ayres and Brooks, who did the same in turn. On careful review of Sander's (2004) approach, Ayres and Brooks concluded that “a strong case” could not be made with his data and analysis to support a causal claim that the use of racial preferences reduces the number of black lawyers. Hence the title of Ayres & Brooks' (2005) reply, “Does Affirmative Action Reduce the Number of Black Lawyers?”.

Ayres and Brooks observed that the conclusions from Sander (2004) rely upon several faulty assumptions, including the assumptions that, in a world without affirmative action, white and black law students with the same LSAT and undergraduate GPA would become lawyers at the same rate and that black law students would attend the same tier of schools as white students with the same observable entering credentials. They explained that these two assumptions are not supported by the data, finding that “whites with similar credentials themselves go to a variety of different quality tiers” and pointing out that “[b]lack students are less likely to benefit from legacy preferences and are more likely to be financially constrained.”

It is important to recall the motivation and bases of critique in Ayres & Brooks (2005), which were not driven by skepticism regarding the theoretical possibility of affirmative action causing a detrimental mismatch effect on its beneficiaries, as Summers, Sowell and Thernstrom among others had long speculated. Indeed, it would be surprising if some sufficiently large degree of mismatch could not generate the effects that Sander claimed. Rather, Ayres and Brooks questioned whether Sander (2004) had, with his data, identified a causal effect of “academic mismatch” on the production of black lawyers. It was not simply the absence of a credible identification strategy that was troubling, but also the quality of the data themselves, which coarsened school-level observations into six rough tiers. Ayres and Brooks introduced their analysis of students who attended their second-choice school as an imperfect, but still more plausible, identification strategy than that employed by Sander (2004), while recognizing the continuing limitations of the data. “We make this point,” as Ayres and Brooks (2005) observed, “not as an indication of the quality of the analysis, but as a strong statement about the weaknesses of the data (for this question) on which we and Sander rely. We leave it to the responsible reader to make of it what she will.”

Given these constraints, Ayres and Brooks were cautious in constructing and drawing conclusions from their second-choice analyses. Sander (2005), on the other hand, who had then replicated their analysis and performed his own, was over-the-top exuberant in describing the findings as “stunning,” “striking” and “a nearly perfect demonstration of the mismatch effect.” Neither the caution expressed by Ayres and Brooks nor the exuberance betrayed by Sander in interpreting the analyses of bar passage data has changed over the past decade and a half. Yet, with “Replication of mismatch research: Ayres, Brooks and Ho” (2019), Sander sought to reopen the discussion on “academic mismatch theory” with a replication and critique of Ayres & Brooks (2005), which offered two alternative analytic approaches to Sander's original analysis. In his replication and critique, Sander addressed both the second-choice analysis and the relative tier analysis from Ayres & Brooks, claiming that once modifications are made to these models, their results are supportive of the theory that mismatch between the academic ability of black law students and the academic rigor of their law schools has hurt the career outcomes of black law students.

Specifically, Sander points out that due to a coding error, Ayres & Brooks “mistakenly included in their sample” 109 students (in their total of 7212 observations) who should have been excluded because of missing responses on the key selection criterion. This was due to a genuine coding error, although not a “deliberate” one as suggested by Sander. Ayres & Brooks (2005) missed it; we will not now minimize it; and we thank Sander for bringing this to our attention. Correcting the code and running sensitivity checks produces results that are neither qualitatively nor significantly different from those observed in Ayres and Brooks. As for the other so-called “actual errors” to which Sander points, these were in fact deliberate choices made and discussed by Ayres & Brooks to address their concern about the quality of the data and our confidence in causal inferences drawn from our empirical strategy. Sander proposes alternative choices that he finds more compelling (and we do not), yet analyses based on these choices are properly seen as robustness checks on Ayres & Brooks's original findings. Sander's robustness checks use different sub-samples of the BPS data and different definitions for assigning tiers in the relative tier analysis.

Since Sander (2019) is, at its core, a robustness check for Ayres & Brooks (2005), it can at most cast doubt on the usefulness of the BPS data for answering the question of law school mismatch--doubts that, as noted above, Ayres & Brooks previously raised more than a decade ago. Importantly, these robustness checks do not fundamentally change the conclusion reported in Ayres & Brooks and instead merely highlight their concerns about the reliability of the BPS data for answering questions about law school mismatch. When properly implemented and interpreted, some of the results that Sander (2019) holds up as indicative of the harmful effects of academic mismatch in fact contradict his claim. Other results presented in Sander (2019) raise concerns about the sensitivity of the BPS data to minor changes in sample selection criteria, but none of the results provide compelling support for Sander's strong claim that affirmative action has reduced the number of black lawyers.

On the fundamental assertion advanced by Sander--that is, that racial preferences end up producing fewer black lawyers each year than would be produced by a race-blind system--we remain unpersuaded that the available data and methods allow causal identification. Bjerk (2019) outlines several concerns about Sander's interpretation of his results and provides an explanation of some of the flaws in utilizing the BPS data for causal analysis. In the sections that follow, we discuss the underlying theory of mismatch, provide several critiques of Sander's replication, present additional robustness checks, and expand upon several of Bjerk's points.

2. Second choice analysis

Sander begins his discussion of Ayres and Brooks with a critique of their second-choice analysis. In a footnote, Sander (2019) encourages his reader to look at “the ‘second-choice’ section[,] pages 1827-1838” in Ayres & Brooks (2005). We would recommend the same, both for the motivation of the analysis and for its conclusions, which are obscured and mischaracterized by Sander's (2019) discussion. To briefly summarize: the second-choice approach proposed by Ayres and Brooks exploited the fact that the Bar Passage Study contains data on students who reported being accepted to their first-choice law school as well as whether they attended that school or attended their second-or-lower-choice school. By comparing those students who attended their first-choice to those who were also accepted to their first-choice but chose to attend their second-or-lower (presumptively less competitive) choice school, Ayres and Brooks implemented a crude application of the approach taken by Dale and Krueger (2002) to estimate the effects of attending more selective colleges. In the law school context, a second-choice approach showing, for instance, that black students perform better when attending (presumptively) less selective schools could bolster the argument that affirmative action is causing “an unfortunate ‘mismatch”’ effect on its minority beneficiaries. Sander referred to the second-choice identification strategy as “an ingenious way of getting around some of the key limitations of the [Bar Passage Study] data.” Calling it ingenious is surely an overstatement, as it follows immediately and derivatively from Dale and Krueger among others; still, there are subtleties in the theoretical application of the approach that are easily overlooked and require careful attention.

In the following subsection, we address several issues that prevent a second-choice analysis from providing strong causal inferences: 1) second-choice analysis fails to differentiate the effects of undermatching and overmatching; 2) to make causal claims, second-choice analysis requires unsupported assumptions about linearity and monotonicity of effects; 3) confounding variables interact with race for selection into attending a second-choice school; and 4) confounds also interact with race while students are in law school in ways not accounted for by the second-choice analysis. We also address how Sander's second-choice outcome variables interact with this more nuanced understanding of the second-choice analysis.

2.1. Theory

Two distinct modes of mismatch are suggested in the general hypotheses. First, students may be outmatched, or “overmatched,” by their peers along key dimensions of interactions or qualifications within a given environment. Second, students may outmatch their peers, or be “undermatched” with them. As typically stated, the mismatch hypothesis focuses on overmatching. These two distinct modes--i.e., being overmatched and being undermatched--are often conflated, confused, or entirely disregarded in discussions of academic mismatch. It is, however, important to distinguish these separate effects and how they relate to each other, both in practice and in demonstrating their impacts.

However, second choice analysis is ill-suited to separate out these effects. When a student chooses to attend their second-choice, presumptively less competitive school, they increase the likelihood that they are undermatched and decrease the likelihood that they are overmatched. To bring the second-choice analysis to bear on the conventional mismatch critique of affirmative action is to leverage evidence of both undermatching and overmatching in support of an argument solely about overmatching. As Table 1 shows, attending a second-choice school increases the likelihood of being undermatched (as measured by being above the tier mean for both GPA and LSAT) and decreases the likelihood of being overmatched (as measured by being below the tier mean for both GPA and LSAT). In other words, the evidence on mismatch is not direct but rather indirect.
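The flags just described can be made concrete. The sketch below uses entirely hypothetical student records (not the BPS data) to operationalize the Table 1 definitions given in the text: a student is counted as undermatched when above the tier mean on both GPA and LSAT, and overmatched when below the tier mean on both.

```python
# Illustrative sketch of the over/undermatch flags described in the text.
# All student records below are invented for illustration only.
from statistics import mean

students = [
    # (tier, gpa, lsat)
    (1, 3.9, 172), (1, 3.4, 160), (1, 3.6, 168),
    (2, 3.8, 165), (2, 3.1, 152), (2, 3.5, 158),
]

# Mean of each entering credential within each law school tier
tiers = {t for t, _, _ in students}
gpa_mean = {t: mean(g for tt, g, _ in students if tt == t) for t in tiers}
lsat_mean = {t: mean(l for tt, _, l in students if tt == t) for t in tiers}

def match_status(tier, gpa, lsat):
    """Classify a student relative to their tier's credential means."""
    if gpa > gpa_mean[tier] and lsat > lsat_mean[tier]:
        return "undermatched"   # above the tier mean on both credentials
    if gpa < gpa_mean[tier] and lsat < lsat_mean[tier]:
        return "overmatched"    # below the tier mean on both credentials
    return "mixed"              # above on one credential, below on the other

statuses = [match_status(*s) for s in students]
print(statuses)
```

Note that students who are above the tier mean on one credential but below it on the other fall into neither category, which is one reason such flags only crudely capture the degree of mismatch.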

Demonstration through indirect evidence is certainly not disqualifying, but it does require careful thinking about the channels through which the indirect evidence can be made to reach the arguments it is called on to support. While it may be tempting to see overmatching simply as “a mirror image” of undermatching, as Sander suggests, in all likelihood the relationship between the two phenomena is not so straightforward.

To be clear, when Sander argues that the use of racial preferences in law school admissions causes black underperformance (and therefore affirmative action should be eliminated) he is making an overmatching claim. A basic challenge in supporting this claim with a second-choice analysis is that the second-choice estimates for black students will capture the combined effects of changes in two types of academic mismatch. Any second-choice analysis on minority students is confounded by the fact that the strategy estimates the net effect of decreasing overmatch and increasing undermatch. Invoking such analyses in arguing for the elimination of affirmative action is a tricky exercise. The question now becomes under what conditions evidence of changes in both undermatching and overmatching may inform an overmatching claim.

One might look for some sort of quasi-linear or monotonic local relationship in the degree of relative mismatch. The mismatch hypothesis posits that being overmatched may manifest in objectively worse outcomes than would have been achieved absent the overmatch. But it is not clear a priori whether undermatching or overmatching will in fact result in worse outcomes for the mismatched students. If being overmatched leads to reduced expected performance, being slightly undermatched by peers might lead to increased expected performance. Non-locally we would not expect linear or even monotonic changes in performance from sufficiently large amounts of overmatching. While a sufficiently large mismatch would almost certainly harm student outcomes, it is not clear how large a mismatch would need to be to cause this detrimental effect, or whether the differences in the quality of student credentials in law schools are sufficiently large to cause this effect.

Fig. 1 presents several potential relationships between mismatch and student performance. For example, the relationship presented in Panel A would suggest that overmatching is always harmful, while undermatching is always beneficial. Panel B shows a relationship in which being either undermatched or overmatched on incoming credentials is always harmful. And Panel C presents a relationship in which a slight degree of either form of mismatch may be beneficial, while a large degree of mismatch would be harmful. Without greater clarity over which of these or other possibilities describes the true relationship between mismatch and student performance, utilizing mixed undermatching and overmatching evidence to assess overmatching claims may result in erroneous claims.
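The identification problem can be made concrete with a toy calculation. The three functions below are our own illustrative assumptions, loosely in the spirit of Panels A through C of Fig. 1; they are not estimated from any data. A second-choice move is modeled as shifting a student from slightly overmatched at the first-choice school to more deeply undermatched at the second-choice school (values assumed), and the implied effect on performance is computed under each curve.

```python
# Hypothetical mismatch-performance curves (functional forms assumed).
# m > 0: the student is overmatched by peers; m < 0: undermatched.

def panel_a(m):
    # Overmatching always harmful, undermatching always beneficial
    return -m

def panel_b(m):
    # Any mismatch, in either direction, is harmful
    return -abs(m)

def panel_c(m):
    # Slight mismatch beneficial, large mismatch harmful
    return 0.5 * abs(m) - abs(m) ** 2

# A second-choice move: slightly overmatched at the first-choice school,
# more deeply undermatched at the second-choice school (values assumed).
m_first, m_second = 0.3, -0.9

for name, f in [("A", panel_a), ("B", panel_b), ("C", panel_c)]:
    effect = f(m_second) - f(m_first)
    print(f"Panel {name}: second-choice effect = {effect:+.2f}")
```

Under these assumptions the identical second-choice move helps the student under the Panel A curve but hurts under the Panel B and C curves, so the sign of a second-choice estimate cannot, by itself, reveal which curve is the true one or whether overmatching is harmful.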

We believe the most reliable evidence on undermatching from second-choice analyses may be found among white students (not black or other underrepresented students). That is, even assuming that black students are more competitive vis-à-vis their peers at their second-choice rather than first-choice schools, they may still (by supposition) be overmatched at their second-choice school due to affirmative action, or they may be undermatched. So, it is difficult to discern whether any demonstrated effects in the second-choice analysis of minority students are due to increased undermatching or decreased overmatching. Even if a second-choice analysis did find improvements among second-choice black students, the effect may be attributed to some racial preference in admissions, though perhaps not as much as might have been the case at their first-choice. Following this logic would not recommend an elimination of affirmative action so much as a reduction or other change in the degree or manner of racial preferences in law school admissions--and there would be no reason from this limited perspective to presume that zero preference is the optimal quantum.

To test the validity of the undermatched hypothesis implicit in the second-choice analysis, it would be best to remove the clouding effects of affirmative action which may render some underrepresented students overmatched and others undermatched at their second-choice school. White students in the sample, by assumption, are not subject to the direct distortionary effects of affirmative action, so they represent a cleaner (but still far from perfect) test of second-choice undermatching. This means that white students who attend their first-choice school should, on average, be matched to their white peers, while white students that attend their second-choice or lower school should, on average, be undermatched with those peers. Therefore, for white students the measured second-choice impact may primarily identify an effect of being undermatched that can then be brought to bear as evidence on the effects of affirmative action on overmatched white students (recall they too are beneficiaries of affirmative action by Sander's hypothesis). Even this cleaner test requires the unsupported assumption that there is linearity across degrees of mismatch to make any causal claims about undermatching versus overmatching. While white students at their first-choice school will on average match the observable credentials of their white peers, there is still variance in incoming credentials among white students. This means that some white students will be undermatched and others will be overmatched at their first-choice school. So, attending a second-choice school would lead some white students to be undermatched to a greater degree than they already were and others to simply be less overmatched. While more white students attending their second-choice school should be undermatched than overmatched, claiming that undermatching is driving any observed effects requires the effects of mismatch be linear. 
That is, the decrease in overmatching would need to have an effect equal in magnitude to the increase in undermatching, cancelling out the limited overmatching and leaving just the dominant undermatching effect. Without this assumption, white students' outcomes at their second-choice school may not be useful as indirect evidence on overmatching.

But what about black and other underrepresented students? Even if the assumptions necessary to support a second-choice analysis of undermatching amongst white students hold, does the second-choice (undermatching) evidence based on white students inform the affirmative action (overmatching) claims about black and other minority students? It could if you believe that there are no direct or unobserved effects of race in undermatching and overmatching. Assuming race-neutrality in the selection and other mechanisms at play, the second-choice (undermatching) analysis on white students could well inform the claim that affirmative action (through overmatching) harms its minority beneficiaries. However, this assumption does not appear to be well-founded. Race may be correlated both with selection into a second-choice school and with the effects of undermatching or overmatching during law school.

Here it is important to note that Sander's argument draws heavily on the assumption of race-neutrality. That is, he assumes no racial differences or effects of race in student performance. In his model, it is only the mechanism of race-based affirmative action that causes minority students to become mismatched--specifically overmatched--and that is what leads to their underperformance in law schools and on state bar examinations. Other mechanisms causing overmatch, such as athletic or legacy preferences, would, by this argument, similarly lead to underperformance among athletes and the children of alumni. In Sander's model, but for the mischief caused by race-based affirmative action, which is merely an indirect effect creating the conditions for underperformance, black law students and lawyers would be indistinguishable from their white counterparts on all observable measures. Get rid of the distortionary influences of affirmative action, he suggests, and minority underperformance in law schools, on state bar exams and in professional outcomes will vanish along with it.

Notwithstanding the weight of contrary evidence, there remain more fundamental challenges to causal claims based on conventional mismatch arguments like those posed by Sander. Affirmative action is not the only distortionary influence operative in mismatch games that occur in law schools, on bar exams and in professional legal practice. It is exceedingly difficult to identify the mechanisms through which these influences operate and practically impossible to predict their effects. These effects play out and are typically realized in settings that reveal what is expected from all those present, which makes causal identification notoriously difficult. Mismatch encounters are nothing like double-blind experimental treatments. Participants and observers usually know or think they know who is favored to succeed in these encounters, and those beliefs can significantly impact the performance and conduct of participants and audience alike.

To illustrate the epistemological challenges here, consider a much simpler scenario in which a professional tennis player faces off against an amateur. Lest you dismiss the tennis illustration as inapt to the law school setting, take note that athletic mismatch and academic mismatch are not as attenuated as might at first be supposed. Both types of mismatch are mediated through psychological processes that can heighten or undermine physical and cognitive capacities. Presenting an athletic task in terms of (dis)favorable social comparisons can inspire or impede a player's performance, just as an academic task presented in a certain way or context can trigger downward social comparisons or the like to the benefit or detriment of a student's performance. The point is that mismatch effects are rarely only a matter of preexisting differences. To determine why this raises a fundamental challenge to the identification of benefits or harms from being mismatched, let's return to the tennis match.

While the skill levels between the amateur and professional are mismatched, it is unclear how each will perform compared to their baseline against competitors of the same abilities and how the mismatch will affect each player's future performance. Will the amateur rise to the additional challenge of facing a professional and outperform their baseline? Will they perform exactly at the level they typically do or might they succumb to nerves and miss shots that they would have landed had they been facing another amateur? Similarly, will the professional take the opportunity to land their shots perfectly, or might they lose focus and drop points that they would have finished off against another pro? It is unclear a priori which of these scenarios is most likely to occur and the outcome is likely to vary across individuals.

It's not that mismatch has no predictive power--it is quite likely that the professional would win this match. Similarly, one should not be too surprised if a student with incoming credentials substantially below the mean of their classmates performs less well than their peers in tasks predictably correlated with those incoming credentials. But, outside of a zero-sum context, that's not the only comparison or even the most relevant one. Crucially, we might ask how a mismatched student performs against herself in the counterfactual where there was no mismatch. Does being undermatched help or hurt a student compared to being well matched? What about being overmatched? Are the two dynamics symmetric, allowing us to draw inferences from one dynamic and apply them to the other? In particular, linear application of undermatching to overmatching may not be a reasonable assumption here. It is a basic insight of Prospect Theory that losses weigh on us more heavily than gains, and perhaps there is a similar pattern with overmatching and undermatching. What is the relationship between these two types of mismatch, and how ought we properly apply evidence of second-choice undermatching to an affirmative-action overmatching argument? We don't know.

Without a controlled experiment or a full grasp of the mechanisms at play, we must be careful in making causal claims. This is why Ayres & Brooks (2005) included data filters in their analysis. Any willingness to dispense with filters that provide a tighter match between the treatment and control groups, or to unreservedly embrace evidence from a second-choice framework, misses the broader point and the difficulty of the empirical questions at hand. Does affirmative action place black students in a more competitive environment? Yes, definitely. Should applicants who are admitted to schools where they are mismatched on incoming credentials attend those schools or aim lower (or higher)? Does that make them in effect wildcards, who are, as Thernstrom wrote, “[u]nable to stand the competition [and] . . . not make it to the end”? Does affirmative action make it less likely that black law students will become lawyers? On this empirical question, our second-choice analysis doesn't suggest so. In a more competitive environment, they will have to work harder and may not find it so easy to ace exams. That's true of everyone in more competitive environments. The bottom half of every class is always mismatched against the top.

Should a black student impacted by affirmative action or a white student with legacy benefits turn down Stanford Law School to achieve better grades and a higher chance of passing the bar on the first try? No second-choice analysis can answer this question; even one that found positive effects of undermatching on bar passage would leave the answer elsewhere. In life, we are regularly confronted with this “most common question: Play it safe, or take a risk?”

Even disregarding the foregoing critiques of the usefulness of second-choice analysis for assessing affirmative action programs, we are skeptical that some of the outcomes Sander uses in his second-choice analysis are of much use. Sander's article emphasizes second-choice results with first-year or overall law school grades as the outcome variable. However, these variables do not provide direct evidence on the question of whether affirmative action affects the number of black lawyers. Thinking back to the tennis example discussed above, we may ask not only how the players will perform in this individual match, but also how the mismatch will affect their long-term performance. This difference is particularly important to consider when assessing Sander's choice of second-choice outcome variables. Regardless of whether a student becomes less overmatched and more undermatched by attending their second-choice school, it is likely that they have shifted themselves up the grading curve among their classmates--even if that change in relative ranking comes at the expense of their overall learning experience. This may result in mismatch having different relationships with law school GPA and bar passage. The fact that students may move themselves up a grading curve but harm their overall learning experience by attending their second-choice school suggests that mismatch and law school GPA may have a relationship like that in Panel A of Fig. 1, while bar performance may exhibit an entirely different relationship like that in Panel B. Without stronger evidence than is currently available, it is not even possible to determine whether the outcome variables Sander utilizes are relevant to his primary question.

[. . .]

“Replication of mismatch research: Ayres, Brooks and Ho” does well to point out a coding error in Ayres & Brooks regarding respondents who failed to answer the second-choice question and to provide robustness checks for that paper. These robustness checks suggest that the strength of some results in Ayres & Brooks may be sensitive to specification, highlighting the difficulty of using the Bar Passage Study data for causal inference, a fact that should give pause in making unrestrained causal claims based on these data and analyses. Sander pursues the opposite tack, going too far in making an unbridled assertion of definitive evidence of mismatch between black law students and their schools. When implemented and interpreted correctly, many of Sander's results run counter to his claim that the “results add significantly to the body of research finding support for the law school mismatch hypothesis.” Just as reported in Ayres & Brooks, these results provide mixed evidence on whether mismatch occurs between law students and their schools, as measured by law school grades and first-time bar passage. These results do reaffirm that there is no evidence suggesting that the number of black lawyers, measured by eventual bar passage, is affected by academic mismatch.

After evaluating Sander's recent replication and reevaluating our own prior analysis, we find ourselves largely where we were in 2005, hesitant and doubtful about the empirical identification of mismatch given the limits of the data and methodological approaches, in contrast to Sander's zealous confidence. As before, even when observing the same results as us, Sander appears disposed to reach different conclusions. There's nothing essentially wrong with that. Commenting on our differing dispositions in 2005, Sander wrote: “Let me be clear: I do not question Ayres and Brooks's good faith in constructing their alternate methodologies and sounding a general negative tone[;] I think, rather, that their response provides an exceptionally good example of how even fair-minded researchers can look past the data when they have fastened too early upon conclusions they think they should reach.” We would, today, return the compliment, seeing no reason to question the good faith of fair-minded researchers in their reasoned methodological choices and approaches. Yet we do wonder if Sander himself has become too fastened, too early and too firmly on a conclusion while the data themselves remain ambivalent and inconclusive.

“There are,” as Sander observed, “many uncertainties built into any prediction about how a change to race-blind admissions would change the production of black lawyers.” Regrettably, those uncertainties are not resolved by the analyses of the Bar Passage and other data on which Sander and we relied. Torturing the data will not assure reliable confessions; they are too limited to say anything conclusive. Ultimately, we join Sander in the hope that improved data and research methods will allow clearer insight into the true impact of affirmative action. We are particularly hopeful that additional research will illuminate the institutional factors that affect law students' career outcomes including and beyond bar passage.


Ian Ayres, William K. Townsend Professor of Law, Yale Law School, 127 Wall St Room 254, New Haven CT 06511, USA. Professor Ayres has served as an expert witness in several cases assessing narrow tailoring of affirmative action in government procurement.

Richard Brooks, Emilie M. Bullowa Professor of Law, New York University School of Law, USA

Zachary Shelley, Research Fellow, Yale Law School, USA 
