Evidence · Commentary · 130 Harv. L. Rev. F. 301

Applying Empirical Psychology to Inform Courtroom Adjudication — Potential Contributions and Challenges

Modern Courts Commentary Series



While the American public grapples with the recently introduced concept of “alternative facts,” U.S. courts have their own categories of “facts” to contend with. The legal system has traditionally been oriented toward determining “adjudicative facts” — alleged specifics of a case that parties put forth through witness testimony and other forms of evidence to advance their legal claims.1 “Legislative facts” — information from outside the case that has “relevance to legal reasoning and the lawmaking process”2 — can also play a role in judicial adjudication. At the same time, many legal rules, doctrines, and procedures seem to assume as “fact” that judges and jurors process information and reason toward outcomes according to legal expectations.

This Commentary looks to how research at the intersection of law and psychology can help inform courts about cognitive realities that are pertinent to the cases before them. Empirical psychology studies can offer insights into law and legal decisionmaking, while testing legal assumptions to improve the accuracy and fairness of the legal system.

Psychologists have had a longstanding interest in legal psychology, but more recently a growing community of interdisciplinary scholars has been combining the tools of psychology and legal scholarship to ask new questions about the law. The resulting research is increasingly grounded in legal doctrine and procedure, with an eye toward normative and prescriptive implications for the legal system. Researchers have drawn upon the theories and methodologies of psychology to shed light on, inter alia, the mental processes of key legal players,3 the psychological dynamics underlying particular substantive areas of law,4 and general social and cognitive phenomena that may broadly, but often covertly, affect the operations of the legal system.5

To illustrate some of the potential contributions and challenges of applying psychology to inform courtroom adjudication, I will draw upon examples from my own research, which seeks to bring empirical understandings of the human mind into mainstream legal scholarship and discourse. The first line of work I describe investigates the operation of a psychological phenomenon that can potentially skew judicial decisionmaking, and points toward possible remedies that depend on decisionmakers being made aware of this cognitive effect. The second line of work explores potential entry points for legal misunderstanding and bias in lay decisionmaking, and speaks to the responsibility judges bear when conveying the law to jurors. These examples serve to demonstrate some types of insights that law-and-psychology research can offer courts, as well as some methodological limitations and potential means by which to address them. I will conclude by highlighting a few promising pathways toward facilitating judicial access to empirical psychology findings.

Motivated Applications of the Exclusionary Rule

One way psychology can help inform adjudication is by offering a window into cognitive processes that operate below the level of consciousness, even among judges. Consider the Fourth Amendment exclusionary rule, which generally bars the use of illegally obtained evidence in criminal cases no matter the crime charged.6 Do judges actually determine the admissibility of evidence without regard to the nature of the defendant’s alleged offense? Or are the exclusionary rule’s malleable exceptions (such as “inevitable discovery” and “good faith”)7 entry points for inadvertently motivated decisionmaking that could skew legal outcomes based on doctrinally irrelevant factors, such as the egregiousness of the defendant’s alleged crime?

One psychological phenomenon that I have explored in this context is “motivated cognition” — a human tendency to reason toward preferred outcomes by perceiving, interpreting, or evaluating information in a biased manner, without realizing one is doing so.8 Even when seeking to do nothing other than faithfully apply the law, legal decisionmakers may be susceptible to motivated cognition because “the more extensive [cognitive] processing caused by accuracy goals may facilitate the construction of justifications for desired conclusions.”9

To test for the operation of motivated cognition in admissibility judgments, I conducted a series of psychology experiments on the exclusionary rule and its “inevitable discovery” exception. This exception allows evidence that was illegally obtained by the police to be admitted if a judge concludes that it eventually would have been lawfully discovered were it not for the illegal search and seizure in question.10 My research showed that when lay participants acting as judges were faced with pivotal but illegally obtained evidence of a morally repugnant crime that triggered a strong motivation to punish (selling heroin to high school students), they tended to construe discovery of the evidence as “inevitable,” which enabled them to recommend that the “tainted” evidence be admitted under the legal exception.11 By contrast, when an identical illegal police search uncovered evidence of a less egregious crime (unlawfully selling marijuana to cancer patients), participants were over three times more likely to suppress the challenged evidence, construing the search as calling for application of the exclusionary rule without exception.12 This difference in suppression outcomes between the two cases appeared to be mediated by the decisionmakers’ perceptions of the defendant as more immoral and deserving of punishment in the heroin condition than in the marijuana condition.13
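
For readers who want a concrete sense of how a mediation finding of this kind is typically assessed (see note 13), the sketch below runs a Baron & Kenny–style analysis on simulated data in Python. The variable names, effect sizes, and data are hypothetical illustrations only; they do not reproduce the study’s actual measures, sample, or statistical models.

```python
# A minimal Baron & Kenny-style mediation sketch on simulated data.
# Variable names, effect sizes, and data are hypothetical; they do not
# reproduce the actual study's measures or analyses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400

# Condition: 1 = more egregious crime, 0 = less egregious crime.
condition = rng.integers(0, 2, size=n)

# Proposed mediator: perceived immorality of the defendant (simulated so
# that the more egregious condition raises it).
immorality = 3.0 + 1.5 * condition + rng.normal(0.0, 1.0, size=n)

# Outcome: whether the tainted evidence is admitted (simulated so that
# the probability of admission rises with the mediator).
p_admit = 1.0 / (1.0 + np.exp(-(-4.0 + 1.0 * immorality)))
admit = rng.binomial(1, p_admit)

# Step 1: total effect of condition on the admissibility decision.
total = sm.Logit(admit, sm.add_constant(condition)).fit(disp=0)

# Step 2: effect of condition on the proposed mediator.
a_path = sm.OLS(immorality, sm.add_constant(condition)).fit()

# Step 3: condition and mediator together; if the condition coefficient
# shrinks toward zero once the mediator is included, the pattern is
# consistent with mediation.
both = sm.add_constant(np.column_stack([condition, immorality]))
direct = sm.Logit(admit, both).fit(disp=0)

print(total.params)   # total effect of condition
print(a_path.params)  # condition -> mediator path
print(direct.params)  # direct effect controlling for the mediator
```

In practice, researchers often also bootstrap the indirect effect rather than rely on the step-wise comparison alone; the sketch is meant only to make the underlying logic visible.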

Furthermore, even though the police misconduct was exactly the same (and unambiguously illegal) in both the heroin and marijuana cases, the decisionmakers construed the police officers as less morally culpable and less deserving of negative consequences when their search happened to uncover evidence of the more egregious crime.14 This result is particularly noteworthy given that some of the U.S. Supreme Court’s recent rulings on the exclusionary rule have turned on the perceived degree of police wrongfulness in challenged searches.15 Moreover, critics of the rule have suggested replacing it with broader reliance on civil damages suits against offending officers.16 What is legally supposed to matter in such criminal suppression hearings or civil suits is “the extent to which the police have deviated from prescribed norms, not the extent to which the defendant has.”17 However, my results suggest that legally irrelevant feelings about the defendant’s crime may drive legal conclusions about evidentiary exclusion and police wrongdoing. Later, I will address the applicability and replication of these experimental findings vis-à-vis actual judges, and discuss additional research pointing toward a potential means of curtailing the demonstrated motivated cognition effect.

Legal Misunderstanding and Bias in “Constructions” of Criminality

In addition to making judges cognizant of psychological effects that could influence the legal determinations they make, empirical psychology can inform another important judicial function: instructing the jury. When jurors are tasked with applying highly discretionary legal standards, courts need to ensure that these lay decisionmakers are equipped with an accurate understanding of the law. The stakes in this process are high, especially in criminal trials. And yet the legal system is largely in the dark about how jurors absorb, interpret, and apply the factual and legal information they are given to determine criminal liability.

Criminal attempt is one area of law that may be especially susceptible to disconnects between legal expectations and cognitive realities in this regard, due to the ambiguity and vagueness of the legal standards that define the offense. The Model Penal Code’s “substantial step” test, which seeks to “extend the criminality of attempts” by imposing liability at an earlier point toward committing a crime, is legislatively intended to be relatively more prosecution-friendly than the common law’s “proximity” test, which theoretically sets the threshold for attempt liability closer to completing the crime.18 But both of these standards default to opaque language in defining when criminal liability attaches. At what point is a “step” toward an uncompleted crime “substantial” enough to merit criminal punishment? Or how “close” is “proximate” enough to be found guilty of attempt? Judges, lawmakers, and lawyers operate under a generally shared understanding of how these different tests draw their respective lines of liability, but does that trained legal understanding cohere with how jurors assign criminality? Drawing upon psychology theory, I suggest that lay decisionmakers who are called on to make criminal determinations in such circumstances of legal opacity may be at particular risk of delivering judgments that are vulnerable to legal misunderstanding and bias.

To test this hypothesis, I am currently conducting a series of experiments exploring the roles of facts and law in lay determinations of criminal attempt.19 My results thus far suggest that jury instructions on the different legal standards for attempt either make no significant difference to lay constructions of liability or, in some factual circumstances, surprisingly leave a defendant worse off under the theoretically more defense-friendly legal test. Furthermore, lay decisionmakers may be more likely to exhibit biases based on legally irrelevant factors, such as the defendant’s implied religion, when applying one legal test as compared to the other — potentially due in part to the language of the given laws. These and other findings of this research throw into question general legal assumptions about how lay decisionmakers operationalize facts and law, highlighting potentially critical gaps between what is supposed to govern and what actually does seem to govern lay constructions of criminality. Such possible ruptures in communicating legal expectations to jurors could hinder the duty of courts to enforce legislative intent, as well as broader legal values of equitable adjudication.

Pathways Toward Legal Reform

Theories and methodologies of psychology can be used not only to identify potential misalignments between assumptions of the law and cognitive tendencies of legal decisionmakers, but also to confront the downstream question of how such disconnects can be addressed. Pursuing this path is especially important when legal standards call for high degrees of judicial or juror discretion (as seen in the exclusionary and criminal attempt doctrines described above), because disparate and doctrinally unfaithful applications of the law in such arenas, even if unintended, risk weakening the legitimacy of the legal system.20

In regard to motivated cognition in legal judgments, I have shown that even indirectly raising consciousness about legally irrelevant but potentially motivating factors can go a significant way toward curtailing the influence of those factors21 — seemingly by piercing the “illusion of objectivity”22 under which the motivated cognition process operates. Furthermore, in my research on the exclusionary rule, directly forewarning participants that they could be influenced by the egregiousness of the defendant’s crime and encouraging them to confront this legally extrinsic motivation succeeded in thwarting its impact on their admissibility judgments.23

Once sufficiently replicated through research with both lay and judicial decisionmakers, such findings that reveal pathways toward curtailing motivated cognition could be operationalized in various ways. At the most basic level, simply raising judicial awareness by informing judges of research on decisionmaking points shown to be vulnerable to cognitive biases may help them guard against these effects in their own legal judgments. Anecdotally, judges with whom I have shared my above-described research findings on motivated applications of the exclusionary rule have noted that just knowing about this effect now gives them pause when making suppression decisions. At a more systemic level, academics are already working with courts and judicial centers to disseminate relevant empirical psychology findings to the judiciary through conferences and trainings.24 Expanding these opportunities could serve the justice system well.

Going one step further, judges could institute a practice of reading aloud awareness-generating instructions in court to acknowledge and potentially curtail cognitive biases — not only when instructing the jury, but also before their own rulings on significant legal issues, such as the admissibility of pivotal evidence. At a minimum, the symbolic value of judges formally acknowledging the need to avoid potential biases as a step in even their own decisionmaking processes would demonstrate a commitment to fair adjudication that could bolster public confidence in courts.25 Psychology-based efforts toward reform could thus aim to improve both the actual and perceived integrity of the legal system.

Before such interventions are pursued, however, further work is needed to explore their feasibility and efficacy on jurors in real courtroom contexts, as well as on judges. Additional research is also needed to shed light on when and why instructions on potential decisionmaking biases are likely to be ineffective. For example, one of the key findings in my research on motivated applications of the exclusionary rule was that not all types of awareness-generating instructions were successful in curtailing this effect. While instructions grounded in a psychological model of bias correction26 worked, instructions that informed participants either of the legal rationales underlying the exclusionary rule or of existing experimental findings on motivated cognition had no significant effect on curbing the phenomenon.27 Furthermore, awareness-generating interventions are unlikely to remedy consciously motivated legal decisionmaking, which calls for different normative and research inquiries.

Potential risks of “debiasing” interventions also need to be better understood. In some circumstances, instructions may backfire by eliciting psychological denial or rejection depending on the type of bias targeted;28 or judges and jurors may engage in “overcorrection” that unfairly skews outcomes in the opposite direction.29 Thus, there may be as much for researchers and courts to learn from attempted remedies that do not work as from those that do.

In addition to applying psychology findings to guide debiasing efforts in the courtroom, judges can draw upon empirical data to improve jury instructions on the law — an area in which much relevant research has been and is being conducted.30 In my current studies on lay applications of attempt doctrine, participants’ written explanations indicate, inter alia, that those who misunderstood the given legal standards for attempt liability seemed to pick out salient terms — like “substantial” and “proximity,” which appear in standards across various areas of law — and construed them in a manner inconsistent with legislative intent.31 Such findings illustrate how empirical investigations can identify legal concepts and terms that are at high risk of lay misunderstanding, and help lawmakers and courts rethink ways in which to convey them, so that the cognitive experiences of jurors better align with the expectations and goals of the legal system.

Methodological Limitations and Alternatives

While psychology studies like the ones described above allow for causal insights that could inform legal adjudication, questions about their “internal,” “external,” and “ecological” validity, as explained below, may need to be addressed for the results to be of use to courts. Furthermore, the field of psychology has recently been embroiled in debates about a “replication crisis” stemming from failures to reproduce published experimental results.32 This may make judges question whether conclusions from the discipline can be generally relied upon. The suggestions below, although not intended to be comprehensive, offer some routes for addressing validity and replicability concerns.

Internal Validity: Correctly Identifying Cause and Effect

The internal validity of a study — the extent to which a variable of interest can be said to cause an observed effect — depends on tightly controlling for confounding factors that could otherwise be driving the effect.33 For example, one might ask whether decisionmakers are more likely to apply the exclusionary rule’s inevitable discovery exception to admit challenged evidence in cases of more egregious crime not because they are more motivated to punish, but rather due to some other factor having to do with the nature of the offense — such as police officers devoting more resources to solving a more serious crime.

Experiments can control for such potentially confounding factors by holding constant all facts other than the manipulated variable. In my exclusionary rule studies, the police search was identical in both the heroin and marijuana cases; there was nothing in the facts to suggest that the officers knew beforehand what crime their search would uncover; and even a possible alternative avenue for discovery of the evidence was held consistent across both scenarios.

Internal validity concerns can additionally be addressed through variations in study design. For example, further studies on the exclusionary rule could present the same type of drug and target buyers in both experimental conditions, and test the effect of crime egregiousness by manipulating only the purpose for which the drug was being illegally sold (such as therapeutic versus recreational use). Replicating experimental results across different designs can help bolster not only the internal validity of findings, but also their credibility in response to concerns about the reproducibility of reported results.
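
As a concrete illustration of how a categorical outcome like suppression versus admission can be compared across two such conditions, the sketch below runs a simple chi-square test of independence. The counts are hypothetical, loosely scaled to the proportions reported for the original heroin and marijuana conditions (see note 44), and stand in for whatever a replication with a new design would actually produce.

```python
# Hypothetical comparison of admission rates across two experimental
# conditions; the counts are illustrative only, not actual study data.
from scipy.stats import chi2_contingency

# Rows: experimental condition; columns: [admitted, suppressed].
table = [
    [60, 40],  # more egregious condition: 60 of 100 admit the evidence
    [15, 85],  # less egregious condition: 15 of 100 admit the evidence
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
```

If a redesigned study (for example, one varying only the purpose of the sale) produced a comparably lopsided table, that consistency would bolster both the internal validity of the inference and its credibility against replication concerns.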

External and Ecological Validity: Does the Effect Generalize Outside the Lab?

Tight experimental controls for internal validity often entail a trade-off with external validity: the “generalizability” of an effect in terms of whether it holds up across settings and persons beyond the experimental circumstances; and with the related concept of ecological validity: the “representativeness” of an effect in terms of whether the experimental environment, materials, and tasks approximate real-world contexts, stimuli, and behaviors.34 For example, my exclusionary rule experiments reflect two potential concerns in this regard: (1) they used lay participants acting as judges, whereas real admissibility decisions are made by professional judges; and (2) they presented hypothetical cases with controlled facts, whereas judicial responses may be different in actual legal cases with real-life consequences.

The first of these concerns — the question of whether the legal training and repeat decisionmaking experiences of actual judges protect them from the motivated cognition effect seen among lay decisionmakers — is empirically testable thanks to members of the bench who participate in research studies. The resulting experimental findings suggest a complex cognitive landscape: professional judges also seem to be susceptible to motivated decisionmaking triggered by legally irrelevant information, but may be more resistant to some forms of cognitive and ideological biases compared with ordinary citizens.35 Regarding crime egregiousness and the exclusionary rule in particular, Magistrate Judge Andrew Wistrich and Professors Jeffrey Rachlinski and Chris Guthrie tested a variation of my heroin/marijuana suppression paradigm with actual judges and found that, like the lay participants in my studies, members of the bench were significantly more likely to admit challenged evidence when a case involved heroin than when it involved marijuana.36

Next, to address whether judicial rulings in real cases reflect the motivated outcomes seen among both lay people and judges in experimental settings, I am collaborating with two political scientists, Professors Jeffrey Segal and Benjamin Woodson, to test the effect of crime severity in approximately 500 actual federal appellate search-and-seizure decisions.37 As predicted, the analyses indicate that, controlling for the intrusiveness of the police search and the political ideology of the authoring judge, the probability of judges admitting challenged evidence is significantly higher in cases involving more severe crimes.
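
To give a rough sense of the kind of case-level model such an analysis involves, the sketch below fits a logistic regression of admission outcomes on crime severity while controlling for search intrusiveness and judicial ideology. The file name, column names, and coding are hypothetical placeholders; they are not the actual dataset or specification used in the study described above.

```python
# Sketch of a case-level logistic regression: does crime severity predict
# admission of challenged evidence, controlling for search intrusiveness
# and the authoring judge's ideology? All names below are hypothetical
# placeholders, not the actual study's data or specification.
import pandas as pd
import statsmodels.formula.api as smf

cases = pd.read_csv("appellate_search_cases.csv")  # hypothetical dataset

model = smf.logit(
    "evidence_admitted ~ crime_severity + search_intrusiveness + judge_ideology",
    data=cases,
).fit()

print(model.summary())
# A positive, significant coefficient on crime_severity would indicate
# higher odds of admitting the evidence in more severe cases, holding the
# controls constant.
```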

While this method of analyzing existing cases also has its potential limitations,38 including the impossibility of cleanly isolating variables in real legal disputes, such studies can work in conjunction with experimental findings to contribute different dimensions of validity.39 Arriving at consistent answers to a research question through a combination of different interdisciplinary paths — not just the ones described here, but also other approaches like case studies and interviews — can enable more nuanced and confident insights than any one methodology in isolation can provide.40 Furthermore, a “mixed methods” approach that provides converging evidence in support of a shared hypothesis can help address the above-noted concern about experimental replication.

Ideally, law-and-psychology researchers should move toward conducting more studies directly in the field: in real courtrooms, with real judges and jurors.41 If courts facilitate such access, it would in turn help promote the generation of research that is likely to be more directly applicable to real legal adjudication. Furthermore, there is a need for more empirical investigations into whether and how the dynamics of group decisionmaking, in juries or panels of judges, impact cognitive effects demonstrated at the individual level.42

Probabilistic “Facts”

Another methodological caveat to applying empirical psychology to courtroom adjudication is that the findings generated by such research cannot be regarded as “facts” in the sense of indisputable information. Generally, the answers, predictions, and understandings that social and behavioral science studies provide tend to be “complex, . . . probabilistic (that is, expressed in terms of the increased likelihood of an event occurring rather than as a certainty), . . . and subject to revision.”43 To help guard against overstated or premature legal applications of such findings, their inherently inference-based and evolving nature should be explicitly confronted when presenting the work to courts.
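
One way to keep that probabilistic character in view when presenting findings is to report effects as estimated rates with uncertainty intervals rather than as point claims. The short sketch below does this for two hypothetical conditions; the counts are illustrative, not actual study data.

```python
# Reporting condition-level admission rates as estimates with uncertainty,
# rather than as certainties. Counts are hypothetical illustrations.
from statsmodels.stats.proportion import proportion_confint

conditions = {
    "more egregious crime": (60, 100),  # (admitted, total participants)
    "less egregious crime": (15, 100),
}

for label, (admitted, total) in conditions.items():
    low, high = proportion_confint(admitted, total, alpha=0.05, method="wilson")
    print(f"{label}: {admitted / total:.0%} admitted "
          f"(95% CI: {low:.0%} to {high:.0%})")
```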

These features do not, however, negate the value that such studies can contribute to the legal system. In fact, the probabilistic nature of empirical findings could help identify pathways toward legal reform. For example, in the exclusionary rule studies discussed above, not all participants were equally susceptible to the motivated cognition effect; a minority of laypeople and judges suppressed tainted evidence even when faced with the more egregious crime.44 Such differences between individuals could be pursued in future research to identify whether and why particular cognitive characteristics or circumstances make some people more impervious to decisionmaking biases. This research could then be applied to guide more targeted instructional or training interventions for judges and jurors.

Additionally, notwithstanding the inherently probabilistic nature of empirical psychology, once enough evidence has been gathered and replicated on phenomena that are relevant to legal decisionmaking — through testing of verifiable hypotheses using multiple measures and methodologies to maximize validity — such work can present a compelling prima facie case for judicial consideration. For instance, the large body of research on cognitive biases in legal judgments, from which I have highlighted just a few examples here, suggests that judges should consider the likelihood of such effects in their own decisionmaking, or when instructing jurors. Furthermore, researchers and courts alike can glean lessons from the successes and setbacks of areas of empirical psychology that have already permeated and influenced the legal system, such as the extensive work on the fallibility of eyewitness testimony.45

Facilitating Access to Empirical Findings

Prospects of a more psychologically enlightened legal system have already inspired efforts toward facilitating the exchange of information between research labs and courtrooms. One advancement in this direction has been that law-and-psychology scholars are increasingly targeting their research not only toward interdisciplinary and empirical journals46 but also toward generalist law reviews — thereby augmenting the possibility of a wider network of connections to the bench and the bar.

While the production of original empirical findings is prized in the academy, legal scholars with relevant disciplinary expertise should also be encouraged to regularly conduct literature reviews and meta-analyses that synthesize and examine the fast-growing bodies of research accumulating on topics applicable to courtroom adjudication, with an emphasis on highlighting practical legal implications.47 Such efforts would not only make the research more identifiable and accessible to practitioners and judges but would also ensure critical peer oversight of the findings — for just as with any process of human cognition, there may be inadvertent biases operating in researchers’ interpretations of their own data.48 If analytical reviews of existing empirical literature are conducted independently of the facts of any particular litigation, judges may be more inclined to regard them as objectively reliable sources of information if they are later put forward to inform a legal case — including by the researchers themselves acting as amici or court-appointed expert witnesses49 — than if such analyses were specifically commissioned for the case in question.

Another positive direction for broadening productive connections between the academy and the bench has been the expansion of informal exchanges outside the courtroom, with judges participating in research colloquiums and academics presenting their research findings at judicial conferences. Critically, the communication is increasingly two-sided — with judges not only learning about potentially relevant scholarship, but also informing researchers of the types of studies they would find most helpful on the bench — through informal engagements or judicial opinions that highlight the empirical testability of questions that come before courts.50

Finally, an implication of these proposals to bring more empirical psychology to bear in the courtroom is that law schools should consider moving toward training their students in basic empirical and psychological literacy. That way, future litigators, judicial law clerks, and judges in the system would be equipped to assess for themselves data relevant to the legal cases that they try, process, or adjudicate.51 Some law schools are already offering courses or entire programs on empirical methods, statistics, and law and psychology.52 More broadly, legal instructors can also highlight applications of interdisciplinary and empirical research in mainstream doctrinal classes, including fundamental “black letter law” subjects like Contracts, Criminal Law, Evidence, and Torts — all areas in which relevant empirical psychology work has been conducted.53 To practice implementation, in-class exercises or policy questions on exams that call upon students to think critically about improving legal doctrines, practices, or outcomes could then include a component asking students to consider what kinds of empirical research would help inform their proposals.

In my experience of applying these pedagogical approaches, an added benefit is that the fresh perspectives law students offer can inspire and inform the development of empirical psychology scholarship. Thus, notwithstanding debates about whether law schools should focus more on “training lawyers for practice” versus “prioritiz[ing] academic scholarship . . . at least in part by adopting the methods of the social sciences and other disciplines,”54 the strategies proposed herein are consistent with the view that these two missions not only can “coexist . . . [as they] have for many decades,”55 but also can productively propel each other in novel directions.56

The law is replete with potentially erroneous assumptions about how the human mind works, many of which have been around for centuries and continue to operate unchecked in the legal system. Fortunately, psychology’s theoretical and empirical tools for rigorously testing these assumptions are being employed with an eye toward informing and improving legal adjudication. The rapid growth and increasing accessibility of such research may call for legal checks on the judiciary’s use of it, such as notice to the parties and opportunities to rebut.57 However, with appropriate safeguards both in the scientific generation of the data and in its legal applications, empirical psychology can provide valuable insights into factual, legal, and normative dimensions of courtroom decisionmaking.


* Assistant Professor, University of California, Berkeley, School of Law. Thanks to Vincent Burnton and Carly Giffin for research assistance; to participants in Berkeley Law’s Junior Working Ideas Group for pivotal feedback on an earlier draft; and to Joshua Davis, Judge William Fletcher, Justin McCrary, Tracey Meares, Joy Milligan, Saira Mohamed, Janice Nadler, Victoria Plaut, Kevin Quinn, Jeffrey Rachlinski, Mary Rose, Andrea Roth, Dan Simon, Holger Spamann, and Michael Webster for helpful comments.

Footnotes
  1. ^ See Fed. R. Evid. 201(a) advisory committee’s note (1972 Proposed Rules) (drawing upon Kenneth Culp Davis, An Approach to Problems of Evidence in the Administrative Process, 55 Harv. L. Rev. 364 (1942)).

  2. ^ Id.; see also John Monahan & Laurens Walker, Judicial Use of Social Science Research, 15 Law & Hum. Behav. 571, 581 (1991) (discussing use of social science research to determine adjudicative versus legislative facts, as well as “social framework” facts that include elements of both).

  3. ^ See generally, e.g., Neil Vidmar & Valerie P. Hans, American Juries (2007); Shari Seidman Diamond et al., Damage Anchors on Real Juries, 8 J. Empirical Legal Stud. (Special Issue) 148 (2011); Chris Guthrie, Jeffrey J. Rachlinski & Andrew J. Wistrich, Inside the Judicial Mind, 86 Cornell L. Rev. 777 (2001); Rebecca Hollander-Blumoff & Tom R. Tyler, Procedural Justice in Negotiation: Procedural Fairness, Outcome Acceptance, and Integrative Potential, 33 Law & Soc. Inquiry 473 (2008); L. Song Richardson & Phillip Atiba Goff, Essay, Implicit Racial Bias in Public Defender Triage, 122 Yale L.J. 2626 (2013); Paul H. Robinson & John M. Darley, Intuitions of Justice: Implications for Criminal Law and Justice Policy, 81 S. Cal. L. Rev. 1 (2007); Tom R. Tyler, Phillip Atiba Goff & Robert J. MacCoun, The Impact of Psychological Science on Policing in the United States: Procedural Justice, Legitimacy, and Effective Law Enforcement, 16 Psychol. Sci. Pub. Int. 75 (2015); Neil Vidmar, James E. Coleman Jr. & Theresa A. Newman, Rethinking Reliance on Eyewitness Confidence, 94 Judicature 16 (2010).

  4. ^ See generally, e.g., Michelle Wilde Anderson & Victoria C. Plaut, Property Law: Implicit Bias and the Resilience of Spatial Colorlines, in Implicit Racial Bias Across the Law 25 (Justin D. Levinson & Robert J. Smith eds., 2012); Jennifer K. Robbennolt & Valerie P. Hans, The Psychology of Tort Law (2016); Michael J. Saks & Barbara A. Spellman, The Psychological Foundations of Evidence Law (2016); Dan Simon, In Doubt: The Psychology of the Criminal Justice Process (2012); Linda Hamilton Krieger & Susan T. Fiske, Behavioral Realism in Employment Discrimination Law: Implicit Bias and Disparate Treatment, 94 Calif. L. Rev. 997 (2006); Victor D. Quintanilla & Cheryl R. Kaiser, The Same-Actor Inference of Nondiscrimination: Moral Credentialing and the Psychological and Legal Licensing of Bias, 104 Calif. L. Rev. 1 (2016); Tess Wilkinson-Ryan & Jonathan Baron, Moral Judgment and Moral Heuristics in Breach of Contract, 6 J. Empirical Legal Stud. 405 (2009).

  5. ^ See generally, e.g., Ideology, Psychology, and Law (Jon Hanson ed., 2012); Kenworthey Bilz, Testing the Expressive Theory of Punishment, 13 J. Empirical Legal Stud. 358 (2016); Dan M. Kahan et al., “They Saw a Protest”: Cognitive Illiberalism and the Speech-Conduct Distinction, 64 Stan. L. Rev. 851 (2012); Jerry Kang et al., Implicit Bias in the Courtroom, 59 UCLA L. Rev. 1124 (2012); Janice Nadler & Mary-Hunter McDonnell, Moral Character, Motive, and the Psychology of Blame, 97 Cornell L. Rev. 255 (2012); Dan Simon, A Third View of the Black Box: Cognitive Coherence in Legal Decision Making, 71 U. Chi. L. Rev. 511 (2004); Avani Mehta Sood, Motivated Cognition in Legal Judgments — An Analytic Review, 9 Ann. Rev. L. & Soc. Sci. 307 (2013).

  6. ^ See Mapp v. Ohio, 367 U.S. 643, 655 (1961). See generally Avani Mehta Sood, Cognitive Cleansing: Experimental Psychology and the Exclusionary Rule, 103 Geo. L.J. 1543, 1549–50 (2015).

  7. ^ See Sood, supra note 6, at 1552; see also Nix v. Williams, 467 U.S. 431, 455 (1984) (inevitable discovery); United States v. Leon, 468 U.S. 897, 913 (1984) (good faith).

  8. ^ See Sood, supra note 6, at 1560–64; see also Ziva Kunda, The Case for Motivated Reasoning, 108 Psychol. Bull. 480 (1990).

  9. ^ Kunda, supra note 8, at 487; see also Charles S. Taber et al., The Motivated Processing of Political Arguments, 31 Pol. Behav. 137, 148–49 (2009).

  10. ^ See Nix, 467 U.S. at 445.

  11. ^ Sood, supra note 6, at 1564–80.

  12. ^ Id.

  13. ^ Id. at 1572. Mediation analysis is a statistical method of assessing whether an observed relationship between two variables is explained by a third variable. Reuben M. Baron & David A. Kenny, The Moderator-Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic, and Statistical Considerations, 51 J. Personality & Soc. Psychol. 1173, 1173 (1986).

  14. ^ Sood, supra note 6, at 1573–74.

  15. ^ See, e.g., Herring v. United States, 555 U.S. 135, 144 (2009) (upholding admissibility of evidence when police error does not rise to the level of “deliberate, reckless, or grossly negligent conduct, or in some circumstances recurring or systemic negligence”); see also Sood, supra note 6, at 1558–60, 1581–82.

  16. ^ See Sood, supra note 6, at 1589–90.

  17. ^ See Yale Kamisar, “Comparative Reprehensibility” and the Fourth Amendment Exclusionary Rule, 86 Mich. L. Rev. 1, 9–10 (1987) (emphasis added).

  18. ^ See Model Penal Code and Commentaries § 5.01 cmt. 1 (Am. Law Inst. 1985); Herbert Wechsler et al., The Treatment of Inchoate Crimes in the Model Penal Code of the American Law Institute: Attempt, Solicitation, and Conspiracy, 61 Colum. L. Rev. 571, 593–95 (1961).

  19. ^ Avani Mehta Sood, The Lay of the Law: Misunderstanding and Bias in Psychological Constructions of Criminality (Aug. 22, 2017) (unpublished manuscript) (on file with the Harvard Law School Library).

  20. ^ See Tom R. Tyler, Procedural Justice, Legitimacy, and the Effective Rule of Law, 30 Crime & Just. 283, 350 (2003).

  21. ^ See Avani Mehta Sood & John M. Darley, Essay, The Plasticity of Harm in the Service of Criminalization Goals, 100 Calif. L. Rev. 1313, 1342–45 (2012); Sood, supra note 5, at 320–22; see also Janice Nadler, Blaming as a Social Process: The Influence of Character and Moral Emotion on Blame, 75 Law & Contemp. Probs., no. 2, 2012, at 1, 25–26.

  22. ^ Tom Pyszczynski & Jeff Greenberg, Toward an Integration of Cognitive and Motivational Perspectives on Social Inference: A Biased Hypothesis-Testing Model, in 20 Advances in Experimental Social Psychology 297, 302 (Leonard Berkowitz ed., 1987) (emphasis omitted).

  23. ^ Sood, supra note 6, at 1591–99.

  24. ^ See, e.g., Kang et al., supra note 5, at 1175–77.

  25. ^ See Andrew E. Taslitz, Hypocrisy, Corruption, and Illegitimacy: Why Judicial Integrity Justifies the Exclusionary Rule, 10 Ohio St. J. Crim. L. 419, 459 (2013); Tyler, supra note 20, at 284.

  26. ^ See generally Duane T. Wegener & Richard E. Petty, The Flexible Correction Model: The Role of Naive Theories of Bias in Bias Correction, in 29 Advances in Experimental Social Psychology 141 (Mark P. Zanna ed., 1997).

  27. ^ Sood, supra note 6, at 1591–96.

  28. ^ See, e.g., Dan M. Kahan, Culture, Cognition, and Consent: Who Perceives What, and Why, in Acquaintance-Rape Cases, 158 U. Pa. L. Rev. 729, 753 (2010); Kang et al., supra note 5, at 1183–84; Joel D. Lieberman & Jamie Arndt, Understanding the Limits of Limiting Instructions: Social Psychological Explanations for the Failures of Instructions to Disregard Pretrial Publicity and Other Inadmissible Evidence, 6 Psychol. Pub. Pol’y & L. 677, 693–703 (2000); Sood, supra note 5, at 321–22; Sood, supra note 6, at 1596–99.

  29. ^ See Ehud Guttel, Overcorrection, 93 Geo. L.J. 241, 248–49 (2004); Sood, supra note 6, at 1598–99.

  30. ^ See, e.g., Shari Seidman Diamond, Beth Murphy & Mary R. Rose, The “Kettleful of Law” in Real Jury Deliberations: Successes, Failures, and Next Steps, 106 Nw. U. L. Rev. 1537 (2012); Phoebe C. Ellsworth & Alan Reifman, Juror Comprehension and Public Policy: Perceived Problems and Proposed Solutions, 6 Psychol. Pub. Pol’y & L. 788 (2000); Laurence J. Severance, Edith Greene & Elizabeth F. Loftus, Toward Criminal Jury Instructions that Jurors Can Understand, 75 J. Crim. L. & Criminology 198 (1984); Peter Meijes Tiersma, Reforming the Language of Jury Instructions, 22 Hofstra L. Rev. 37 (1993).

  31. ^ See Sood, supra note 19.

  32. ^ Compare Open Sci. Collaboration, Estimating the Reproducibility of Psychological Science, 349 Science 943 (2015), with Daniel T. Gilbert et al., Comment on “Estimating the Reproducibility of Psychological Science,” 351 Science 1037 (2016) (responding to Open Sci. Collaboration, supra).

  33. ^ See Craig A. Anderson & Brad J. Bushman, External Validity of “Trivial” Experiments: The Case of Laboratory Aggression, 1 Rev. Gen. Psychol. 19, 20–21 (1997).

  34. ^ Marilynn B. Brewer, Research Design and Issues of Validity, in Handbook of Research Methods in Social and Personality Psychology 3, 12 (Harry T. Reis & Charles M. Judd eds., 2000); see also Anderson & Bushman, supra note 33, at 21–22; Brian H. Bornstein, The Ecological Validity of Jury Simulations: Is the Jury Still Out?, 23 Law & Hum. Behav. 75, 75–76 (1999); Shari Seidman Diamond, Illuminations and Shadows from Jury Simulations, 21 Law & Hum. Behav. 561, 563–67 (1997).

  35. ^ See, e.g., Guthrie, Rachlinski & Wistrich, supra note 3, at 784; Dan M. Kahan et al., “Ideology” or “Situation Sense”? An Experimental Investigation of Motivated Reasoning and Professional Judgment, 164 U. Pa. L. Rev. 349, 350, 391–97 (2016); Richard E. Redding & N. Dickon Reppucci, Effects of Lawyers’ Socio-Political Attitudes on Their Judgments of Social Science in Legal Decision Making, 23 Law & Hum. Behav. 31, 48–49 (1999); Holger Spamann & Lars Klöhn, Justice Is Less Blind, and Less Legalistic, than We Thought: Evidence from an Experiment with Real Judges, 45 J. Legal Stud. 255, 256–59 (2016); Andrew J. Wistrich, Chris Guthrie & Jeffrey J. Rachlinski, Can Judges Ignore Inadmissible Information? The Difficulty of Deliberately Disregarding, 153 U. Pa. L. Rev. 1251, 1251–52 (2005).

  36. ^ Andrew J. Wistrich, Jeffrey J. Rachlinski & Chris Guthrie, Heart Versus Head: Do Judges Follow the Law or Follow Their Feelings?, 93 Tex. L. Rev. 855, 892 & n.187 (2015).

  37. ^ Jeffrey Segal, Avani Mehta Sood & Benjamin Woodson, Does Crime Severity Influence Judges in Search-and-Seizure Cases? An Empirical Triangulation of Motivated Admissibility Decisions (June 30, 2017) (unpublished manuscript) (on file with the Harvard Law School Library).

  38. ^ See, e.g., Kahan et al., supra note 35, at 353–54, 357–63.

  39. ^ See Wistrich, Guthrie & Rachlinski, supra note 35, at 1322; Wistrich, Rachlinski & Guthrie, supra note 36, at 900–01.

  40. ^ See Neil Vidmar, Civil Juries in Ecological Context: Methodological Implications for Research, in Civil Juries and Civil Justice 35, 57–64 (Brian H. Bornstein et al. eds., 2008); Segal, Sood & Woodson, supra note 37.

  41. ^ See, e.g., Bornstein, supra note 34, at 87–88; Shari Seidman Diamond et al., Juror Discussions During Civil Trials: Studying an Arizona Innovation, 45 Ariz. L. Rev. 1 (2003).

  42. ^ Existing literature on the effects of group deliberation on legal judgments has been mixed. Compare, e.g., Harry Kalven Jr. & Hans Zeisel, The American Jury 488–89 (1966) (finding little difference between pre- and post-deliberation jury decisions), and Norbert L. Kerr, Robert J. MacCoun & Geoffrey P. Kramer, Bias in Judgment: Comparing Individuals and Groups, 103 Psychol. Rev. 687, 687 (1996) (meta-analysis of empirical literature finding “no clear or general pattern” of “conditions under which individuals are both more and less biased than groups”), with Diamond, supra note 34, at 565 (providing examples of studies indicating significant effects of deliberation), and Jeffrey Kerwin & David R. Shaffer, Mock Jurors Versus Mock Juries: The Role of Deliberations in Reactions to Inadmissible Testimony, 20 Personality & Soc. Psychol. Bull. 153 (1994) (finding mock juries who participate in deliberations are more likely to follow instructions to ignore inadmissible information than mock jurors responding individually without deliberation).

  43. ^ Mark Costanzo & Daniel Krauss, Psychology and Law: A Cautious Alliance, in Forensic and Legal Psychology 4, 8 (2012).

  44. ^ Sood, supra note 6, at 1582–83 (reporting that “60% of participants in the heroin condition admitted the tainted evidence and construed its lawful discovery as inevitable, which was in significant contrast to the mere 15% of participants who did so in the marijuana condition but was not a uniform response to the more egregious crime”); see also Wistrich, Rachlinski & Guthrie, supra note 36, at 892.

  45. ^ See, e.g., Elizabeth F. Loftus, Eyewitness Testimony (7th prtg. 1996); Kenneth A. Deffenbacher et al., A Meta-Analytic Review of the Effects of High Stress on Eyewitness Memory, 28 Law & Hum. Behav. 687, 687 (2004); Elizabeth F. Loftus, 25 Years of Eyewitness Science . . . Finally Pays Off, 5 Persp. on Psychol. Sci. 556, 557 (2013); Nancy K. Steblay, Scientific Advances in Eyewitness Identification Evidence, 41 Wm. Mitchell L. Rev. 1090, 1092, 1126–28 (2015). The New Jersey judiciary, for example, has drawn upon this body of research to rethink its approach to eyewitness identification. See State v. Henderson, 27 A.3d 872, 877 (N.J. 2011) (calling into question “the current legal framework for analyzing the reliability of eyewitness identifications” based on “a vast body of scientific research about human memory” and a Special Master’s evaluation of this research, including “testimony by seven experts and . . . hundreds of scientific studies”); Supreme Court Comm. on Criminal Practice, Report of the Supreme Court Criminal Practice Committee on Revisions to the Court Rules Addressing Recording Requirements for Out-of-Court Identification Procedures and Addressing the Identification Model Charges (2012) (recommending changes to New Jersey court rules and jury instructions on eyewitness identification as called for by State v. Henderson); Press Release, New Jersey Judiciary, Supreme Court Releases Eyewitness Identification Criteria for Criminal Cases (July 19, 2012), http://web.archive.org/web/20170509092130/http://www.judiciary.state.nj.us/pressrel/2012/pr120719a.html (announcing publication of revised jury charges and court rules relating to eyewitness identification); Athan P. Papailiou, David V. Yokum & Christopher T. Robertson, The Novel New Jersey Eyewitness Instruction Induces Skepticism but Not Sensitivity, 10 PLoS ONE e0142695 (2015) (experimentally testing and providing a “mixed review of the efficacy” of New Jersey’s revised jury instructions on eyewitness testimony, id. at 16–17).

  46. ^ See Theodore Eisenberg, Why Do Empirical Legal Scholarship?, 41 San Diego L. Rev. 1741, 1743–46 (2004).

  47. ^ See, e.g., Kang et al., supra note 5 (reviewing psychology literature on implicit bias, demonstrating its application to civil and criminal trials, and proposing intervention strategies to counter biases among judges and jurors); Sood, supra note 5 (analyzing studies on motivated cognition in legal judgments, proposing a framework for organizing this body of work, discussing its practical legal applications, and highlighting areas that require further investigation); see also Richard H. McAdams & Thomas S. Ulen, Symposium: Empirical and Experimental Methods in Law — Introduction, 2002 U. Ill. L. Rev. 791, 798 (encouraging law reviews to publish empirical literature reviews for “understanding the state of empirical knowledge in the legal area surveyed, and especially for determining what future empirical projects would be useful in a given area”).

  48. ^ See generally Robert J. MacCoun, Biases in the Interpretation and Use of Research Results, 49 Ann. Rev. Psychol. 259 (1998).

  49. ^ See James R. Acker, Social Science in Supreme Court Criminal Cases and Briefs: The Actual and Potential Contribution of Social Scientists as Amici Curiae, 14 Law & Hum. Behav. 25 (1990); see also Fed. R. Evid. 706 (allowing for court-appointed expert witnesses).

  50. ^ See, e.g., Mark W. Bennett et al., How Can Social Science Improve Judicial Decision Making?, Panel at the American Association of Law Schools Annual Meeting (Jan. 4, 2017); see also Tracey L. Meares, Three Objections to the Use of Empiricism in Criminal Law and Procedure — and Three Answers, 2002 U. Ill. L. Rev. 851, 855 & n.12 (citing Illinois v. Wardlow, 528 U.S. 119, 124–25 (2000); Martinez v. Court of Appeal of Cal., 528 U.S. 152, 164–65 (2000) (Breyer, J., concurring)).

  51. ^ See Terry Hutchinson, Empirical Facts: A Rationale for Expanding Lawyers’ Methodological Expertise, 3 Law & Method 53 (2013).

  52. ^ See, e.g., Robert M. Lawless, Jennifer K. Robbennolt & Thomas S. Ulen, Empirical Methods in Law (2010) (casebook instructing law students on why and how to gather, evaluate, and communicate empirical data); Eisenberg, supra note 46, at 1742 (providing examples of law school initiatives for encouraging empirical literacy).

  53. ^ See sources cited supra note 4.

  54. ^ Justin McCrary, Joy Milligan & James Phillips, The Ph.D. Rises in American Law Schools, 1960–2011: What Does It Mean for Legal Education?, 65 J. Legal Educ. 543, 544 (2016).

  55. ^ Id.

  56. ^ See id. at 570–71.

  57. ^ See, e.g., Andrew Manuel Crespo, Systemic Facts: Toward Institutional Awareness in Criminal Courts, 129 Harv. L. Rev. 2049, 2115–16 (2016); Allison Orr Larsen, The Trouble with Amicus Facts, 100 Va. L. Rev. 1757, 1804–08 (2014); Frederick Schauer & Virginia J. Wise, Nonlegal Information and the Delegalization of Law, 29 J. Legal Stud. 495 (2000).
