Book Review, 132 Harv. L. Rev. 1695

Digitizing the Carceral State



Many life-changing interactions between individuals and state agents in the United States today are determined by a computer-generated score.1 Government agencies at the local, state, and federal levels increasingly make automated decisions based on vast collections of digitized information about individuals and mathematical algorithms that both catalogue their past behavior and assess their risk of engaging in future conduct.2 Big data, predictive analytics, and automated decisionmaking are used in every major type of state system, including law enforcement, national security, public assistance, health care, education, and child welfare.3 The federal government has pumped billions of dollars not only into its own data reservoirs, but also into state and local efforts to digitize government operations.4

Government officials claim their expanding use of big data will improve the accuracy, efficiency, and neutrality of their decisions.5 But big data has been met by a tremendous chorus of alarm. These concerns have centered paradoxically on claims that there is both too little and too much automation. On one hand, some scholars and advocates have criticized the “digital divide” created by the unequal distribution of access to technological innovations.6 In this view, inequality in big data stems from the lack of opportunities socially disadvantaged groups have to share in its benefits. Alternatively, some experts argue that digitized tools can increase equality in access to public resources. For example, adopting online platform technologies that move away from a face-to-face model for handling legal disputes may enhance access to justice by giving more people opportunities to interact with government agencies such as state courts7 and to utilize government assistance such as legal services.8

On the other hand, numerous commentators have pointed to the dangers of state overreliance on big data. These dissenters warn that the mushrooming technological surveillance of citizens threatens to invade individuals’ privacy and erode government accountability at an unprecedented scale.9 According to this view, citizens should demand more regulation to protect their personal data and subject automated decisionmaking to greater public scrutiny.10 The European Union, for example, recently enacted a new data privacy law “designed to give individuals the right to control their own information.”11

While important, these concerns about access to and protection from big data fail to capture a critical aspect of automation’s danger to society. Government digitization is not inherently or universally beneficial or harmful. Rather, the outcomes of big data depend on the particular ideologies, aims, and methods that govern its use. In the United States today, government digitization targets marginalized groups for tracking and containment in order to exclude them from full democratic participation. The key features of the technological transformation of government decisionmaking — big data, automation, and prediction — mark a new form of managing populations that reinforces existing social hierarchies. Without attending to the ways the new state technologies implement an unjust social order, proposed reforms that focus on making them more accurate, visible, or widespread will make oppression operate more efficiently and appear more benign.

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by political scientist Virginia Eubanks significantly advances our understanding of the threat that government use of big data poses to social equality by examining how it functions in public assistance programs. Drawing on in-depth investigations of three systems, she describes how their eligibility determinations, grounded in computerized risk assessments, constitute a modern system for regulating poor and working-class people. Eubanks systematically explores the automated eligibility system the state of Indiana adopted for its welfare services (pp. 39–83), the electronic registry of unhoused people living in Los Angeles’s Skid Row (pp. 84–126), and the statistical model used in Allegheny County, Pennsylvania, adapted from a model developed by researchers in New Zealand, which scores families on 132 variables to predict future cases of child maltreatment (pp. 127–73). Each program illustrates a different aspect of high-tech shadow mechanisms for regulating the poor: they divert poor people from public resources (Indiana); classify and criminalize them (Los Angeles); and punish them based on predictions of their future behavior (Allegheny County) (pp. 179–82). Eubanks’s analysis extends beyond concerns about data privacy and access to data to unveil “the new digital infrastructure of poverty relief” constructed with high-tech monitoring tools (p. 11). She argues that government agencies are using computer technologies to “target, track, and punish” poor people in ways that divert attention from the need for social change and erode democracy for everyone (p. 178). Thus, Automating Inequality expands the literature criticizing how government use of big data reflects existing social inequalities by showing how big data helps agencies structure state programs that create new punitive and antidemocratic modes of social control.

Eubanks’s investigation of digitized public welfare programs refutes the dominant perspectives that cast the growth of big data as either a positive or a negative development. First, Eubanks shows that agencies’ reliance on computer software to generate risk scores does not make decisionmaking more objective (pp. 142, 153). The algorithms the agencies use build biases into decisionmaking processes, further shielding agency determinations from government accountability (pp. 79, 167). Second, Eubanks finds that high-tech tools do not radically improve state agencies’ ability to address poverty (pp. 197–200). Rather, she concludes that technological innovations reconstitute the nineteenth-century poorhouse as a modern-day “digital poorhouse” (p. 12). The contemporary system is undergirded by the same ideologies that blame poor people for their disadvantaged social position but upgraded with the ability to monitor and punish them more efficiently (pp. 12, 17). Today’s digital revolution is but the latest in a history of innovations in poverty management. “[T]he new regime of data analytics is more evolution than revolution,” Eubanks writes. “It is simply an expansion and continuation of moralistic and punitive poverty management strategies that have been with us since the 1820s” (p. 37).

Finally, Eubanks’s analysis reveals that proposals to bridge the “digital divide” by ensuring greater inclusion in technological progress badly miss the mark. “I found that poor and working-class women in my hometown of Troy, New York, were not ‘technology poor,’ as other scholars and policy-makers assumed,” observes Eubanks. “Data-based systems were ubiquitous in their lives . . .” (p. 8). Big data critics who decry a universal invasion of the public’s privacy make a similar mistake by failing to attend to the way state surveillance concentrates on poor people with an intensity unknown to middle-class and wealthy Americans. To tackle the government’s expanding reliance on automated analytics, we must understand it in terms of the particular ways it is structured to reinforce unjust hierarchies of power.

Eubanks’s astute interpretation of big data analytics as poverty management provides critical yet partial insight into modern-day state oppression. Automating Inequality shines a needed spotlight on government assistance programs the public is more likely to view as benevolent than as punitive. The key aspects Eubanks highlights — big data collection, automated decisionmaking, and predictive analytics — also characterize expanding high-tech approaches to criminal justice.12 Taking account of both civil and criminal state surveillance systems reveals a coherent carceral form of governance, one that extends far beyond prisons and responds to problems caused by structural inequalities by punishing the very people suffering most from them (p. 177). In addition, all of the oppressive features Eubanks describes result from racism as much as from disdain for poor people. Computerized risk assessments and determinations regulate people on the basis of race as well as economic status: “Though these new systems have the most destructive and deadly effects in low-income communities of color, they impact poor and working-class people across the color line” (p. 12).13 Her central insight, that digital systems are structured to maintain an unjust class order, applies equally to the systems’ reinforcement of white supremacy.

In this Review, I expand Eubanks’s focus on state welfare programs to include a broader range of systems, with particular attention to the criminal justice system, and I broaden her focus on poverty management to include white supremacy. This more comprehensive analysis illuminates how computerized prediction is fundamental to the ideology, methods, and impact of the modern mode of social control in the United States — the digitized carceral state. My analysis of the role big data, automation, and prediction play in carceral governance proceeds as follows. Part I provides a holistic portrait of the carceral state, which extends beyond prisons to encompass multiple institutions that are supposed to serve people’s needs. This punitive regime includes criminal law enforcement, education, and health care, as well as the poverty-relief and child-protection systems Eubanks describes. By examining the way the prison, foster care, and welfare systems operate together to punish black mothers in particular, I show the importance of attending to racism, along with sexism and classism, in understanding the proliferation of carceral responses to social inequality. Part II explores how automated decisionmaking works to implement carceral governance. Despite claims that computerized prediction is objective, its databases and algorithms build in unequal social structures and ideologies that create new modes of state surveillance and control in marginalized communities. Adding to Eubanks’s focus on poverty management, I argue that racism is central to the carceral state’s reliance on prediction and embedded in predictive policing. Part III concludes by advocating an abolitionist approach to contesting the digitized carceral state. While agreeing with Eubanks’s call to dismantle the digital poorhouse rather than reform it, I argue that acknowledging racism’s crucial role in carceral governance accentuates the need for explicitly antiracist strategies to build a viable movement for change.


* George A. Weiss University Professor of Law & Sociology, Raymond Pace and Sadie Tanner Mossell Alexander Professor of Civil Rights, Professor of Africana Studies, University of Pennsylvania. Penn Law student Rachel Baker Mann provided excellent research assistance for this Review.

Footnotes
  1. See, e.g., Cathy O’Neil, Weapons of Math Destruction 3–14 (2016) (explaining that computer-generated scores inform decisions about teacher evaluation, criminal sentencing, and consumer finance); Filippo A. Raso et al., Berkman Klein Ctr., Artificial Intelligence & Human Rights: Opportunities & Risks (2018), https://ssrn.com/abstract=3259344 [https://perma.cc/V446-C6JS] (discussing how artificial intelligence affects human rights in the contexts of criminal justice, credit scores, health care, human resources, and education).

  2. See generally Bernard E. Harcourt, Against Prediction (2007); Viktor Mayer-Schönberger & Kenneth Cukier, Big Data (2013); O’Neil, supra note 1; Frank Pasquale, The Black Box Society (2015).

  3. See O’Neil, supra note 1, at 134–40 (discussing models that score public school teachers’ performance); Bill Cope & Mary Kalantzis, Big Data Comes to School: Implications for Learning, Assessment, and Research, 2 AERA Open 1 (2016), https://journals.sagepub.com/doi/abs/10.1177/2332858416641907 [https://perma.cc/YA8G-WAW3]; Kevin Miller, Total Surveillance, Big Data, and Predictive Crime Technology: Privacy’s Perfect Storm, 19 J. Tech. L. & Pol’y 105, 107 (2014) (describing “total surveillance, big data analytics, and actuarial trends in policing” as a “triple threat” to privacy); Nicolas Terry, Navigating the Incoherence of Big Data Reform Proposals, 43 J.L. Med. & Ethics (Supplement) 44, 44 (2015) (“The health care industry will be a large customer of big data while predictive analytics already underlie important health care and public health initiatives.”); Damien Van Puyvelde et al., Beyond the Buzzword: Big Data and National Security Decision-Making, 93 Int’l Aff. 1397, 1398 (2017) (“In the US intelligence community big data has become institutionalized . . . .”); Press Release, White House, Office of the Press Sec’y, Fact Sheet: First Ever White House Foster Care & Technology Hackathon (May 26, 2016), https://obamawhitehouse.archives.gov/the-press-office/2016/05/26/fact-sheet-first-ever-white-house-foster-care-technology-hackathon [https://perma.cc/23M2-W2UA] (listing a number of measures the Obama Administration implemented “to increase the use of technology and improve outcomes in the foster care system”).

  4. See Sara Friedman, State Data Officers Offer Feedback on Federal Data Strategy, GCN (July 31, 2018), https://gcn.com/articles/2018/07/31/state-cdo-federal-data-strategy.aspx [https://perma.cc/E8H2-QRFJ].

  5. See Press Release, White House, supra note 3 (announcing a convening hosted by the White House, the U.S. Department of Health and Human Services, and Think of Us to “discuss ways to improve our foster care system through the use of technology”); see also William M. Grove et al., Clinical Versus Mechanical Prediction: A Meta-Analysis, 12 Psychol. Assessment 19, 19 (2000) (finding that mechanical predictions of human health and behavior “were about 10% more accurate than clinical predictions”). But see Miller, supra note 3, at 118–22 (discussing the technological and methodological limitations of predictive systems).

  6. Jan A.G.M. van Dijk, Digital Divide Research, Achievements and Shortcomings, 34 Poetics 221, 221–22 (2006).

  7. J.J. Prescott, Improving Access to Justice in State Courts with Platform Technology, 70 Vand. L. Rev. 1993, 1996–99 (2017).

  8. James E. Cabral et al., Using Technology to Enhance Access to Justice, 26 Harv. J.L. & Tech. 241, 246 (2012).

  9. See sources cited supra note 2.

  10. See O’Neil, supra note 1, at 213–14.

  11. Jacob Weisberg, The Digital Poorhouse, N.Y. Rev. Books (June 7, 2018), https://www.nybooks.com/articles/2018/06/07/algorithms-digital-poorhouse/ [https://perma.cc/HY8X-FSE7] (describing the European Union’s General Data Protection Regulation).

  12. See infra Part II, pp. 1707–21.

  13. See Safiya Noble, Algorithms of Oppression 80–104 (2018) (discussing how allegedly neutral search engines like Google discriminate against African Americans); see also Ruha Benjamin, Race After Technology (forthcoming 2019) (discussing multiple ways in which emerging technologies encode white supremacy).
