In Content Moderation as Systems Thinking,1 Professor Evelyn Douek, as the title suggests, endorses an approach to the people, rules, and processes governing online speech grounded not in anecdote and doctrine but in systems thinking.2 She constructs this concept as a novel understanding of the problems of online-speech governance, one superior to the understanding embedded in what she calls the “standard [scholarly] picture of content moderation.”3 This standard picture of content moderation — which is roughly five years old4 — is “outdated and incomplete,” she argues.5 It is preoccupied with anecdotal, high-profile adjudications in which platforms make the right or wrong decision to take down certain speech, and not focused enough on platforms’ design choices and invisible automated removal of content. It draws too heavily from First Amendment contexts, leading platforms to assess content moderation controversies as if they were individual judicial cases.6
Douek calls her approach “both ambitious and modest.”7 The modest part calls for structural and procedural regulatory reforms that center content moderation as “systems thinking.”8 The notion of systems thinking conveys a generalized approach of framing complexity as a whole composed of dynamic relationships rather than the sum of segmented parts.9 The ambitious part is dismantling the standard picture of content moderation scholarship and challenging the resultant “accountability theater” created by platforms and lawmakers alike.10 In Douek’s view, it is this “stylized picture of content moderation”11 that is to blame for regulators assuming “that the primary way they can make social media platforms more publicly accountable is by requiring them to grant users ever more individual procedural rights.”12
There is much to like about understanding content moderation as a complex, dynamic, and ever-evolving system. And there is rich and detailed scholarship on content moderation in both sociotechnical theory and the law, scholarship particularly useful for an article titled Content Moderation as Systems Thinking that calls for the regulation of technology. Indeed, most of the academic work on content moderation is done by sociotechnical theory scholars who study content moderation and platform governance using systems-thinking and systems-theory frameworks.13 Sociotechnical systems theory posits that an organization is best understood and improved if all parts of the system — people, procedures, norms, culture, technology, infrastructure, and outcomes — are understood as relational and interdependent parts of a complex system.14 In analyzing private law under this theoretical framework, Professor Henry Smith describes systems as “a collection of elements and — crucially — the connections between and among them; complex systems are ones in which the properties of the system as a whole are difficult to infer from the properties of the parts.”15 Examples of systems abound at all levels of nature and society: from cognition to social networks or economies, or, as Smith proposes, systems of law.16
Systems thinking, then, according to those who study it, is one step removed: “literally, a system of thinking about systems.”17 This definition is, of course, tautological; even the authors of the only article Douek cites on the topic seem confused.18 But the takeaway of “systems thinking” is much the same as that described by sociotechnical theory and by Smith: an “understanding of dynamic behavior, systems structure as a cause of that behavior, and the idea of seeing systems as wholes rather than parts” — wholes that create “emergent properties” whose origins cannot be traced to any one part or interplay of the system.19 It is both the ocean and the wave, the forest and the trees, as well as all of the interactions and the emergent properties resultant.20
I would fully support and could barely disagree with such a holistic conception, especially in the context of global online speech controlled and governed by private platforms. But evaluating systems thinking as a concept is difficult because Douek never defines this new approach or engages with any of the relevant scholarship or literature, save for a single tautological definition in a footnote.21 Instead, Content Moderation as Systems Thinking attempts to distinguish itself from the “standard scholarly picture” in which content moderation is “a privatized hierarchical bureaucracy that applies legislative-style rules drafted by platform policymakers to individual cases and hears appeals from those decisions.”22 Unfortunately, however, the standard-picture model of content moderation scholarship outlined by Douek simply does not exist. None of the works that Douek cites for this model ever describes content moderation in such reductionist terms. Rather, for over two decades, online speech scholars, myself included, have consistently described private content moderation in the very same language as Douek offers: as “systems of governance”23 leveraging automated24 and “human”25 “processes”26 created by a “constellation of actors”27 who design dynamically and react “iteratively”28 to “internal and external influence,” in which freedom of speech and the First Amendment are only nominally the issues.29
This misrepresentation has consequences. Because Douek does not fully engage with the depth of the scholarship that has already explored the issues she discusses, the article misdiagnoses why policymakers and popular commentators have failed to take account of the full picture of content moderation — and who is to blame. It is not “regulatory lag” driven by a misleading “standard picture” from scholars.30 Nor are First Amendment analogies or a focus on procedural due process solutions to blame for the woes or absence of regulation. By framing the debate as a binary choice between old and new, Douek makes the future of online speech seem like an either-or scenario in which the “first wave” does it wrong, while a new “second wave” would get it right.31 This framing is not just evidentiarily incorrect; it exposes a serious logical flaw in the argument for a systems-theory approach. Even if the scholarship had overemphasized hierarchy and individual decisions, a systems-thinking approach would suggest that those parts would still be essential components of the very system of content moderation that Douek attempts to describe. The elements of content moderation scholarship she eschews would need to be as accurate and true as trees if one is to understand the system of the forest and the emergent properties of their interaction.
This either-or approach also threatens to undo the hard-won improvements in transparency and procedural protections that scholars and advocates have fought to put in place to protect user rights and global free expression. Rather than acknowledging the ways in which these existing accountability approaches complement a “systemic” solution to content moderation or the plethora of scholarly debate over government control of speech, Douek proposes a set of reforms that are in many cases rehashed from existing literature. On their own, these reforms are indeed modest. But the proposed means of enforcing them is not modest at all: government control through a new agency to oversee the most invisible parts of content moderation “with a view to creating more specific standards and mandates” for online speech.32
In this Response, I first detail what Content Moderation as Systems Thinking gets right about content moderation, as well as what its characterization of existing scholarship gets wrong. I then show why the fact that the article oversells its reframing of this area of scholarship matters not just as a matter of accuracy, but also because it undermines efforts to achieve the real-world accountability that Douek — and so many others — are ultimately after.
The challenges of governing online speech are indeed “systemic.” But proposing viable solutions requires more than merely describing the challenges as such, as evidenced by the fact that so many scholars already have. It requires recursive and iterative examination of one’s priors, engagement with empirical realities and scholarly theories, and exploration of markets and governments besides one’s own. In short, fixing the problems of online speech requires the very type of systems thinking that Douek names but does not employ.
I. The “Standard Picture” Straw Man
Douek starts her article by presenting what she believes to be the problem and its cause: “This Article’s central claim is that the standard picture’s focus on the treatment of individual posts is misguided and that the toolset for content moderation reform needs to be expanded beyond individual error correction.”33
There are roughly five implicit and explicit arguments that Douek makes to support this central claim:
First, a “standard picture of content moderation” exists and is primarily a result of academic scholarship.34
Second, in this standard scholarly picture, “platforms are ‘The New Governors,’ constructing governance systems similar to the offline justice system in which ‘[c]ontent moderators act in a capacity very similar to that of judges.’”35 Content moderation is a “privatized hierarchical bureaucracy that applies legislative-style rules drafted by platform policymakers to individual cases and hears appeals from those decisions.”36
Third, the scholarly standard picture is inaccurate because it has “blind spots” that it fails to acknowledge: “the wide diversity of institutions involved in content moderation outside the hierarchical bureaucracy that is the content moderation appeals system, and the wide variety of ex ante tradeoffs that content moderation institutional designers have to engage with.”37
Fourth, the scholarly standard picture is also inaccurate because it “is pervaded by First Amendment analogies.”38 This mistaken assumption is exemplified by how “content moderation is almost singularly concerned with the binary decision to take down or leave up individual pieces of content”39 — the “high-profile content moderation controversies” like Nancy Pelosi looking drunk, Donald Trump being banned from Twitter, users denying the Holocaust, or the like.40
Finally, this misleading and incomplete scholarly standard picture is what “leads regulators to assume that the primary way they can make social media platforms more publicly accountable is by requiring them to grant users ever more individual procedural rights.”41
This Part takes these five issues in turn. Section A addresses the first of these claims, which is a question of construction. The “standard picture of content moderation” is a term and concept created by Douek, who defines it in a footnote reference to just eight academic works.42 What makes these eight articles and books exemplary of the standard picture is not clear; the footnote omits huge amounts of relevant, influential scholarship and never provides reasoning or methodology to explain its construction. Section B addresses the second, third, and fourth claims, which are substantive. Douek quotes narrowly from the literature she cites for the standard picture and undercredits the works as a result. Section C addresses the fifth part of Douek’s claim, which is causal. Douek does not adequately support the claim that the scholarly standard picture of content moderation is to blame for lawmakers’ flawed attempts at regulation. Indeed, as Douek seems to recognize, blame for lawmakers’ preoccupation with individual “high-profile” content moderation controversies is better placed on the media or on lawmakers themselves.43
A. Constructing the Standard Picture
In the beginning of Content Moderation as Systems Thinking, Douek introduces the “standard picture” of content moderation scholarship.44 Though she acknowledges it is “by no means [a] comprehensive” list, her citation references only eight scholarly works.45
Why reference these eight pieces — and not any of the hundreds of other books and articles published in the last decade on content moderation? It is not clear. The years of publication of the books and articles Douek cites range from 2012 to 2021, and many other articles and books on content moderation were published in the same window — as well as in the decade before. The cited pieces vary across disciplines, ranging from political science and communications studies books to law review and social science articles. They also vary in measures of apparent influence (partial as these measures may be). Several have been widely cited and downloaded, including Rebecca MacKinnon’s Consent of the Networked,46 Professor Tarleton Gillespie’s Custodians of the Internet,47 Professor David Kaye’s Speech Police,48 and my own The New Governors.49 Others have only a few peer citations and fewer than a hundred downloads.50 Many of the cited pieces were written by scholars early in their careers or working at nonacademic institutions.51 Yet many pieces that have been more frequently cited52 or were written by high-profile scholars in the field53 are not referenced as comprising the standard picture.54
Indeed, the mystery of the scholarship that is left out from the standard picture is perhaps even more perplexing than what is left in. Douek cites to many of these additional scholarly sources in the second half of her article, but does so in support of her thesis, rather than as providing examples of the antagonist standard picture.55 Much of this scholarship — the standard picture and its omissions — hardly differs in its descriptive or normative conclusions around content moderation.
There might be good reasons why Douek thinks these eight pieces of scholarship represent a “standard” picture while the scholarship she cites later does not. But absent any methodology or theory to explain it, the scholarship included and omitted from the standard picture is at best an arbitrary grouping.
B. Characterizing the Standard Picture
Douek’s foundational argument is that the standard picture of content moderation scholarship has “blind spots” and “mistaken assumptions.”56 It is overly focused on “paradigm cases.”57 It fails to acknowledge that “[c]ontent moderation bureaucracies are a ‘they’ not an ‘it’ . . . made up of a sprawling array of actors and institutions, each of which has different functions and goals.”58 It neglects the “wide diversity of institutions . . . outside the hierarchical bureaucracy” of platform content moderation.59 It ignores automatic “ex ante tradeoffs.”60 It “assumes the necessity of a model of speech governance and the judicial role adapted from the First Amendment context” and does not adequately grapple with the degree to which “[e]x [p]ost [r]eview [c]an [b]e [s]ystemic.”61
But all of these things purportedly missing from the “standard picture” are in fact not missing at all. Indeed, the scholarly sources cited in reference to the standard picture — and many others that go unmentioned — address these supposedly absent points, often multiple times and often in the very paragraphs and pages to which Douek cites. Moreover, many of these sources already describe content moderation in the very terms of systems theory.
As one example, take Douek’s characterization of MacKinnon’s 2012 book Consent of the Networked, a foundational 294-page study in internet policy and geopolitical power. Douek implies that MacKinnon misses the systems part of content moderation and thus exemplifies the standard picture, summarizing MacKinnon’s book in a footnote parenthetical as “describing the platform staff that develop policy and review procedures and ‘play the roles of lawmakers, judge, jury, and police all at the same time.’”62 MacKinnon does in fact describe platform policy teams in this way, but here is the language surrounding the quotation that Douek uses (emphasized for clarity):
Thus a big part of the team’s job is to develop processes to identify abusive content and remove it, while not removing other postings or pages that may be edgy and upsetting to some but are not actually against the terms of service. They have developed a system that combines automated software to identify image patterns, keywords, and communication patterns that tend to accompany abusive speech, along with review procedures by flesh-and-blood human staff. Willner[, a Facebook policy lead,] focuses on defining policy for the site: guidelines about exactly what people should or shouldn’t be allowed to do under what circumstances, and procedures for how violations are handled. These friendly and intelligent, young, blue jeans-wearing Californians play the roles of lawmakers, judge, jury, and police all at the same time. They operate a kind of private sovereignty in cyberspace.63
In this full excerpt, and in so much of her groundbreaking book, MacKinnon describes content moderation in the very words that Douek claims are absent from the standard picture.
Of course, any given work of scholarship argues and demonstrates much more than the single clause or quote to which it is reduced. But Content Moderation as Systems Thinking goes beyond reduction. This section takes the three substantive assertions in Douek’s thesis in turn, comparing them with the text of the standard-picture scholarship and adding relevant citations from omitted scholarship.
1. The Standard Picture Sees Content Moderation Like a Real-World Government with Individual Adjudications, Bureaucracy, and Legislative Sessions. — In the standard scholarly picture of Content Moderation as Systems Thinking, “platforms are ‘The New Governors,’ constructing governance systems similar to the offline justice system in which ‘[c]ontent moderators act in a capacity very similar to that of judges.’”64 The standard picture focuses overly on “individual posts” and assumes mistakenly that content moderation is only a “privatized hierarchical bureaucracy that applies legislative-style rules drafted by platform policymakers to individual cases and hears appeals from those decisions.”65
Douek’s quotation of “The New Governors” refers to my work of the same name, which I published in this Review in 2018.66 In the early days of content moderation, many of the empirical intricacies of how and why private companies moderated speech on their platforms were a mystery.67 Over three years, I interviewed former and current employees at large speech platforms, talked to members of civil society, and explored the existing literature.68 A few things became clear: First, content moderation was nothing like the notice-and-takedown regime mandated by copyright law; instead, a much more complicated system was in place.69 Second, though the substantive issues were distinct, the processes and systems that speech platforms were employing at scale were highly analogous to what my colleague Professor Rory Van Loo had characterized in the consumer law context in his article The Corporation as Courthouse.70
But Van Loo’s comparison had limits, in part because content moderation wasn’t about playing corporate middleman in buyer-seller contract disputes in the shadow of mandatory arbitration agreements. While particular commercial contexts certainly overlapped with speech platforms, the issues involved in content moderation had arguably higher stakes. As I described at the time, governing speech implicated human democratic participation, liberty, free expression, access to information, and community — but it also implicated child sexual abuse material, harassment, terrorism, fraud, hate speech, and misinformation.71 Designing a system to deal with such trade-offs at a global scale in turn required infrastructure, processes, rules, people, and systems with complex motivations and influences.72 I was far from the first or only person to see it this way,73 but my article added qualitative description and a theory of new governance at a moment where private control of public speech and its import to democracy became suddenly and massively visible.74
Over the course of seventy-three pages, The New Governors describes the multiple, and at times conflicting, motivations of private platforms to actively govern users’ speech. It focuses on three of the largest user-generated content and speech platforms, all American companies, and describes the unique conditions of U.S. law that allowed for this self-regulation.75 It details the dynamic system of ex ante automatic and ex post manual content moderation built over more than a decade.76 It describes how the people, rules, and processes of that system are constantly changing in response to pluralistic systems of external influence from government, media, civil society, and individual users.77 Its title and framing draw from Professor Jody Freeman’s work, among others, in the “New Governance” movement that “proposes a conception of governance as a set of negotiated relationships between public and private actors.”78 It explicitly eschews First Amendment analogies and urges regulators to look at content moderation as a complex and iterative “system of governance.”79 None of this is included in Douek’s summary of my article in a footnote parenthetical: “[D]escribing the three-tier structure of content moderation at Facebook.”80
Other summations of the standard picture are also misleadingly reduced. Over the course of 214 pages, Tarleton Gillespie’s 2018 book Custodians of the Internet describes social media platforms’ content moderation as “functioning technical and institutional systems — sometimes fading into the background, sometimes becoming a vexing point of contention between users and platform.”81 But Douek samples only one line, from the one chapter in which Gillespie describes just one part of content moderation, as emblematic of a dominant standard picture overly focused on individual ex post manual content decisions.82
Content Moderation as Systems Thinking repeatedly mischaracterizes the scholarship’s empirical observations as normative arguments. It was not Professor David Kaye, for example, who characterized Facebook’s policy process as a “mini legislative session” but a Facebook employee.83 Douek also uses a quote from Professor Kyle Langvardt’s article Can the First Amendment Scale? as evidence of the standard picture viewpoint:
Legal culture’s reflexive answer to these kinds of problems . . . is to require “some kind of a hearing.” The “hearing” may include confrontation rights, protective burdens of proof and production, opportunities for appeal, and so on . . . . Many proposals to regulate or reform platform content moderation endorse this basic strategy, usually in combination with new transparency requirements.84
Langvardt himself is not advocating for this approach; he is merely noting that such an approach exists. Indeed, in the very same passage he expressly acknowledges that “those [ex post] tools also have their limits,”85 largely because individual challenges to removal decisions will not “translate to anything systemic.”86 Langvardt’s point is exactly the opposite of that for which he is cited; in fact, he makes the very same argument that Douek claims as part of her novel thesis.
2. The Standard Picture Misses the Trade-Offs, Outside Influence, and Automatic Side of Content Moderation. — Content Moderation as Systems Thinking argues that the standard-picture scholarship ignores “the wide variety of ex ante trade-offs that content moderation institutional designers have to engage with.”87 It does not understand that content moderation bureaucracies “are a ‘they’ not an ‘it,’” composed of a “wide diversity of institutions involved in content moderation outside the hierarchical bureaucracy that is the content moderation appeals system.”88
Automatic and ex ante content moderation have always been part of the scholarly content moderation conversation. New Governors reorganized and restructured a taxonomy created by Professor James Grimmelmann in his formative work The Virtues of Moderation.89 Grimmelmann’s piece was published in 2015 in the Yale Journal of Law and Technology.90 Douek cites to it in her second footnote,91 but somehow it is not part of the standard picture92 despite the fact that Grimmelmann describes moderation in the following terms: “[M]oderation can be carried out manually, by human moderators making individualized decisions in specific cases, or automatically, by algorithms making uniform decisions in every case matching a specified pattern.”93
Grimmelmann describes much of this automatic moderation as “ex ante” because it happens before publication.94 In updating Grimmelmann’s taxonomy in New Governors, I added an important description: “The vast majority” of content moderation, I wrote in 2018, “is an automatic process run largely through algorithmic screening without the active use of human decisionmaking.”95
This was an important distinction to make because at the time, and still today, people were largely unaware of two huge facts about their online lives: one, that content moderation was happening at all; and two, that if it was happening, humans were involved. As to the former, the fact that ex ante automatic content moderation stopped content from ever appearing on another user’s Facebook feed had different implications for speech (think prior restraint) and for the system of speech governance than ex post reactive manual content moderation had.96 The adjective “reactive” in this description spoke to the platform reacting to users flagging problematic “ex post” (published) content, while “manual” referred to the human content moderator who would then look at the flagged content and decide whether to remove it from the site.97 In her book Behind the Screen: Content Moderation in the Shadows of Social Media, Professor Sarah T. Roberts describes the moment when she first discovered from a 2010 New York Times article that humans were doing this review:
I forwarded the article to a number of friends, colleagues and professors, all longtime internet users like me, and digital media and internet scholars themselves. “Have you heard of this job?” I asked. “Do you know anything about this kind of work?” None of them had . . . . They, too, were transfixed.98
Even eight years after the New York Times article and Roberts’s revelation, there was relatively little awareness about how content moderation worked or that there were humans in the loop. Many individuals simply thought that “computers” adjudicated content,99 somehow able to grok, for example, the invisible element of user intent that makes a picture of a topless woman posted as protest different from a picture of a topless woman posted as pornography. New Governors’ description of ex ante automatic content moderation focused on the proliferation of the use of “hashing” to check a known universe of banned content against something that is uploaded, rather than the use of photo recognition or natural language bans.100 But it also described “ex post reactive manual” content moderation — humans posting, humans flagging those posts, and humans reviewing for violations — and how that system iterated on itself over time but also sent signals back to the ex ante system so that the automatic process regularly changed.101
Whether automatic or human, content moderation necessarily required “trade-offs” — between how proactive a platform was in removing content, how it revised the “standards to rules” it enforced,102 and how much it relied on “automatic ex ante”103 versus “ex post reactive manual”104 content moderation done by individuals.105 Perhaps the best and most recent description of these trade-offs in ex ante content moderation comes in Professor Hannah Bloch-Wehba’s article Automation in Moderation, published in the Cornell International Law Journal in 2020.106 It is worth noting that in the few years since publication, the paper has been widely read,107 yet it is not included as part of the “standard picture” of content moderation. Bloch-Wehba surveys the scholarly history of automatic content moderation and describes the current state of technology. Her normative takeaway is powerful and clear: “[N]ew automation techniques exacerbate existing risks to free speech and user privacy, and create new sources of information . . . for surveillance, raising concerns about free association, religious freedom, and racial profiling . . . [and] worsens transparency and accountability deficits.”108
Grimmelmann’s, Roberts’s, Bloch-Wehba’s, and my own work are not alone in describing content moderation not just as individual posts but also as a complex mix of both ex ante and ex post content adjudication involving difficult tradeoffs. For example, Gillespie’s Custodians of the Internet describes the processes of both ex post and ex ante moderation throughout the work.109 So too does Kaye, at the outset of Speech Police:
The enormous volume of uploaded content requires that the company rely on two tools to surface potentially problematic or illegal content: humans who comb through and report content, and algorithmic automation, or Artificial Intelligence. Ideally, flagged content would undergo human evaluation before it is taken down, whether it results from human or algorithmic flagging. But that’s not always the case. Both human and algorithmic flagging can lead to mistaken deletions or blockings, or ones that activists or governments may simply disagree with.110
In 2019, a terrorist livestreamed his shooting attack at a mosque in Christchurch, New Zealand, on Facebook.111 Though the initial video post was removed relatively quickly from the platform, it had been captured by trolls on notorious sites like 8chan.112 Despite Facebook having added a hash of the video to its automatic database so it couldn’t be reposted, for days after the tragedy trolls uploaded copies of the livestream manipulated to get past automated ex ante detection and reappear on the platform.113 In a piece for the New Yorker following the attack, I described the global teams of individuals who worked around the clock to chase and take down the video — and ultimately devise a new system of hashing and automated behavioral identification that couldn’t be manipulated by such trolls.114
Though not included in Content Moderation as Systems Thinking, there is a plethora of scholarship that specifically discusses how the rules applied by these automatic or manual processes are created, changed, or eliminated through a global system of engineers, policymakers, activists, platform managers, and many others. I documented this “pluralistic system of influence”115 from government, media, civil society, and individuals, but especially the influence of government and platform cooperation, in section III.C of New Governors.116 This was also a central thesis of Professors Jack Goldsmith and Tim Wu’s early book, Who Controls the Internet?, which predicted, accurately, how governments would come to exercise geopolitical power through control and lobbying of internet stakeholders.117 MacKinnon’s Consent of the Networked is almost entirely devoted to the development of this online balance of power between governments and platforms — as the book’s tacit reference to John Locke’s formulation canonized in the United States Declaration of Independence suggests.118 MacKinnon also spends much of her time on the development of multistakeholder solutions to these problems, facilitated by international law.119 This is a theme reexamined by Kaye’s Speech Police, which updates MacKinnon’s formulations with modern examples from the international human rights perspective.120
Delegated decisionmaking is also discussed in the standard-picture scholarship. Informal relationships between third-party experts and platforms are described in New Governors,121 and I documented the setup and influence of Facebook’s Oversight Board in 2020 in the Yale Law Journal.122 Most notably, in her recent essay for the Harvard Law Review Forum, Facebook’s Faces, Chinmayi Arun adeptly discusses the complicated and dynamic relationship between individuals inside the platforms and individuals outside.123 “Facebook engages with states and publics through multiple parallel regulatory conversations, further complicated by the fact that Facebook itself is not a monolith,” Arun writes.124 “Facebook has many faces — different teams working towards different goals, and engaging with different ministries, institutions, scholars, and civil society organizations. It is also internally complicated, with staff whose sympathies and powers vary and can be at odds with each other. Content moderation takes place within this ecosystem.”125
3. The Standard Picture Is Preoccupied with First Amendment Analogy. — Finally, Content Moderation as Systems Thinking argues that the standard picture “assumes the necessity of a model of speech governance and the judicial role adapted from the First Amendment context”126 and is “pervaded by First Amendment analogies.”127 While content moderation scholarship certainly argues that First Amendment principles have had an implicit and normative role in shaping content moderation systems, it is inaccurate to describe that scholarship as dominated by First Amendment analogies.
Douek’s primary citation for this claim is to New Governors, but the relevant text from the pages she cites states exactly the opposite of her assertion. From New Governors:
Though they might not have “directly imported First Amendment doctrine,” the normative background in free speech had a direct impact on how they structured their policies. Wong, Hoffman, and Willner all described being acutely aware of their predisposition to American democratic culture, which put a large emphasis on free speech and American cultural norms. Simultaneously, there were complicated implications in trying to implement those American democratic cultural norms within a global company.128
This is not the only point at which I explicitly eschew First Amendment analogy as the standard for understanding private content moderation: I do so five times throughout the article, including in the abstract and introduction. The following excerpts are all from New Governors:
- “This Article argues that to best understand online speech, we must abandon traditional doctrinal and regulatory analogies and understand these private content platforms as systems of governance.”129
- “[T]his Article argues that analogy purely under First Amendment doctrine should be largely abandoned.”130
- “The law reasons by analogy, yet none of these analogies to private moderation of the public right of speech seem to precisely meet the descriptive nature of what online platforms are, or the normative results of what we want them to be.”131
- “Thinking of online platforms from within the categories already established in First Amendment jurisprudence — as company towns, broadcasters, or editors — misses much of what is actually happening in these private spaces.”132
Nor am I alone in repeatedly and categorically denying the applicability of First Amendment analogies to online speech governance, though I am the only one cited by Douek. In Speech Police, Kaye disavows the views of “American legislators and policymakers [who] . . . are constitutionally myopic in their rigid understanding and politicization of First Amendment values.”133 Outside the standard picture, Professor Jack Balkin writes in Free Speech Is a Triangle, published in the Columbia Law Review in 2019, that “the best alternative to this autocracy is not the imposition of First Amendment doctrines by analogy to the public forum or the company town.”134
C. Blaming the Standard Picture
The final part of Content Moderation as Systems Thinking’s central claim is that the standard scholarly picture “leads regulators to assume that the primary way they can make social media platforms more publicly accountable is by requiring them to grant users ever more individual procedural rights.”135
Even if a cohesive standard picture of content moderation scholarship exists, Content Moderation as Systems Thinking never offers any evidence that it is the scholarship that has led to lawmakers’ incomplete understanding of online speech or flawed regulatory proposals. Indeed, the very words describing the standard picture’s focus on “paradigm cases”136 as “high-profile content moderation controversies”137 that “dominate media headlines”138 suggest that such emphasis is due to the media’s construction, not scholarship’s.139 It seems nonsensical to suggest that a small cohort of interdisciplinary academics is to blame for lawmakers’ obsession with individual speech cases, rather than the press or lawmakers themselves.140 Arguing that the media has presented an oversimplified version of online speech and content moderation would have been a far more accurate, albeit narrower, claim. Douek perhaps realizes this: despite leveling the blame solely at scholars, only roughly half of the citations in her footnote describing the standard picture are to academic sources, and the remainder are reports or media coverage.141
II. Why It Matters
Despite my criticism of Content Moderation as Systems Thinking, I do not at all disagree with the overall theory it proposes. Nor do I take issue with the idea that content moderation should be seen systemically, focused on “wholes and interrelationships rather than parts.”142 I agree that content moderation platforms engage in much more than post-by-post decisionmaking and are instead complex and dynamic systems. And I agree that offline models of adjudication and the First Amendment provide a poor framework for understanding how online speech platforms work. I agree that content moderation — truly, all line drawing around speech — is full of tradeoffs and that perfection is impossible. Indeed, it would be hypocritical of me not to agree, because I and so many of the people I admire in this field have said so much of this before. But ultimately, the main reason that I contest the construction and characterization of a “standard picture” of content moderation is that it risks serving as a misleading premise to a shortsighted set of reforms.
A. Government-Mandated Transparency and Process Cannot Solve the Problem of Transparency and Process Theater
The central harm of the standard picture and its influence, Douek claims, is that content moderation reform has overly focused on transparency reports, individual procedural rights, and individual content appeals.143 Though the blame is misplaced, the critique is valid. Individual content decisions are imperfect mechanisms for signaling representative change back to the system, and at scale they are often inadequate remedies for users, coming too late and offering too little. The result is not meaningful change and accountability, she argues, but simply the performance of accountability — “process theater”144 or, in the case of transparency reports, “transparency theater.”145
An example of the worst of these performances is the Facebook Oversight Board (FOB), the independent adjudicator set up by Facebook in 2020 to hear content appeals and issue decisions. “The Board’s procedural expectations of Facebook epitomize the individual rights paradigm — a focus on providing notice, reasons, and an individual appeal to a human in every case,” Douek writes.146 This approach, she claims, is full of “futility and failures”147 that miss aggregate harms, broken AI, and operational mistakes.148
This might well be true, but it is hard to understand how emphasizing individual rights has caused this to be the case, or how dismantling such processes will solve it. Moreover, it would seem that a systemic solution like the one Douek proposes would take both into account, allowing the Oversight Board to be a dynamic solution for content moderation reform, not a panacea. I have said as much in my prior writing — and, somewhat confusingly, so has Douek. The FOB “will not solve all our problems with social media,” she acknowledged in 2020, listing the problems the Board cannot address, such as AI bias and independent researcher access.149 But despite these shortcomings, she argues, the Board has an important role to play:
Currently, some of the most consequential decisions about the way information flows through society occur behind closed doors with minimal public justification and in a way that is influenced by business imperatives. This is at odds with how essentially every jurisdiction with free speech traditionally thinks about it, which is that any restrictions on speech should be specified clearly in advance, applied consistently, and subject to careful scrutiny. This is the check that the FOB can bring to Facebook’s content moderation ecosystem.150
It is hard to square this acknowledgement — and countless other writings by Douek that praise the Board for bringing some amount of transparency and process to content moderation — with the final section of Content Moderation as Systems Thinking. Nor is it clear how the reforms that the article proposes instead would escape these problems of “performance” or “theater.” The modest proposals in the final Part of the article include structural and procedural requirements like annual content moderation plans and compliance reports, quality assurance, and audits to be performed by government agencies.151 I will return to the solution of government agencies as enforcement in a moment, but as a foundational matter, simply adding more transparency is not a solution to performative transparency or theater. Indeed, the opposite has been proven true. This is the “transparency paradox,” a term Professor Ethan Bernstein coined and a phenomenon he empirically demonstrated, in which increasing the size and salience of an audience paradoxically reduces sincerity and heightens performance. “Analogously, increasing observability in a factory may in fact reduce transparency, which is displaced by illusory transparency and a myth of learning and control, by triggering increasingly hard-to-detect hiding behavior,” writes Bernstein.152 This does not mean that there is no value in transparency, or that such attempts should be abandoned, but it does mean that many of Douek’s suggested reforms might indeed only serve to heighten the very process and transparency “theater” she critiques, rather than resolve them.
B. Speech Is Legally and Phenomenologically Special — And It Should Be
Critiquing, even condemning, past reform efforts would not be problematic if they were not presented in a false dichotomy with Douek’s own solutions for reform and if some of her reforms were not so potentially dangerous to democracy. Almost all the proposals in Part IV of Content Moderation as Systems Thinking have been previously proposed and are as modest as she suggests. The unique element among those reforms is that they be enforced by a government administrative agency.
To square this prescription of an administrative agency for content moderation with the First Amendment, Douek argues that “[s]peech [i]s [n]ot [s]o [s]pecial.”153 This framing — “must speech be special?” — is borrowed from Professor Frederick Schauer’s work of that name, but critically it neglects to mention that Schauer’s titular question is not rhetorical. (Indeed, after a formal logic analysis, he concludes the opposite of what Douek suggests: yes, speech must and should be special.154) Instead, her argument centers on the idea that speech need not be special because it also can be commercial in nature. “[M]any canonical content moderation controversies are about commercial interests,”155 she writes, referencing controversies around Nazi memorabilia sold on Yahoo!, or eBay delisting Dr. Seuss books,156 “but they get framed as ‘speech’ cases, making the ‘censored’ party’s grievance seem weightier. In a sense, every content moderation decision is commercial: private platforms are profit-driven entities that moderate because it is in their business interests. But . . . speech!”157
But that commercial interests also exist alongside speech interests in content moderation hardly seems wholly damning of the unique place for speech in the law and democracy, generally. Books and newspapers are sold and published by profit-driven entities, and not only are they considered speech, but the institutions that produce them also have their own First Amendment protections.158 Nor is the framing of these controversies as speech cases necessarily an indictment of speech as a special category, so much as skilled lawyering. Indeed, Douek’s complaint seems to be more with uneducated journalists and Americans than with the law itself: “On the current state of the law, there is not even a colorable First Amendment claim against platforms for restricting users’ speech. Yet cries of ‘First Amendment!’ or ‘Free Speech!’ abound when they do.”159 That many are not aware that the First Amendment applies only to government restriction of speech, and not to any restriction of speech, seems more a failure of civic education than an indictment of a near-universally agreed-upon human right.
It is not entirely clear why Douek bothers arguing that speech is not special until you understand that this is, in some sense, a paper about administrative law that is arguing for administrative law solutions. Any chance at such heavy-handed government regulatory reform over private speech rights in content moderation necessitates arguing that perhaps speech is not so special and the First Amendment should not prevent such a reform. Without arguing that speech is not special, the administrative agency solution of Part IV of Content Moderation as Systems Thinking is even less feasible than it otherwise would be.
Finally, it is paradoxical, after an entire paper lamenting impossible tradeoffs and arguing that perfection in content moderation is unattainable, that Douek argues for administrative agency oversight not just to police the reforms she proposes, but to assure content moderation “quality.”160 Douek admits that the idea of quality is a “deeply contested concept” and briefly lists several diverse factors that could possibly measure quality in content moderation.161 Her article ends with the assertion that “[t]he only thing worse than trying to define ‘quality’ is not trying.”162
I am not so sure. It would seem like one of the worst things you could do for democracy is to give a government agency blank-check authority to enforce an undefined standard like “quality” over its citizens’ speech. This is all the truer when such government control would include “creating more specific standards and mandates in the future” for ex ante content moderation — the most invisible, and therefore potentially censorial, area of speech governance.163
Accuracy in representing scholarship and thinking through the consequences of potential reform matters, because solving the problem of online content moderation is not an academic question and it is not a game. It is a very real problem with real-world consequences across almost every dimension of global society. Changes in U.S. law or policy around online speech will have dramatic effects far outside the United States’s borders. Speech, particularly speech published in this law review, can have great impact and significance on the world.
It is not enough to generally describe some things as “systemic,” and others as not, to take a systems-thinking approach. Systems thinking is a dynamic and powerful tool of description for understanding complex phenomena. It is no wonder that it is perhaps best known for its use in understanding biological ecosystems. It is both the ocean and the wave, the fish below and the boat above.164
To best understand content moderation as systems thinking, one would have to accede that content moderation contains individual decisions, automations, governance, governments, external influence, internal politics, constitutions, norms, legality, human judgment and biases, administration, bureaucracy, multistep processes, long legislative-like meetings, people, corporate courthouses, actual courthouses, stakeholders, economies, the media, and iterative dynamic changes. To understand content moderation as systems thinking, one would have to rely on the long history of scholarship that lays each of these elements and systems out. And in doing so, one would have to acknowledge the vastness of the ocean and the insignificance of a single wave.
* Associate Professor of Law, St. John’s Law School.