Facebook founder Mark Zuckerberg has created a private “Supreme Court,” or so he says. Since 2021, his company’s Oversight Board has issued verdicts on a smattering of Facebook’s decisions about online speech. Cynics frame the Board as a Potemkin village, but defenders invoke analogies to separation of powers to claim that this new body empowers the public and restrains the company. Some are even calling for a single “platform supreme court” to rule over the entire industry.
Juridical discourse for platforms is powerful, but it can also be deceptive. This Response explores how juridical discourse has legitimized and empowered Facebook’s Board, building on Professor Evelyn Douek’s critique of how a “stylized” picture likens content moderation to judicial review. While Douek focuses on how scholars and lawmakers preach this misleading picture, I expose how platforms drive juridical discourse for their own gain. This deeper understanding of platform complicity is key. Without it, we’ll struggle to comprehend or contest the illusory picture of content moderation favored by platforms. With it, we might better resist platforms’ attempts to thwart regulation that would better serve the public’s interests.
Introduction
Love it or hate it, Facebook’s fledgling Oversight Board is poised to usher in a new era of content moderation for online platforms.1 The Board, launched two years ago to mixed fanfare and disdain, will review a smidgen of the company’s decisions about what may be shared on Facebook and Instagram. Discourse about the Board from both Facebook insiders and outsiders frequently invokes traditional governance concepts of separation of powers and judicial independence.2 Indeed, as Professor Evelyn Douek writes in her recent article, the Board “exemplifies” how standard accounts of content moderation can lead to “judicial review–style solutions — and platforms’ encouragement of this framing.”3 In this analogy, platforms have acted as legislatures by making rules, as executives by enforcing them, and as judiciaries by resolving ensuing disputes. But external oversight via the new Board purportedly devolves part of that adjudicatory power to a new “quasi-judicial” external entity intended to provide process, transparency, and impartiality.4 Facebook’s embrace of judicial analogies is no accident. The company’s Chief Executive Officer Mark Zuckerberg has compared Facebook’s overseers to a “Supreme Court,”5 and the Board’s “Charter” outlines how its “cases” will have “precedential value.”6 People inside and outside Facebook use juridical discourse to claim that the platform is ceding power through external oversight.7
In the Board’s shadow, momentum is also growing to create a cross-platform body to oversee other companies, not just Facebook.8 There’s a surprising discursive harmony between platform insiders and outsiders in advancing such proposals. Scholars and activists propose that a “platform supreme court”9 or “Social Media Council”10 could supervise the entire industry, rule on inter-platform controversies, and establish common procedures and standards. Legislators, meanwhile, are toying with laws to encourage or mandate centralized oversight and governance through external bodies and uniform standards. Facebook’s management has enthusiastically endorsed this trend, with Zuckerberg and his team openly hoping the Board will expand to “include more companies across the industry”11 and provide uniformity as “an industry-wide body.”12
In Content Moderation as Systems Thinking, Douek challenges a “stylized” or “standard” picture of content moderation that props up these recent trends in platform governance.13 Though she addresses the broader landscape of content moderation,14 her article’s insights are essential to understanding the genesis and trajectory of Facebook’s Board.
Douek’s central claim is that a “misleading and incomplete” picture has dominated regulatory and academic discussion of platform governance.15 This stylized picture depicts content moderation as a “rough online analog of offline judicial adjudication of speech rights”16 whereby each platform applies “legislative-style rules drafted by platform policymakers to individual cases and hears appeals from those decisions.”17 As a result of this narrative, much discussion about platform governance obsesses over “paradigm cases involving ‘a platform’s review of user-generated content posted on its site and the corresponding decision to keep it up or take it down.’”18 Moreover, the “range of remedies is limited: the original decision is affirmed or reversed.”19
As Douek shows, this stylized picture misleadingly evokes a “day-in-court ideal” of content moderation by suggesting that “individual utterances get carefully measured against lofty speech rules and principles.”20 The reality is quite different — and more complex — in part because the “scale and speed” of online speech means that content moderation goes far beyond the aggregation of many “individual adjudications.”21 The stylized picture ignores these dynamics. Instead, it “invokes analogies to the practice of offline constitutional law,” such that key questions of content moderation “resemble those raised in First Amendment cases”22 and can be answered by developing “a body of precedent”23 and constructing “governance systems similar to the offline justice system.”24
The stylized picture isn’t just abstract theory. Lawmakers, Douek explains, have channeled this picture, seeking to hold platforms accountable and correct errors by mandating “individual ex post review,”25 an “appeal” to an “independent” arbiter,26 “reasons” for adverse decisions,27 and “ever more due process rights.”28 Some platforms have touted their own voluntary efforts to give users these “rule-of-law” goodies, reaching for the stylized narrative to pat themselves on the back.29 These trends carry prospective risks, as lawmakers seem poised to “overlook[] many of the most important forms of platform decisionmaking” and “lock[] in a form of oversight that is limited in its ambition.”30 In short, Douek concludes, the stylized picture is likely to produce “accountability theater rather than actual accountability.”31
But who is responsible for this juridical discourse around content moderation? And why paint such a misleading and incomplete picture?
This Response sheds light on these questions and builds on Douek’s observations. But while she frames her critique as a story of scholars leading lawmakers astray, I center the role of platforms in cultivating the stylized narrative that dominates popular, academic, and legislative debates.32 To do so, I use the example of Facebook’s Oversight Board — in some ways, the embodiment of platforms’ juridical discourse because the Board’s creators justified its role through a theory of separation of powers and the image of a supreme court.33 Excavating the history behind the Board illuminates how scholars, lawmakers, and platforms shape discourse in this space.34
Like Douek’s tale, mine is a cautionary one. Based partly on fieldwork I conducted as the Board took shape, I reveal how and why key figures at Facebook and the Board exploited legal analogies when portraying this novel institution and justifying its potential expansion to oversee other platforms.35 Though my main focus is on Facebook and its Board, my research offers insights applicable across platforms, especially dominant incumbents that exert the greatest power. Without acknowledging how companies are complicit in painting the stylized picture of content moderation, we’ll struggle to confront the “stickiness” of deceptive narratives that shape what Douek calls the “first wave” of platform-governance discourse.36
Debates about platform governance are evolving in legislative chambers, public discussions, and company boardrooms. At this critical juncture, we should scrutinize how platforms entrench their power and the discourses that influence decisionmakers.37 While external oversight could play some role, we should be skeptical of claims that a body like Facebook’s Board will meaningfully enhance users’ participation in governance or restrain a platform’s discretion.38 The Board won’t accomplish either goal, and expansion across the industry can’t fix its defects.39
Following this account, I briefly sketch an idea of platform federalism to assist a “second wave” of discourse and regulatory efforts. Federal systems can promote liberty, innovation, competition, pluralism, and expertise — values important in any scheme of platform governance. Lessons from traditional federalism could guide us in regulating platform power and fostering healthier digital environments, whether through law, policy, or technology.40
This Response proceeds in two Parts. Part I explores the Board’s past and its possible futures. Part II casts a critical eye over the juridical discourse that has legitimized and empowered the Board and surveys Facebook’s motives for adopting this discourse. The conclusion suggests that values and tools associated with federalism, rather than separation of powers, offer better guidance for platform governance.
* Assistant Professor, University of Georgia School of Law; Affiliate Faculty, University of Georgia Institute for Women’s Studies and Grady College of Journalism and Mass Communications; Affiliate Researcher, Clinic to End Tech Abuse at Cornell University. I’m especially grateful for generative conversations with Zohra Ahmed, Enrique Armijo, Jack Balkin, Hannah Bloch-Wehba, Kiel Brennan-Marquez, Rebecca Crootof, Evelyn Douek, Robert Gorwa, Daniel Halberstam, Erin Miller, Joe Miller, Ngozi Okidegbe, Laura Phillips Sawyer, Natália Pires de Vasconcelos, Robert Post, Logan Sawyer, Moritz Schramm, Gil Seinfeld, Alicia Solow-Niederman, Christian Turner, and Carly Zubrzycki, as well as feedback I received on earlier versions of this piece at the Freedom of Expression Scholars Conference, Junior Tech Law Scholars Workshop, Media Law and Policy Scholars Conference, Stanford Law School, University of Arkansas School of Law, University of Oxford, Wake Forest Law School, and Yale Law School. Ailen Data provided terrific research assistance. My thanks also to the superb Harvard Law Review editors, whose insights and care substantially improved this piece. My work here draws on several years of fieldwork, including reviews of internal documents, analyses of public statements, and interviews with activists, academics, members and staff of Facebook’s Oversight Board, and current and former platform employees. I was fortunate to conduct this research around the world — in Berlin, Buenos Aires, London, Menlo Park, New Haven, Oxford, Rio de Janeiro, Salt Lake City, São Paulo, and Washington, D.C. — thanks to generous support from the Andrew W. Mellon Foundation and the Yale MacMillan Center for International and Area Studies.