The Internet has changed the way we speak — and also the way we disrupt speech. In the ever-protean First Amendment jurisprudence of free speech, novel forms of communication on the web should make us proceed with caution as we pour new wine into old skins. Recently, in Garnier v. O’Connor-Ratcliff,1 the Ninth Circuit held that the First Amendment restricts the ability of public officials to block private individuals on social media. While the court reached the right outcome, its analysis of the relevant government interest hinged too heavily on analogizing to situations arising in physical fora. Such reasoning leaves little room for acknowledging unprecedented speech disruptions enabled by social media. Shifting attention toward the “felicity conditions” of speech — conditions that must be satisfied in order for speech to achieve its intended effect — may facilitate a more precise understanding of speech disruption.
Michelle O’Connor-Ratcliff and T.J. Zane (“Trustees”) were elected to the Poway Unified School District (PUSD) Board of Trustees in November 2014.2 While running for election, they each created public Facebook pages to promote their campaigns.3 They continued to use the pages to announce PUSD-related information and solicit public opinion about the Board’s decisions after assuming office.4 In 2016, O’Connor-Ratcliff created a public Twitter page for similar uses.5 These social media spaces, which were identified as official pages of government officials, allowed members of the public to reply to original posts made by the Trustees in the form of comments or to register nonverbal reactions.6
Christopher and Kimberly Garnier, parents of children in PUSD schools, began leaving comments on these pages sometime in 2015.7 The Garniers had been active critics of the Board for years, participating in public meetings and emailing the Board to express concerns about race relations in the District and the financial misconduct of PUSD’s superintendent at the time.8 Frustrated by the PUSD’s lack of response, the Garniers posted lengthy and repetitive comments on the Trustees’ Facebook and Twitter pages.9 For example, Christopher Garnier left nearly identical comments on 42 separate posts on O’Connor-Ratcliff’s Facebook page; he also posted 226 identical replies to her Twitter page within ten minutes.10 The Trustees initially responded by deleting or hiding these comments individually; around October 2017, they blocked the Garniers from their social media pages.11 Later, the Trustees used the “word filter” function on their Facebook pages to prevent comments containing designated words from being posted.12 The broad list of filtered words practically disabled any viewer from posting new comments.13
The Garniers then filed suit in federal court under 42 U.S.C. § 1983, claiming that the Trustees could not block them on social media consistent with the First Amendment.14 After a bench trial, Judge Benitez found for the Garniers.15 He held that the Trustees acted under color of state law and that their social media pages were designated public fora.16 Judge Benitez found that the initial blocking of the Garniers served the substantial government interest of “facilitat[ing] transparency in government” and “promoting online interaction with constituents through social media.”17 In his view, the blocking also constituted a narrowly tailored content-neutral regulation because it was based on the repetitive nature of the Garniers’ comments, rather than on their criticism of the Board.18 However, he concluded that the continued blocking of the Garniers for the next three years was no longer narrowly tailored to the transparency interest.19 He cautioned that the defendants may legitimately reblock the Garniers should they repeat their “repetitive and largely unreasonable behavior”20 and that the defendants may also adopt content-neutral rules of decorum.21 Both parties appealed.22
The Ninth Circuit affirmed.23 Writing for a unanimous panel, Judge Berzon24 ruled that the Trustees “violate[d] the First Amendment by creating a publicly accessible social media page related to [their] official duties and then blocking certain members of the public from that page.”25 Noting that a successful § 1983 claim requires state action, she chose the “nexus test” — which asks whether there exists “such a close nexus between the State and the challenged action that the seemingly private behavior may be fairly treated as that of the State itself” — as the appropriate test in the instant case.26 She applied the three-pronged test announced in Naffe v. Frey27 to demonstrate that the Trustees’ “use of their social media pages qualifie[d] as state action under § 1983.”28 First, the Trustees “purport[ed] . . . to act in the performance of [their] official duties,” as evidenced by their self-identification as government officials and the chief use of the social media pages to announce PUSD-related information.29 Second, the significant number of followers on the pages and the Trustees’ active solicitation of “constituent input about official PUSD matters”30 illustrated “the purpose and effect of influencing the behavior of others.”31 Third, the informative function of the social media pages related meaningfully to the Trustees’ “governmental status” and “to the performance of [their] duties.”32
Next, the court held that the Trustees “violated the First Amendment when they blocked the Garniers from their social media pages.”33 Judge Berzon analyzed the issue through the lens of the public forum doctrine, which scrutinizes speech regulation based on the category of forum regulated.34 Focusing on the pages’ open access to the public and the initial lack of any content regulation policy, she found the Facebook pages prior to the implementation of the word filter and O’Connor-Ratcliff’s Twitter page to be designated public fora — in which restrictions must be both content neutral and “narrowly tailored to serve a significant government interest.”35 However, once the Trustees disabled comments via the word filter, she found that the pages turned into limited public fora, in which all reasonable viewpoint-neutral restrictions are permissible.36 According to Judge Berzon, whether the initial blocking of the Garniers was content neutral posed “a close question.”37 Even if content neutral, however, the blocking served no significant government interest because the technical features of the Facebook and Twitter pages minimized the extent of disruption caused by the repetitive comments, either by trimming lengthy comments or limiting their visibility.38 Judge Berzon cited the holding from Norse v. City of Santa Cruz39 that a significant government interest in forum maintenance required a showing of “actual disruption” on the forum, which does not encompass “constructive disruption, technical disruption, virtual disruption, nunc pro tunc disruption, or imaginary disruption.”40 The visual clutter of the Garniers’ comments fell short of the regulation-worthy disruption that may be caused by an unruly speaker in a physical city hall meeting.41
Neither was the blocking narrowly tailored, given that the Trustees could have pursued alternative solutions, such as deleting individual comments or establishing rules of decorum.42 In addition, the continued blocking of the Garniers after the use of the word filter exceeded the bounds of permissible speech restrictions in a limited public forum.43 By effectively eliminating all comments, the word filter rendered the continued ban superfluous and, therefore, unreasonable.44
While the Ninth Circuit reached the correct outcome, its analytic approach recognizes only those forms of speech disruption that arise in physical spaces. The features that distinguish social media from traditional fora should require a recalibration of the philosophical assumptions underlying speech. The court’s treatment of the significant-government-interest prong in its forum analysis overfocused on analogizing to disruptions of speech in physical fora and failed to consider novel forms of “actual disruption” that social media enables. A more precise understanding of speech disruption would attend to speech that thwarts the conditions necessary for other speech to succeed.
Judge Berzon’s attempt to transplant existing free speech rules onto social media is symptomatic of a general tendency to treat the virtual forum as just another species of the physical forum. In Packingham v. North Carolina,45 the Supreme Court characterized social media as the contemporary version of the quintessential public forum for exchanging views, such as “a street or a park.”46 Lower courts have interpreted this language as establishing the presumption that “social media is entitled to the same First Amendment protections as other forms of media.”47
This inclination toward doctrinal consistency should not, however, ignore the caveat that certain forms of speech disruption are unique to cyberspace. The suggested equivalency between physical and virtual fora focuses on the nature of activities — what things people do — and not on the mechanism of communication — how people do things. While Packingham astutely observed that platforms such as Facebook, LinkedIn, and Twitter allow members of the public to “debate religion[,] . . . advertise for employees, . . . [and] petition their elected representatives,”48 the issue in that case — the constitutionality of a law banning sex offenders from even viewing, rather than posting on, social media49 — did not require noting that communication on these platforms occurs asynchronously or that algorithms may influence page accessibility.50
Similarity in the nature of communicative activities does not entail similarity in the modes of disruption to which those activities are vulnerable. The visual-clutter effect contemplated by Garnier itself is telling. While it may be tempting to treat this effect as a virtual analog of the unruly speaker in a city hall meeting, as the court did, visual clutter constitutes a different kind of disturbance altogether from auditory bombardment. For example, comments on a webpage persist on the page, whereas spoken words dissipate.51 In a matter of minutes, a single commenter can create lasting disruption; the city hall interrupter can sustain only a temporary disturbance. Garnier neglected to explore such differences, apparently deeming the meaning of “actual disruption” self-evident even in the virtual context.52
Sketching the criteria for actual disruption, in fact, deserves more attention. The task requires moving past imperfect analogies to the brick-and-mortar forum. The fundamental question of how speech gets disrupted can be recast to ask how speech fails, which in turn demands an understanding of how speech succeeds. Philosopher of language J.L. Austin’s seminal speech act theory can aid this enterprise. Challenging the traditional view that speech primarily states propositions, Austin observed that we do many other things with words — promising, commanding, betting, and so forth.53 This view of speech as the performance of speech acts enables a more precise assessment of the success or failure of speech by examining a speech act’s felicity conditions — conditions that must be satisfied in order for a speech act to achieve its intended effect.54 For example, the speech act of commanding requires, among other things, the commander’s authority as a felicity condition, in much the same way that the act of signing a contract must be performed by the correct signer to take effect. A low-ranking soldier uttering the word “Charge!” at a captain fails to command.
The dearth of discussion on what qualifies as actual disruption — in both cyberspace and physical space55 — is bound to cause thorny issues as online speech outruns comparisons to traditional speech. Given this uncertainty, Austin’s framework offers an analytic schema worth exploring: any speech act that thwarts the felicity condition of someone else’s speech act can plausibly be described as actually disrupting, or even silencing, that speech act. The prima facie plausibility of this schema lies in its explanatory capacity to account for traditional speech disruptions. One obvious way in which the city hall interrupter disrupts is by preventing other speakers from being heard. After all, most communicative statements share the felicity condition that the audience hear them; these speech acts must “secure uptake.”56
This flexible conception of speech disruption would invite courts and scholars to contemplate the unique felicity conditions of speech acts on social media. Garnier hints at one such condition by entertaining the defendants’ argument that the repetitive comments had “a net effect of slightly pushing down” other posts.57 The Trustees’ speech acts of informing and announcing, as well as other viewers’ speech acts of opining via comments, all depend on the felicity condition that their text be visible. In other words, the security of uptake on social media depends on visibility, which may be impaired by repetitious comments. Of course, the act of posting a comment does not depend on visibility; a mouse click accomplishes the posting, whether that comment reaches an audience or not. The natural assumption, however, is that the Trustees’ social media pages differ from, say, a YouTube comment section, in that they were meant to facilitate dialogue rather than merely allow unilateral self-expression, whether the speakers are the Trustees announcing news or parents criticizing the Trustees.
Social media is also especially vulnerable to trolling, spamming, and disinformation. Trolls, people “who post deliberately inflammatory messages online,”58 take advantage of the unsupervised, anonymous, and asynchronous nature of online forum discussions. While a physical forum is better equipped to handle such comments — by concurrent feedback from other participants, for example — an online forum lacks similar measures. By inciting irrelevant discussions, trolling not only dilutes the substance of the intended discussion hosted by a forum but also risks creating the perception that all comments on a forum are insincere.59 An observer will have difficulty distinguishing trolls from sincere participants and may thereby attribute “a generalized gross insincerity” to the forum.60 Hence, trolling attacks the felicity condition that speakers on a forum be trusted with some minimal level of sincerity. More invidious means of speech disruption might capitalize on social media algorithms. For instance, spam comments can cause Instagram’s algorithm to recognize a page as “spammy” and decrease the likelihood of the page appearing to new viewers.61
Still more problematic are disinformation campaigns, which pose grave threats in virtual space: “[Social media] does not possess the filters and vetting systems of traditional news media to process what is true and what is false. Thus, the platforms enable false information to spread widely and quickly.”62 Given the current First Amendment doctrine’s general tolerance of false speech,63 the speech act framework might equip courts with an alternative justification for regulating some types of disinformation by considering its disruptive effect on free speech. When fake news targets a specific individual, for example, it can undermine her credibility, preventing her from securing the minimal amount of trust necessary to perform basic speech acts.64 In these cases, the problem of disinformation can be construed not as a conflict between liberty and truth, but as “a conflict between liberty and liberty.”65
These examples may raise concerns about aggressive government regulation of speech. However, the analysis of actual disruption goes only to the prong of significant government interest and leaves intact the other prong of narrow tailoring, which requires that the government exhaust “easily available alternative modes of regulation.”66 Indeed, Garnier’s analysis that the Trustees could have “delet[ed] only repetitive comments rather than blocking the Garniers entirely” would preserve the holding in favor of the Garniers, even had the court acknowledged a significant government interest in forum maintenance.67
Today, social media and other cyber platforms are increasingly the dominant venues of public discourse. The characterization of these platforms as the “modern public square,”68 when taken too literally, risks missing the unique attributes of online speech that render it vulnerable to new forms of disruption. Going forward, courts should be prepared to recalibrate the philosophical assumptions underlying speech so as to entertain a broader array of speech disruptions.