First Amendment: Speech
Section 230 as First Amendment Rule
Section 230 of the Communications Decency Act of 19961×1. 47 U.S.C. § 230 (2012). has been lauded as "the most important law protecting internet speech" and called "perhaps the most influential law to protect the kind of innovation that has allowed the Internet to thrive."2×2. CDA 230: The Most Important Law Protecting Internet Speech, Electronic Frontier Found., https://www.eff.org/issues/cda230 [https://perma.cc/JN9Y-TVNT]; accord Jack M. Balkin, Old-School/New-School Speech Regulation, 127 Harv. L. Rev. 2296, 2313 (2014) ("Section 230 immunity . . . ha[s] been among the most important protections of free expression in the United States in the digital age."); David Post, A Bit of Internet History, or How Two Members of Congress Helped Create a Trillion or So Dollars of Value, Wash. Post: Volokh Conspiracy (Aug. 27, 2015), http://wapo.st/1K9AmTh [https://perma.cc/S4LN-WE9P]. The law's tremendous importance stems from the shield it provides to websites against suits based on torts committed by users. For instance, Wikipedia cannot be held liable for defamation posted by a user. This intermediary liability protection encourages websites to engage in content moderation without fear that their efforts to screen content will expose them to liability for defamatory material that slips through. Without this protection, websites would have an incentive to censor constitutionally protected speech in order to avoid potential lawsuits.3×3. See infra Part III, pp. 2032–47.
But § 230 is under attack on multiple fronts.4×4. Cindy Cohn & Jamie Williams, 20 Years of Protecting Intermediaries: Legacy of "Zeran" Remains a Critical Protection for Freedom of Expression Online, Law.com: The Recorder (Nov. 10, 2017, 8:31 AM), https://www.law.com/therecorder/sites/therecorder/2017/11/10/20-years-of-protecting-intermediaries-legacy-of-zeran-remains-a-critical-protection-for-freedom-of-expression-online/ [https://perma.cc/U7ER-JPN3]. From the popular media5×5. See, e.g., Arthur Chu, Mr. Obama, Tear Down This Liability Shield, TechCrunch (Sept. 29, 2015), https://techcrunch.com/2015/09/29/mr-obama-tear-down-this-liability-shield/ [https://perma.cc/C9QW-K965]. to Capitol Hill,6×6. See, e.g., Eric Goldman, How SESTA Undermines Section 230's Good Samaritan Provisions, Tech. & Marketing L. Blog (Nov. 7, 2017), http://blog.ericgoldman.org/archives/2017/11/how-sesta-undermines-section-230s-good-samaritan-provisions.htm [https://perma.cc/YJ75-343D] (addressing congressional efforts to amend § 230). some view the law with disdain. Various scholars have also heavily criticized § 230, arguing that amending the law would help reduce defamation online.7×7. See, e.g., Ann Bartow, Section 230 Keeps Platforms for Defamation and Threats Highly Profitable, Law.com: The Recorder (Nov. 13, 2017, 12:19 PM), https://www.law.com/therecorder/sites/therecorder/2017/11/10/section-230-keeps-platforms-for-defamation-and-threats-highly-profitable/ [https://perma.cc/ZMJ3-DEAN]. And, in the courts, 2016 was perhaps a nadir for § 230, as judges repeatedly adopted narrow readings of the law.8×8. Eric Goldman, Ten Worst Section 230 Rulings of 2016 (Plus the Five Best), Tech. & Marketing L. Blog (Jan. 4, 2017), http://blog.ericgoldman.org/archives/2017/01/ten-worst-section-230-rulings-of-2016-plus-the-five-best.htm [https://perma.cc/4N9G-3UTU] (collecting cases).
Against this current, this Note provides the first thorough argument that the First Amendment requires § 230's bar on holding websites liable for the defamation of their users. While the First Amendment does not "require" the federal statute, of course, this Note argues that the First Amendment rule should be the same as § 230's rule. Under the Supreme Court's First Amendment case law on defamation, the private censorship produced by defamation liability for internet intermediaries cannot be justified by a government interest in defamation law. Recognizing § 230's more stable constitutional provenance explains why courts traditionally adopted a broad reading of the law, demonstrates the law's substantive importance, and helps predict what might occur should detractors succeed in achieving amendment by Congress.
Part I describes secondary liability for defamation and § 230. Part II explains the prevailing assumption among judges and scholars that the First Amendment does not require § 230. Part III then challenges this assumption, arguing that the Constitution protects internet intermediaries from liability for defamation committed by their users. The censorship that would result from internet intermediary liability for defamation cannot be saved by the government's interest in imposing liability.9×9. This Note seeks to demonstrate the constitutional relevance of the policy-based arguments in favor of § 230, though it does not itself engage in a full-fledged policy analysis. Part IV discusses this Note's implications and concludes.
I. Defamation, Intermediary Liability, and § 230
Defamation is a common law tort that protects individuals against the publication of harmful false statements about them.10×10. Restatement (Second) of Torts § 558 (Am. Law Inst. 1977). "Publication" includes intentional and unreasonable failure to remove defamatory material under one's control.11×11. Id. § 577(2). Distributors, such as booksellers, may be held liable for defamation they transmit if they knew or had reason to know of its defamatory nature, but are not under a general duty to screen the items they retail.12×12. Id. § 581 & cmts. d & e.
In the 1990s, courts began to apply these doctrines to internet services. In Cubby, Inc. v. CompuServe Inc.,13×13. 776 F. Supp. 135 (S.D.N.Y. 1991). a district court held that an internet service provider was not liable for allegedly defamatory content in one of its online forums because it had "no more editorial control" than would "a public library, book store, or newsstand,"14×14. Id. at 140. and therefore was a mere distributor that did not know or have reason to know of the content.15×15. Id. at 140–41. The court held there was no genuine issue of material fact as to knowledge. Id. at 141. Later, in Stratton Oakmont, Inc. v. Prodigy Services Co.,16×16. 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995). a state court held that because an owner of online bulletin boards had exercised "editorial control" over offensive content, it could be held liable as a publisher of defamatory posts.17×17. Id. at *4–5. The website also held itself out as engaging in moderation. Id. This pair of cases posed a troubling choice for websites. If they took a hands-off approach to moderation, they received significant protection from liability. However, if they sought to proactively regulate content on their websites, they might face liability.18×18. Zeran v. Am. Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997) (noting that Stratton Oakmont created "disincentives to selfregulation [sic]"). This dilemma "created a minor sensation."19×19. David R. Sheridan, Zeran v. AOL and the Effect of Section 230 of the Communications Decency Act upon Liability for Defamation on the Internet, 61 Alb. L. Rev. 147, 159 (1997) (citing Robert Cannon, The Legislative History of Senator Exon's Communications Decency Act: Regulating Barbarians on the Information Superhighway, 49 Fed. Comm. L.J. 51, 62 nn.51–52 (1996)).
These concerns were heard on Capitol Hill when Congress enacted section 509 of the Communications Decency Act (codified at 47 U.S.C. § 230), which overruled Stratton Oakmont.20×20. Id. at 150–51; Cannon, supra note 19, at 61–63, 62 nn.51–52. Section 230 provides that no website that relies on user-generated content "shall be treated as the publisher or speaker of any information provided by another information content provider."21×21. 47 U.S.C. § 230(c)(1) (2012). Therefore, a website cannot be held liable for defamation posted by a user even if the website knows or has reason to know of the defamatory content.22×22. Zeran, 129 F.3d at 331–33. Of course, if an intermediary website itself created defamatory content, it could be held liable23×23. See 47 U.S.C. § 230(f)(3) (defining "information content provider"); id. § 230(c)(1) (exempting websites from liability only for information "provided by another information content provider" (emphasis added)). — for example, if Facebook itself wrote a blog post on its website defaming the creators of Google Plus. In other words, websites are not immune from defamation claims. They are merely protected from being held secondarily liable for the defamatory statements of others.
In interpreting § 230, courts have largely followed through on congressional hopes of providing intermediary liability protection to websites for defamation claims.24×24. See David S. Ardia, Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act, 43 Loy. L.A. L. Rev. 373, 452 (2010) ("Defamation-type claims were far and away the most numerous claims in the section 230 case law, and the courts consistently held that these claims fell within section 230's protections." (footnotes omitted)); see also, e.g., Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003); Green v. Am. Online, 318 F.3d 465 (3d Cir. 2003). Courts have reached inconsistent results in nontraditional cases outside defamation law. See, e.g., Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016) (holding that § 230 did not protect the owner of the website Model Mayhem from a failure-to-warn claim); Recent Case, Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016), 130 Harv. L. Rev. 777, 777 (2016) (criticizing the Ninth Circuit's decision for "declin[ing] to adopt an alternative understanding of the statute more in line with the law's stated policy objectives"). For example, in the "seminal"25×25. Cathy Gellis, The First Hard Case: "Zeran v. AOL" and What It Can Teach Us About Today's Hard Cases, Law.com: The Recorder (Nov. 10, 2017, 1:02 AM), https://www.law.com/therecorder/sites/therecorder/2017/11/10/the-first-hard-case-zeran-v-aol-and-what-it-can-teach-us-about-todays-hard-cases/ [https://perma.cc/2U75-X7J8]; see Patrick J. Carome & Cary A. Glynn, Serendipity and Internet Law: How the "Zeran v. AOL" Landmark Almost Wasn't, Law.com: The Recorder (Nov. 10, 2017, 8:29 AM), https://www.law.com/therecorder/sites/therecorder/2017/11/10/serendipity-and-internet-law-how-the-zeran-v-aol-landmark-almost-wasnt/ [https://perma.cc/J95X-ZRXB]. case Zeran v. America Online, Inc.,26×26. 129 F.3d 327. then–Chief Judge Wilkinson held that § 230 protected America Online from a defamation claim based on messages posted on its bulletin boards.27×27. Id. at 328. Judge Wilkinson explained § 230 succinctly: it "creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service."28×28. Id. at 330. The law bars suits that would hold websites liable for decisions about whether and how to moderate user-generated content.29×29. Id. at 331. As to congressional purpose, Judge Wilkinson identified first that the "specter of tort liability in an area of such prolific speech would have an obvious chilling effect" and second that § 230 encourages websites to moderate content without fear of liability.30×30. Id.
II. The Assumption that the First Amendment Does Not Require § 230
Judges and academics are nearly unanimous in assuming that the First Amendment does not require § 230.31×31. For courts, see, for example, Batzel v. Smith, 333 F.3d 1018, 1020 (9th Cir. 2003). Since the enactment of § 230, courts have had little reason to reach this constitutional question. In Cubby, decided before the enactment of § 230, the court cited a First Amendment case to support its holding but did not discuss the notion that the First Amendment might provide even more protection to websites.32×32. Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 139 (S.D.N.Y. 1991) (citing Smith v. California, 361 U.S. 147, 152–53 (1959)). In Stratton Oakmont, the court acknowledged that the website's moderation system "may have a chilling effect on freedom of communication in Cyberspace," even though the court in effect required this type of website to employ similar moderation to avoid liability.33×33. Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710, at *5 (N.Y. Sup. Ct. May 24, 1995). There too the court did not consider First Amendment concerns. As one district court put it, "Section 230 reflects a 'policy choice,' not a First Amendment imperative, to immunize ISPs from defamation . . . driven, in part, by free speech concerns."34×34. Gucci Am., Inc. v. Hall & Assocs., 135 F. Supp. 2d 409, 421 (S.D.N.Y. 2001) (quoting Zeran, 129 F.3d at 330). More recently, in Gonzalez v. Google, Inc.,35×35. No. 16-CV-03282, 2017 WL 4773366 (N.D. Cal. Oct. 23, 2017). the court stated in passing that "[i]n the absence of the protection afforded by section 230(c)(1), one who published or distributed speech online" may be liable for defamation even if the website had no knowledge of the content.36×36. Id. at *4 (citing Batzel, 333 F.3d at 1026–27) (ignoring the First Amendment question). In 2016, a First Circuit panel acknowledged that "First Amendment values . . . drive" § 230, but wrote that this rule could be amended via mere legislation.37×37. Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 29 (1st Cir. 2016), cert. denied, 137 S. Ct. 622 (2017). To be sure, perhaps the panel would differentiate between the constitutional law of defamation and potential criminal liability for sex trafficking.
Academics share the assumption that the First Amendment does not require § 230.38×38. See, e.g., Jack M. Balkin, The Future of Free Expression in a Digital Age, 36 Pepp. L. Rev. 427, 434 (2009) ("[Section 230] is not required by First Amendment doctrine."); Rebecca Tushnet, Power Without Responsibility: Intermediaries and the First Amendment, 76 Geo. Wash. L. Rev. 986, 1008 n.95 (2008) ("Before the CDA, the assumption in the law reviews tended to be that the [New York Times v. Sullivan] standard was the best to be hoped for as a constitutional matter."). As Professor Rebecca Tushnet writes, the "First Amendment does not currently require a particular solution" for internet intermediary defamation liability.39×39. Tushnet, supra note 38, at 988. In defending § 230, Professor Jeff Kosseff admits that its "immunity extends beyond intermediary protections provided by the First Amendment."40×40. Jeff Kosseff, Defending Section 230: The Value of Intermediary Immunity, 15 J. Tech. L. & Pol'y 123, 136 (2010). And Professor William H. Freivogel puts it bluntly: "It would not be accurate to argue that the First Amendment requires Section 230."41×41. William H. Freivogel, Does the Communications Decency Act Foster Indecency?, 16 Comm. L. & Pol'y 17, 48 (2011). In canvassing the First Amendment options for addressing how internet platforms moderate content, one scholar does not address the possibility of § 230 as a First Amendment rule.42×42. Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1613–15 (2018). Other commentators seem to share this assumption as well.43×43. See, e.g., James Grimmelmann, No ESC, Law.com: The Recorder (Nov. 10, 2017, 2:03 AM), https://www.law.com/therecorder/sites/therecorder/2017/11/10/no-esc/ [https://perma.cc/B2ZW-HQHJ] (referring to § 230 as "subconstitutional free speech law"). But cf. Cecilia Ziniti, Note, The Optimal Liability System for Online Service Providers: How Zeran v. America Online Got It Right and Web 2.0 Proves It, 23 Berkeley Tech. L.J. 583, 605–06 (2008) (arguing briefly that a notice-based system would be unconstitutional). Moreover, the many scholars who have criticized § 230 do not seem to believe that a response is necessary against the charge that the rule is mandated by the Constitution. For instance, two critics simply write that § 230 is "not required by the First Amendment."44×44. Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 Fordham L. Rev. 401, 419 (2017); see also Heather Saint, Note, Section 230 of the Communications Decency Act: The True Culprit of Internet Defamation, 36 Loy. L.A. Ent. L. Rev. 39, 69 (2015).
III. Why the First Amendment Requires § 230
This Part begins by explaining First Amendment scrutiny of defamation law and then argues that, under that case law, imposing defamation liability on internet intermediaries is unconstitutional.
Like § 230, the First Amendment operates as a constraint on the scope of defamation law. While some regulations of speech may be reviewed, for example, under the "generic" strict scrutiny test,45×45. Richard H. Fallon, Jr., Strict Judicial Scrutiny, 54 UCLA L. Rev. 1267, 1292 (2007). other types of speech are governed by specific tests devised by the Court "on a largely ad hoc basis."46×46. Id. at 1291. The specific rules that the Court devised to govern defamation law, for instance in the 1964 landmark case New York Times Co. v. Sullivan,47×47. 376 U.S. 254 (1964). exemplify this ad hoc approach.
In New York Times, the Supreme Court held that under the First Amendment public officials alleging defamation must show the defendant acted with "actual malice" — knowledge of falsity or reckless disregard of whether the statement was false.48×48. Id. at 279–80. The Court reasoned that not requiring actual malice could stifle vital discourse because of the fear of civil liability.49×49. Id. at 277–80. "A rule compelling the critic of official conduct to guarantee the truth of all his factual assertions," the Court feared, leads to "self-censorship."50×50. Id. at 279. Potential defendants might worry that they could not prove in court the legality of their statements or afford expensive litigation and therefore "make only statements which 'steer far wider of the unlawful zone.'"51×51. Id. (quoting Speiser v. Randall, 357 U.S. 513, 526 (1958)).
Later, in Gertz v. Robert Welch, Inc.,52×52. 418 U.S. 323 (1974). the Court held that private individuals alleging defamation did not need to meet an actual malice requirement.53×53. Id. at 347. The Court noted that "punishment of error runs the risk of inducing a cautious and restrictive exercise of the constitutionally guaranteed freedoms of speech and press."54×54. Id. at 340. The Court explained that the interest supporting defamation law is "the compensation of individuals for the harm inflicted on them by defamatory falsehood."55×55. Id. at 341. This interest, the Court explained, emanated from the importance of protecting individuals' reputations.56×56. Id. (citing Rosenblatt v. Baer, 383 U.S. 75, 92 (1966) (Stewart, J., concurring)). In resolving the "tension" between this interest and freedom of speech, the Court sought "breathing space" for the right to free speech by bestowing "strategic protection" under the New York Times standard.57×57. Id. at 342 (quoting NAACP v. Button, 371 U.S. 415, 433 (1963)). The Court distinguished New York Times on two grounds. First, public officials and figures are better able to engage in counterspeech, whereas private individuals find it more difficult to refute published falsehoods.58×58. Id. at 344. Second, public officials and figures, unlike private individuals, voluntarily assume the risk of being subject to falsehoods.59×59. Id. at 345. Additionally, because the Court "require[d] that state remedies for defamatory falsehood reach no farther than is necessary to protect the legitimate interest involved"60×60. Id. at 349. in order to balance "compensating private individuals for wrongful injury to reputation"61×61. Id. at 348. with "the constitutional command of the First Amendment,"62×62. Id. at 349. it held unconstitutional punitive damages awarded without actual malice.63×63. Id. at 349–50. The Court held that awarding punitive damages, compensation beyond actual injury, was both less valuable and more prone to abuse. Id. at 350.
In devising the rules governing defamation claims, and in other areas of First Amendment doctrine, the Supreme Court has engaged in a methodology of constitutional reasoning grounded in optimizing practical results. As Professor Richard Fallon explains, in developing various areas of constitutional doctrine, the Supreme Court must make determinations about empirical matters that inform the rules it crafts.64×64. Richard H. Fallon, Jr., The Supreme Court, 1996 Term — Foreword: Implementing the Constitution, 111 Harv. L. Rev. 54, 62 (1997). In New York Times and Gertz, Fallon recounts, the Court did not merely "balance, in an abstract way," freedom of speech and the interest undergirding defamation law.65×65. Id. at 63. Instead, it also made "more concrete, empirical, and predictive assessments" regarding the "proclivity of the press to engage in self-censorship under alternative liability regimes," "the proportion of truthful and untruthful assertions that would be chilled by such regimes," "the harms that would be done by false speech," and "the benefits of truthful speech that would be forgone under various imaginable rules."66×66. Id. This accommodation is evidenced in Gertz's tailoring of its rule to require actual malice for punitive damages. More dramatically, Professor Daniel Farber identifies New York Times as an example of the notion that "First Amendment doctrines reflect the fear that certain laws overdeter speech and thus lead to a suboptimal amount of total information disseminated in society,"67×67. Daniel A. Farber, Free Speech Without Romance: Public Choice and the First Amendment, 105 Harv. L. Rev. 554, 568 (1991). in order to demonstrate that First Amendment doctrines embody "public choice theory — that is, the application of economics methodology to political institutions."68×68. Id. at 555. Finally, implementing this policy-based method of constitutional reasoning often involves what Professor David Faigman terms "constitutional fact-finding," the Court's use of empirical claims to create constitutional law.69×69. See David L. Faigman, "Normative Constitutional Fact-Finding": Exploring the Empirical Component of Constitutional Interpretation, 139 U. Pa. L. Rev. 541, 550 (1991). As Fallon agrees, New York Times and Gertz are not "atypical in their reliance on empirical, predictive calculations,"70×70. Fallon, supra note 64, at 63. and Faigman demonstrates that the Supreme Court routinely makes assumptions about empirical propositions to support constitutional decisionmaking.71×71. See Faigman, supra note 69, at 550.
In employing this practical optimization methodology in New York Times, the Court was comfortable calibrating a rule for public officials that intentionally "overenforce[s]" constitutional goals.72×72. Fallon, supra note 64, at 63, 65. Indeed, as Professor David Strauss observes, in constitutional law, prophylactic rules are both ubiquitous and necessary.73×73. David A. Strauss, The Ubiquity of Prophylactic Rules, 55 U. Chi. L. Rev. 190, 190 (1988). Strauss notes that from Miranda warnings to strict scrutiny, constitutional law is replete with rules aimed at protecting rights through overenforcement.74×74. Id. at 205, 208, 209. Expressly building on Strauss's foundation, Professor Daryl Levinson identifies "[d]efamation law [as] another clear example of a First Amendment prophylactic rule."75×75. Daryl J. Levinson, Rights Essentialism and Remedial Equilibration, 99 Colum. L. Rev. 857, 902 n.186 (1999). Agreeing that prophylactic rules are ubiquitous, Levinson explains that constitutional rules necessarily "depend on such factors as the administrability and expense of a more precise rule and the error costs of false negatives and false positives."76×76. Id. at 904.
The practical optimization the Supreme Court employed in New York Times and Gertz to calibrate such a First Amendment prophylactic rule suggests that the constitutionality of internet intermediary defamation liability should be assessed along two dimensions that mirror the analysis in those cases: the degree to which this type of defamation liability, first, impinges on protected speech and, second, promotes a governmental interest. Those cases addressed the First Amendment constraints on setting mental states for defamation liability, whereas this Note employs their framework to propose First Amendment constraints on secondary liability for defamation. This Note contends that the censorship that would result from internet intermediary liability for defamation cannot be saved by the government's interest in imposing liability. In contrast to scholars and jurists who have paid these First Amendment questions relatively little attention, this Note intends to demonstrate the constitutional relevance of the policy-based arguments in favor of § 230, though it does not itself engage in a full-fledged policy analysis.
Without § 230 as the constitutional rule, internet intermediaries would limit a significant amount of constitutionally protected speech. The New York Times Court feared that without the requirement of actual malice, "would-be critics of official conduct" would hesitate to speak.77×77. New York Times Co. v. Sullivan, 376 U.S. 254, 279 (1964). Internet intermediary liability implicates a specific variety of self-censorship — collateral censorship — which the New York Times Court explained by quoting Smith v. California78×78. 361 U.S. 147 (1959). at length.79×79. New York Times, 376 U.S. at 278–79 (quoting Smith, 361 U.S. at 153–54). What Professor Jack Balkin has termed "collateral censorship" arises not when individuals limit their own speech based on a fear of liability, but rather "when A censors B out of fear that the government will hold A liable for the effects of B's speech."80×80. J.M. Balkin, Essay, Free Speech and Hostile Environments, 99 Colum. L. Rev. 2295, 2296 (1999). In Smith, the Court held unconstitutional an ordinance that prohibited bookstores from possessing obscene books.81×81. Smith, 361 U.S. at 148, 155. In rejecting that strict liability rule, the Court explained that many "legal devices and doctrines, in most applications consistent with the Constitution, . . . cannot be applied in settings where they have the collateral effect of inhibiting the freedom of expression, by making the individual the more reluctant to exercise it."82×82. Id. at 150–51. While obscenity is not protected by the First Amendment, the ordinance's lack of a scienter requirement jeopardized citizens' access to a variety of protected speech.83×83. Id. at 153. New York Times quoted from the following key passage84×84. New York Times, 376 U.S. at 278–79 (quoting Smith, 361 U.S. at 153–54).:
For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. . . . And the bookseller's burden would become the public's burden, for by restricting him the public's access to reading matter would be restricted. . . . The bookseller's limitation in the amount of reading material with which he could familiarize himself, and his timidity in the face of his absolute criminal liability, thus would tend to restrict the public's access to forms of the printed word which the State could not constitutionally suppress directly. The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered.85×85. Smith, 361 U.S. at 153–54 (footnote omitted).
As in Smith, exposing internet intermediaries to liability for defamation communicated by their users would lead to collateral censorship.
First, content moderation to cope with intermediary liability is difficult, and therefore costly.86×86. Aaron Perzanowski, Comment, Relative Access to Corrective Speech: A New Test for Requiring Actual Malice, 94 Calif. L. Rev. 833, 858 n.172 (2006). When a website confronts potentially defamatory user-generated content, it must resolve questions of both law and fact. As to questions of law, there is no national law of defamation but instead a fifty-state patchwork.87×87. James R. Pielemeier, Constitutional Limitations on Choice of Law: The Special Case of Multistate Defamation, 133 U. Pa. L. Rev. 381, 384–391 (1985). Therefore, websites must resolve the choice of law inquiry regarding which state's law applies and then determine what that state's rule is.88×88. See id. at 391; Philip Adam Davis, Note, The Defamation of Choice-of-Law in Cyberspace: Countering the View that the Restatement (Second) of Conflict of Laws Is Inadequate to Navigate the Borderless Reaches of the Intangible Frontier, 54 Fed. Comm. L.J. 339, 340–42 (2002); Corey Omer, Note, Intermediary Liability for Harmful Speech: Lessons from Abroad, 28 Harv. J.L. & Tech. 289, 316–18 (2014). Moreover, defamation law abounds with privileges and exceptions. Even if a website determined that certain content would support a prima facie case for defamation, it would still need to determine the applicability of various privileges and exceptions.89×89. Cf. Meera Nair, Adjudication by Algorithm, Fair Duty (Jan. 3, 2018, 8:33 AM), https://fairduty.wordpress.com/2018/01/03/adjudication-by-algorithm/ [https://perma.cc/BQ5U-WHF6] (explaining that in the copyright context, the "entire list of exceptions is extensive and should be part of any algorithmic effort to" moderate and remove potentially copyrighted content). Questions of fact are also difficult for websites to resolve, involving "considerable costs of investigation."90×90. See Perzanowski, supra note 86, at 858 n.172. For example, a statement that a business often fails to meet its commercial obligations is not easily verifiable. To the extent that it is difficult for judges and juries to determine the truthfulness of potentially defamatory statements, it is even more difficult for intermediary websites to do so.91×91. See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Immunity, 87 Notre Dame L. Rev. 293, 301 (2011); Paul Sieminski & Holly Hogan, Why (Allegedly) Defamatory Content on WordPress.com Doesn't Come Down Without a Court Order, TechDirt (Feb. 7, 2018, 1:32 PM), https://www.techdirt.com/articles/20180206/10271639166/why-allegedly-defamatory-content-wordpresscom-doesnt-come-down-without-court-order.shtml [https://perma.cc/46P7-QCKY]. Even upon receiving notice that a statement is allegedly defamatory, a website does not know whether a complainant is correct or merely hoping to illegitimately induce takedown.92×92. See Seth F. Kreimer, Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link, 155 U. Pa. L. Rev. 11, 86 & n.238 (2006). In the copyright context, a large number of takedown requests to websites are illegitimate.93×93. See Takedown Hall of Shame, Electronic Frontier Found., https://www.eff.org/takedowns [https://perma.cc/PKB9-PR3N]. Some websites have experimented with artificial intelligence algorithms to moderate content.94×94. See Are Algorithms the Future of Content Moderation?, WebPurify (July 23, 2015), https://www.webpurify.com/blog/algorithms-future-content-moderation/ [https://perma.cc/Z3C7-J9LD]. However, algorithms have struggled to correctly moderate content: for example, differentiating between impermissible nudity and fine art.95×95. Id.; see also Nair, supra note 89; Natasha Duarte et al., Ctr. for Democracy & Tech., Mixed Messages? The Limits of Automated Social Media Content Analysis 5 (2017), https://cdt.org/files/2017/11/Mixed-Messages-Paper.pdf [https://perma.cc/M7Z7-PP5K]; Rhett Jones, Man's YouTube Video of White Noise Hit with Five Copyright Claims, Gizmodo (Jan. 5, 2018, 10:05 AM), https://gizmodo.com/man-s-youtube-video-of-white-noise-hit-with-five-copyri-1821804093 [https://perma.cc/882V-LXGS]. It would be even more difficult for artificial intelligence to properly identify defamation and quite costly to develop that software. And humans are not happy performing the task.96×96. See Lauren Weber & Deepa Seetharaman, The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook, Wall St. J. (Dec. 27, 2017, 10:42 PM), https://www.wsj.com/articles/the-worst-job-in-technology-staring-at-human-depravity-to-keep-it-off-facebook-1514398398 [https://perma.cc/JG62-63XN]. It is difficult to quickly determine whether certain speech is merely critical or actionable defamation. These difficulties are amplified by the volume of content websites face. As Zeran recognized about moderating "millions of postings,"97×97. Zeran v. Am. Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997). "[a]lthough this might be feasible for the traditional print publisher, the sheer number of postings on interactive computer services would create an impossible burden in the Internet context."98×98. Id. at 333 (citing Auvil v. CBS 60 Minutes, 800 F. Supp. 928, 931 (E.D. Wash. 1992)). Efforts to surmount these difficulties, and thus increase the accuracy of moderation to avoid intermediary liability, would be costly because those efforts require investments in labor, time, or technology.
Second, as Smith recognized, the difficulties and costs created by intermediary liability would cause many websites to engage in various forms of collateral censorship — often the least costly method of avoiding liability.99×99. See Smith v. California, 361 U.S. 147, 153 (1959) (noting that imposing liability on the bookseller would decrease the books on offer). In general, websites would err on the side of caution, defaulting to removing allegedly defamatory content instead of engaging in costly legal and factual investigation.100×100. Perzanowski, supra note 86, at 858 n.172. The cost to websites of collaterally censoring is very low, whereas the cost of not censoring content is much higher because that decision risks expensive litigation and adverse judgments.101×101. Id. Websites "may be deterred from" permitting certain content, as New York Times explained, "even though it is believed to be true and even though it is in fact true, because of doubt whether it can be proved in court or fear of the expense of having to do so."102×102. New York Times Co. v. Sullivan, 376 U.S. 254, 279 (1964). Individual website employees are unlikely to face repercussions for playing it safe but could face ramifications for allowing content that later leads to litigation expenses. Whether or not websites believe a potential lawsuit is meritorious, they will often default to removal because of the potential costs of litigation or an adverse result.103×103. Zeran, 129 F.3d at 333. Even websites, like Facebook, that can "afford" high moderation and litigation costs would still prefer to avoid them, and this judgment will likely influence their moderation. Therefore, in the words of New York Times, websites would tend to permit "only statements which 'steer far wider of the unlawful zone.'"104×104. 376 U.S. at 279 (quoting Speiser v. Randall, 357 U.S. 513, 526 (1958)).
More generally, some websites might decide not to allow entire categories of content that are more likely to expose them to liability. For example, politically controversial speech or business and product reviews may be more likely to lead to defamation actions than more mundane content.105×105. Balkin, supra note 38, at 436. Or bloggers might decline to include a comment section.106×106. Id.
Worse still, some websites might never launch.107×107. Id. Because of their business models — perhaps a focus solely on particularly controversial content — the anticipated costs of moderation and litigation could prevent them from even securing capital or launching.108×108. See Matthew Le Merle et al., Booz & Co., The Impact of U.S. Internet Copyright Regulations on Early-Stage Investment 19 (2011), https://www.strategyand.pwc.com/media/uploads/Strategyand-Impact-US-Internet-Copyright-Regulations-Early-Stage-Investment.pdf [https://perma.cc/FP3E-NW3C]; Jerry Berman, Policy Architecture and Internet Freedom, Law.com: The Recorder (Nov. 10, 2017, 3:53 AM), https://www.law.com/therecorder/sites/therecorder/2017/11/10/policy-architecture-and-internet-freedom/ [https://perma.cc/T9C5-8PNB] ("Without § 230 . . . speech would be limited and new applications might never have emerged if required to finance costly legal overhead to do business on the Internet."). This issue might be termed complete collateral censorship — where an intermediary fails to come into existence because of a fear of being held liable for the speech of others. Various websites credit § 230 with their very existence.109×109. See, e.g., Adi Kamdar, CDA 230 Success Cases: Wikipedia, Electronic Frontier Found. (July 26, 2013), https://www.eff.org/deeplinks/2013/07/cda-230-success-cases-wikipedia [https://perma.cc/6CTN-A33M].
Additional collateral censorship will result from mistakes. Because the imposition of liability would lead to more moderation and removal, websites are more likely to make mistakes in removal decisions. Websites may make technical mistakes (perhaps from a user's accidental clicking of a "report" button). But given the difficulty of factual investigation, they are also likely to make fundamental mistakes about the factual basis of defamation claims — removing content based on incorrect understandings of the veracity of users' allegations. Moreover, websites will make mistakes of law. Fearing these mistakes, websites may default to adherence to the strictest state laws, thus censoring more speech and allowing the most speech-restrictive states to govern the entire internet. If websites employ algorithms to shoulder this legal burden, they expose themselves to the inaccuracies in those programs.
Due to the problems noted above, opportunistic lawyers or other individuals will attempt to exploit websites' vulnerabilities. Businesses and individuals that do not like posts about them on websites will request that the posts be taken down whether they are defamatory or not.110×110. See, e.g., Christina Mulligan, Technological Intermediaries and Freedom of the Press, 66 SMU L. Rev. 157, 182 (2013); Wu, supra note 91, at 301; Sieminski & Hogan, supra note 91. Individuals and businesses hoping to have material taken down will learn how to manipulate intermediaries.111×111. Perzanowski, supra note 86, at 858 n.172. Websites would face difficulties dealing with even good faith reports of defamation, let alone handling individuals who allege defamation as a cynical tactic to remove the content they dislike.112×112. See id. If a business wants to hide a bad review or an individual hopes to conceal a piece of truthful but unflattering information, the business or individual can notify the website that the content is false and threaten to sue. Even if a website does not immediately capitulate, it will incur large costs investigating these claims and may reach the incorrect conclusion. During the investigation period, the website may take down the content, which would also inhibit speech. For potentially defamatory posts, websites might decide to implement a delay so that they can prescreen content for defamation.
For these reasons, notice-based liability is problematic. As then–Chief Judge Wilkinson explained in Zeran, "liability upon notice has a chilling effect on the freedom of Internet speech" "[b]ecause service providers would be subject to liability only for the publication of information, and not for its removal, [so] they would have a natural incentive simply to remove messages upon notification, whether the contents were defamatory or not."113×113. Zeran v. Am. Online, Inc., 129 F.3d 327, 333 (4th Cir. 1997); see Sieminski & Hogan, supra note 91.
Third, the nondefamatory speech lost to collateral censorship is often valuable. In cases like Reno v. ACLU,114×114. 521 U.S. 844 (1997). the Supreme Court has demonstrated an appreciation for the vital role internet speech plays in modern society. The Court lauded the then-nascent internet's "vast democratic forums."115×115. Id. at 868. It described the internet as a "dynamic, multifaceted category of communication includ[ing] not only traditional print and news services, but also audio, video, and still images, as well as interactive, real-time dialogue."116×116. Id. at 870. It noted that "any person with [internet access] can become a town crier with a voice that resonates farther than it could from any soapbox."117×117. Id. In addition, the Court observed that because of the tremendous scale of the internet, speech regulations that threatened liability for certain acts could limit many types of protected speech.118×118. Id. at 875–78. More recently, in Packingham v. North Carolina,119×119. 137 S. Ct. 1730 (2017). the Supreme Court held unconstitutional a statute that prohibited registered sex offenders from accessing social networking websites, like Facebook or Twitter, that allow children to have accounts.120×120. Id. at 1733, 1738. The Court explained that "to foreclose access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights."121×121. Id. at 1737. It deemed the internet "the most important place[] (in a spatial sense) for the exchange of views."122×122. Id. at 1735 (emphasis added). The Court continued that an understanding of the internet "informs the analysis"123×123. Id. at 1736. of a law in question:
Social media offers "relatively unlimited, low-cost capacity for communication of all kinds." On Facebook, for example, users can debate religion and politics with their friends and neighbors or share vacation photos. On LinkedIn, users can look for work, advertise for employees, or review tips on entrepreneurship. And on Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. . . . In short, social media users employ these websites to engage in a wide array of protected First Amendment activity on topics "as diverse as human thought."
. . . While we now may be coming to the realization that the Cyber Age is a revolution of historic proportions, we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be.124×124. Id. at 1735–36 (citations omitted).
The Supreme Court's veneration of internet speech suggests special caution before permitting laws that limit it.125×125. Id.
More specifically, the nondefamatory speech lost to collateral censorship will often be vulnerable speech.126×126. If websites default to accepting a mere prima facie case for defamation as grounds to censor content, some of the collaterally censored speech will be communication that fits into the various well-justified exceptions and privileges to defamation claims. Individuals who want certain speech taken down sometimes file illegitimate content takedown requests.127×127. See Jeffrey Cobia, Note, The Digital Millennium Copyright Act Takedown Notice Procedure: Misuses, Abuses, and Shortcomings of the Process, 10 Minn. J.L. Sci. & Tech. 387, 391 (2009). This dynamic allows the majority to suppress minority views or could constitute a potential heckler's veto.128×128. Brett G. Johnson, The Heckler's Veto: Using First Amendment Theory and Jurisprudence to Understand Current Audience Reactions Against Controversial Speech, 21 Comm. L. & Pol'y 175, 176–77 (2016). The speech that is the first to be collaterally censored may be the most vulnerable and least likely to appear through alternative channels. At its core, the First Amendment seeks to protect unpopular views129×129. See John Hart Ely, Democracy and Distrust 112 (1980); Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 641 (1994). — unobjectionable views are less frequently jeopardized. As noted above, because of the cost of additional content moderation, some websites may turn to algorithms for assistance. Yet recently, algorithms have fared no better in protecting marginalized speech: Google's artificial intelligence moderation system that seeks to highlight toxic speech accidentally flags sentences such as "I am a gay woman."130×130. Elliot Harmon & Jeremy Gillula, Stop SESTA: Whose Voices Will SESTA Silence?, Electronic Frontier Found. (Sept. 13, 2017), https://www.eff.org/deeplinks/2017/09/stop-sesta-whose-voices-will-sesta-silence [https://perma.cc/A5KG-NE8A].
Other vulnerable speech includes speech that provides little immediate personal benefit but, when part of a larger community of such speech, a large public benefit — such as business reviews or Wikipedia edits. Some of the most socially beneficial forms of speech that can pose defamation concerns are consumer reviews, such as those on Yelp. These websites have flourished because of § 230.131×131. CDA § 230 Success Case: Yelp, Electronic Frontier Found., https://www.eff.org/issues/cda230/successes/yelp [https://perma.cc/THP3-V64S]. Facing liability, review websites would become more cautious and manipulable, and therefore less accurate, thus decreasing competition. Nonprofits like Wikipedia also depend on § 230 to freely provide accurate content.132×132. Kamdar, supra note 109.
Ultimately, the threat of defamation liability will often drive websites to protect themselves by overcensoring valuable user speech.
The second area of First Amendment analysis concerns the government's interest underlying defamation law. In Gertz, the Court held that the "legitimate state interest underlying the law of libel is the compensation of individuals for the harm inflicted on them by defamatory falsehood."133×133. Gertz v. Robert Welch, Inc., 418 U.S. 323, 341 (1974). However, the Court articulated a rationale for the compensation interest that spoke to a broader purpose: each individual has the "right to the protection of his own good name."134×134. Id. (citing Rosenblatt v. Baer, 383 U.S. 75, 92 (1966) (Stewart, J., concurring)). This reputational rationale is broader than the interest in compensation because it undergirds a larger swath of defamation law. For example, a reform that would increase only the deterrent effect of defamation law could not be supported by the compensation interest because that reform would not necessarily increase the likelihood of compensation; however, it would certainly promote the reputational rationale by decreasing the prevalence of defamation through deterrence.
In general, a reputational interest is a much more natural understanding of the justification for defamation law. The Court should adopt reputation protection, which involves deterrence, not mere compensation, as the interest justifying defamation laws. As the Court explained in Rosenblatt v. Baer,135×135. 383 U.S. 75. "underl[ying] the law of defamation [is an] interest in preventing and redressing attacks upon reputation."136×136. Id. at 86 (emphasis added). Would one prefer an ideal world in which every victim of defamation was compensated or one in which defamation law deterred all defamation before it took place, thus protecting all individuals' reputations? More realistically, the objective of defamation law should be reducing instances of defamation as much as possible while compensating individuals who are nonetheless defamed.137×137. To be sure, this consideration might require a tradeoff between overall reductions in defamation and increases in the proportion of those who are defamed but do not receive compensation. Analogously, the interest underlying "battery" is not merely securing a remedy for those who have been battered but also reducing the occurrence of that tortious action.138×138. See John C.P. Goldberg, Twentieth-Century Tort Theory, 91 Geo. L.J. 513, 525 (2003). This distinction matters because it expands the denominator: if one contemplates a broader interest than compensation alone, different laws may pass or fail constitutional muster. For instance, as argued below, § 230 does limit compensation, but the law mitigates this limitation because it encourages websites to remove defamation. The net effect on a general reputational interest is greater than the effect on compensation. When a legitimate interest is artificially narrowed, it can promote the constitutionality of laws that could fail as rights-infringing under a more naturally broad interest.139×139. In a sense, allowing for a broader view of the interest at stake requires policies that can accomplish the same "level" of interest fulfillment but with less rights abridgement.
Intermediary defamation liability does not serve this interest well because it would not significantly reduce defamation beyond the status quo. First, in the status quo, many websites moderate their content and remove defamatory content even without the threat of intermediary liability.140×140. Klonick, supra note 42, at 1601, 1615. Some may wonder: If websites will collaterally censor when facing intermediary liability, why will websites not be effective in taking down defamation? And if websites already moderate, why is there not already collateral censorship such that imposing liability will not make a dramatic difference? This Note acknowledges both that intermediary liability may lead some websites to take down marginally more defamation and also that websites already limit the constitutionally protected speech of their users. However, it also posits that while websites already target defamation, increased liability will increase the proportion of protected speech that is removed in an effort to reduce defamation. Websites already make significant efforts to remove defamation. As liability and then moderation increase, there would be diminishing marginal returns to defamation reduction and correspondingly increasing marginal collateral censorship of constitutionally protected content. In order to be sure all defamatory content was removed, websites would remove much lawful content. Therefore, as the marginal gains to defamation reduction decelerate, collateral censorship accelerates. If intermediary liability effectively removes a significant amount of defamation, it would come at the cost of very dramatic collateral censorship. On the other hand, a website's independent decision to take down content that is not illegal is not collateral censorship but merely an editorial decision (at least from a constitutional perspective). Websites make this decision because of "a sense of corporate social responsibility, but also, more importantly, because their economic viability depends on meeting users' speech and community norms."141×141. Id. at 1625. Websites have significant existing incentives to remove defamatory material. And, "[b]ecause they seek to please their customers, intermediaries are more likely than courts to develop content standards that conform to basic community values."142×142. Kosseff, supra note 40, at 153. Second, some defamation may be persistent in the face of intermediary liability. Consider, for instance, the extreme amount of copyright infringement that persists on the internet even though federal law imposes liability on intermediaries for copyright infringement committed by their users.143×143. See Annemarie Bridy, Is Online Copyright Enforcement Scalable?, 13 Vand. J. Ent. & Tech. L. 695, 709 (2011). Persistent users will often be able to disseminate whatever information they want by using multiple accounts, anonymous accounts, or other websites. Certain bad-actor websites will also persist by remaining outside the jurisdiction of U.S. courts.144×144. See Alan M. Trammell & Derek E. Bambauer, Personal Jurisdiction and the "Interwebs," 100 Cornell L. Rev. 1129, 1188–89 (2015). Third, intermediary liability could lead to less of a reduction in defamation because some websites will meet the "Moderator's Dilemma"145×145. Eric Goldman, Congress Probably Will Ruin Section 230 This Week (SESTA/FOSTA Updates), Tech. & Marketing L. Blog (Feb. 26, 2018), https://blog.ericgoldman.org/archives/2018/02/congress-probably-will-ruin-section-230-this-week-sestafosta-updates.htm [https://perma.cc/NCH4-JRDG]. posed by Stratton Oakmont by taking a more hands-off approach to content. In other words, instead of attempting to avoid liability by overcensoring their users, they will reduce the screening they engage in to avoid acquiring knowledge that might subject them to liability.146×146. Zeran v. Am. Online, Inc., 129 F.3d 327, 333 (4th Cir. 1997). If they otherwise would have moderated content and removed some defamation, this choice renders defamation law less effective.
Those who have been defamed still retain various tools that may mitigate the harms of defamation. Section 230 does not prevent a defamed person from engaging in counterspeech.147×147. Perzanowski, supra note 86, at 860–61. Nor does it prevent plaintiffs from suing the party that originally defamed them.148×148. See Zeran, 129 F.3d at 330 ("None of this means, of course, that the original culpable party who posts defamatory messages would escape accountability."). In fact, an empirical study found that in a majority of § 230 cases, plaintiffs "were able to identify and sue the original source of the content that caused them harm."149×149. Ardia, supra note 24, at 382; see id. at 486–88, 493. Additionally, the same study revealed that even if potential plaintiffs do not recover in court, they are often successful in getting the content in question removed.150×150. Id. at 489. While these options are sometimes of limited efficacy, they are at minimum marginally mitigating.
The considerable collateral censorship that intermediary liability would cause is not worth the meager benefit to the reputational interest such liability might provide. The fact that not all plaintiffs could achieve compensation is insufficient to reject this rule — New York Times has the same consequence. As the Court there explained, "erroneous statement is inevitable in free debate, and . . . it must be protected if the freedoms of expression are to have the 'breathing space' that they 'need . . . to survive.'"151×151. New York Times Co. v. Sullivan, 376 U.S. 254, 271–72 (1964) (second omission in original) (quoting NAACP v. Button, 371 U.S. 415, 433 (1963)). The Court creates broad prophylactic rules, "breathing space," to protect the freedom of expression through intentional overenforcement of the constitutional right.152×152. See Fallon, supra note 64, at 63; Levinson, supra note 75, at 902 & n.186. Gertz consciously devised an "accommodation of the competing values at stake in defamation suits,"153×153. Gertz v. Robert Welch, Inc., 418 U.S. 323, 348 (1974). and "attempt[ed] to reconcile state law with a competing interest grounded in the constitutional command of the First Amendment."154×154. Id. at 349. To this analysis must be added the Court's more recent statements on the importance of internet speech and the need for restraint in regulating it.155×155. See Packingham v. North Carolina, 137 S. Ct. 1730, 1735–36 (2017). Given the new "relationship between the First Amendment and the modern Internet," the Court has warned that it "must exercise extreme caution before suggesting that the First Amendment provides scant protection."156×156. Id. at 1736. For the First Amendment, intermediary liability imperils a significant amount of constitutionally protected speech through the collateral censorship explained above. Collateral censorship may be even more troublesome than the self-censorship feared in New York Times because the censored speakers do not themselves decide when to refrain from speaking.157×157. Wu, supra note 91, at 304. For the interest in enforcing defamation law, imposing intermediary liability will be of limited utility because websites already moderate content, much defamation will persist in the face of intermediary liability, and intermediary liability might encourage some websites to decrease their moderation. The Court must require confidence in the benefits of defamation law, especially when the speech at stake may be so valuable. Here, the gains for defamation law are doubtful whereas the harms to speech are significant. Therefore, under the Court's defamation, collateral censorship, and internet speech case law, the First Amendment requires the prophylactic rule of § 230.
Applying the First Amendment in the untrodden ground of (1) internet (2) intermediary (3) defamation liability combines three areas of doctrine. By (1) recognizing the value and vulnerability of internet speech (Reno and Packingham), (2) identifying the First Amendment harm — collateral censorship — that intermediary liability imposes (Smith), and (3) employing the framework the Court uses to evaluate the constitutionality of defamation laws (New York Times and Gertz), the optimal constitutional rule comes into focus. To be sure, Packingham merely lauded internet speech, Smith rejected only strict liability, and New York Times calibrated a mental state (actual malice) and not secondary liability. However, § 230's rule is the best extension of these precedents into the new context of internet intermediary defamation, for the reasons detailed above.
By way of framing potential critiques of § 230: as Cathy Gellis brilliantly explains, “§ 230 is potentially in jeopardy of becoming a victim of its own success,” because its benefits are less salient than are particular instances of defamation.158×158. Gellis, supra note 25. As she notes, “§ 230 has done so well creating a new normalcy that it’s much harder to see just how much it has allowed to go right,” such that “when things do go wrong . . . we are always at risk of letting our outrage at the specific injustice cause us to be tempted to kill the golden goose by upending something that on the whole has enabled so much good.”159×159. Id.
Some might argue that § 230 unacceptably creates a different constitutional standard for online speech than for offline speech.160×160. See Jenna K. Stokes, The Indecent Internet: Resisting Unwarranted Internet Exceptionalism in Combating Revenge Porn, 29 Berkeley Tech. L.J. 929, 930–31 (2014). But the proposed rule would be equally desirable in truly analogous offline contexts. More importantly, the Court has been willing to set different First Amendment rules for different forms of media based on their different factual contexts.161×161. See Thomas W. Hazlett et al., The Overly Active Corpse of Red Lion, 9 Nw. J. Tech. & Intell. Prop. 51, 62 (2010). The Court treats the regulation of adult content, for example, differently across media such as newspapers, broadcast, and cable.162×162. See id. (comparing Reno v. ACLU, 521 U.S. 844 (1997), with United States v. Playboy Entm’t Grp., Inc., 529 U.S. 803 (2000)). Much of this line drawing rests on sound factual distinctions among the various types of media. Here, for instance, internet intermediary liability would be less successful than offline intermediary liability in reducing defamation and is therefore less constitutionally desirable. And, as the Court has explained, given the relatively new “relationship between the First Amendment and the modern Internet,” it “must exercise extreme caution before suggesting that the First Amendment provides scant protection.”163×163. Packingham v. North Carolina, 137 S. Ct. 1730, 1736 (2017).
Some critics of § 230 argue that the statute has unacceptable distributional consequences. Professor Mary Anne Franks, in particular, has written thoughtfully about the concern that § 230 may shield defamation that “disproportionately burden[s] vulnerable private citizens including women, racial and religious minorities, and the LGBT community.”164×164. Mary Anne Franks, Moral Hazard on Stilts: Zeran’s Legacy, Law.com: The Recorder (Nov. 10, 2017, 8:34 AM), https://www.law.com/therecorder/sites/therecorder/2017/11/10/moral-hazard-on-stilts-zerans-legacy/ [https://perma.cc/RE3X-SHHA]. This Note accepts that claim. First Amendment doctrine, however, is not necessarily concerned with disproportionately distributed harm165×165. See R.A.V. v. City of St. Paul, 505 U.S. 377, 391–93 (1992). and may be particularly skeptical of laws explicitly aimed at remedying it.166×166. See Am. Booksellers Ass’n v. Hudnut, 771 F.2d 323, 324–25 (7th Cir. 1985) (striking down an ordinance prohibiting certain explicit material considered as discriminating against women), aff’d mem., 475 U.S. 1001 (1986). Yet the First Amendment should be particularly skeptical of laws that disproportionately hurt the speech of marginalized groups, and intermediary liability has this potential: it would hand a heckler’s veto to those who object to minority speech. Content moderation has already “shut down conversations among women of color about the harassment they receive online,” “censor[ed] women who share childbirth images in private groups,” and “disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya.”167×167. Corynne McSherry et al., Private Censorship Is Not the Best Way to Fight Hate or Defend Democracy: Here Are Some Better Ideas, Electronic Frontier Found. (Jan. 30, 2018), https://www.eff.org/deeplinks/2018/01/private-censorship-not-best-way-fight-hate-or-defend-democracy-here-are-some [https://perma.cc/ULV6-UX2Z]. Intermediary liability would increase websites’ incentive to cautiously accede to takedown requests targeting vulnerable private citizens. Liability may also increase the use of moderation algorithms, and “[d]ecisions based on automated social media content analysis risk further marginalizing and disproportionately censoring groups that already face discrimination.”168×168. Duarte et al., supra note 95, at 4. While marginalized communities may be particularly vulnerable to online defamation, they are also particularly vulnerable to the collateral censorship that intermediary liability would produce. In addition, even if repealing § 230 would generally benefit defamation plaintiffs, it is unclear whether marginalized plaintiffs would share in that benefit: given the cost of litigation, the most marginalized citizens are the least likely to be able to take advantage of a new liability regime. Most importantly, as argued above, collateral censorship is a major threat to vulnerable voices online. It is therefore at best uncertain which regime has superior distributional consequences.
IV. The Implications of a Constitutional Rule
Several implications flow from the idea that the First Amendment requires internet intermediary liability protection. First, regardless of whether one is an internet exceptionalist,169×169. See generally Mark Tushnet, Internet Exceptionalism: An Overview from General Constitutional Law, 56 Wm. & Mary L. Rev. 1637 (2015). this Note demonstrates how constitutional questions regarding the internet occasionally require unique answers, at least because of dramatically changed factual circumstances: the volume of internet speech and its resistance to regulation produce a potentially surprising result for defamation law. Second, understanding § 230 as coextensive with the constitutional requirement helps explain why courts have generally taken a broad view of the statute and consistently held against defamation claims. It also might explain why courts at first provided broad statutory protection against defamation claims and only later grew more reluctant in cases where speech seems less directly implicated, such as failure-to-warn claims. Third, recognizing the First Amendment as requiring § 230 shows how the statute resembles other federal statutes that would now likely state the rule the Constitution itself requires.170×170. Consider the laws that govern administrative or civil procedure and procedural due process, the fair use doctrine and the First Amendment, and civil or voting rights laws and the Reconstruction Amendments. Such statutes demonstrate how Congress can enforce constitutional law before the courts do, and how statutory experimentation can yield enduring norms. Fourth, in new cases at the edge of § 230’s protections, this First Amendment underpinning provides a rationale, perhaps via constitutional avoidance, for interpreting the immunity broadly. Fifth, § 230 covers more claims than defamation; if the First Amendment requires intermediary liability protection from defamation suits, other claims may also be implicated. Sixth, though this Note argues for shielding certain editorial decisions of websites, this legal argument should not preclude public debate over their practices. As discussed, many websites laudably expend resources seeking to remove defamation, but many should do more to provide a “fair opportunity to participate” and “direct accountability.”171×171. Klonick, supra note 42, at 1603. Finally, if Congress amends or repeals § 230,172×172. Eric Goldman, Senate’s “Stop Enabling Sex Traffickers Act of 2017” – And Section 230’s Imminent Evisceration, Tech. & Marketing L. Blog (July 31, 2017), http://blog.ericgoldman.org/archives/2017/07/senates-stop-enabling-sex-traffickers-act-of-2017-and-section-230s-imminent-evisceration.htm [https://perma.cc/KF9B-TN7K]; see also Elliot Harmon, Amended Version of FOSTA Would Still Silence Legitimate Speech Online, Electronic Frontier Found. (Dec. 11, 2017), https://www.eff.org/deeplinks/2017/12/amended-version-fosta-would-still-silence-legitimate-speech-online [https://perma.cc/2YB4-ZYWL]. courts should be willing to step in with the First Amendment if warranted.
This Note finds enduring constitutional footing for § 230.173×173. Some may argue that, instead of basing this protection in First Amendment doctrine, the protection should be internal to defamation law. As a policy matter, such tort-law changes may be desirable alongside the constitutional doctrine. But a constitutional rule is preferable to a solution based purely on the common law of torts: a uniform national rule gives more confidence to intermediaries and reduces litigation costs, thus decreasing the chance of collateral censorship. Given the risk of collateral censorship and the meager gains in stopping defamation that an alternate rule would produce, the First Amendment cannot permit holding websites liable for the defamation of their users. When and if the time comes, courts should recognize the importance of this protection and hold that the Constitution provides it.