
The Amendment the Court Forgot in Twitter v. Taamneh

There was something conspicuously absent from the courtroom when the Supreme Court heard arguments in Twitter v. Taamneh last week. For decades now, the Court has been accused of weaponizing, or “Lochnerizing,” the First Amendment by extending free speech protections so far into so many areas of ordinary law that it has become something of an all-purpose deregulatory device. And yet, last Wednesday, in what could be an incredibly consequential case for freedom of speech online, members of the Court appeared to forget about the First Amendment almost entirely. Indeed, the word “speech” was not uttered a single time during oral argument in Taamneh, and the First Amendment came up only once, in passing.

And yet, Taamneh is a case with speech at its center. At issue is whether tech companies should bear liability for the use of their platforms by terrorist groups. This could incentivize platforms to take down vastly more speech — and not just terrorist speech — than they currently do, in order to avoid even the chance of liability. For this reason, Taamneh, like its companion case Gonzalez v. Google, could dramatically reshape the internet. These cases have generated intense interest among free speech litigators, civil rights groups, and scholars, in part because the stakes of a bad decision for digital free expression are high in either case. Yet on the case’s free speech ramifications, the arguments in Taamneh were unsettlingly silent.

The question Taamneh raises is whether platforms can be found liable for aiding and abetting terrorist acts under section 2333 of the Anti-Terrorism Act (ATA). The Taamneh plaintiffs’ theory is that Twitter aided and abetted a 2017 terrorist attack in Turkey because it had general knowledge that ISIS used its platform for organizing and recruitment purposes and did not do enough to remove all ISIS content. This, the plaintiffs argued, helped ISIS become “the most feared terrorist group in the world” and substantially assisted the group in carrying out the 2017 (and presumably other) attacks.

From this description, it should be clear that the Taamneh plaintiffs’ theory of aiding and abetting liability is incredibly broad; the alleged causal chain between Twitter’s actions and the attack in Turkey is extremely attenuated. No one disputes that ISIS made effective use of Twitter for purposes of propaganda and recruitment. But, likewise, no one disputes that Twitter had an extensive content moderation program dedicated to removing terrorist content. Furthermore, the plaintiffs provided no evidence that the ISIS member who carried out the 2017 attack ever had a Twitter account, or that Twitter had knowledge of any particular ISIS content it failed to remove, or that the service was used in the planning of the particular attack.

The argument in Taamneh therefore focused largely on questions of statutory construction and general tort principles, with the Justices pushing on what level of knowledge and assistance is necessary for a finding of liability under the ATA. Much of this back-and-forth took the form of testing different analogies: Was what the platforms did like a bank giving a terrorist a bank account, knowing that he is a member of a foreign terrorist organization? Or like a gun dealer selling a known gang member a gun? Or like a Chinese restaurant selling that gangster a meal? The Justices kept throwing out different hypotheticals as they tried to work through the difficult questions of statutory interpretation raised by the convoluted language of the ATA.

But to focus solely on the issues of statutory interpretation is a mistake. Platforms are not like banks, or gun dealers, or Chinese restaurants in one important respect: they truck in speech, not money or other goods. And so while it may be true, as one scholars’ brief argued, that Congress intended to create “sweeping secondary liability” under the ATA, there are also important free speech interests implicated by the question of platform liability that are not implicated — as directly at least — when Congress makes banks liable for aiding and abetting terrorism or regulates gun dealers. A broad interpretation of aiding and abetting under the ATA as applied to social media platforms — in particular, the very broad interpretation the Taamneh plaintiffs argue for — would almost certainly have a profound (and negative) impact on the diversity of speech online.

Perhaps many would shrug this off if it were simply about terrorists’ speech interests. But when thinking about what liability intermediaries should be exposed to for carrying others’ speech, it is important to be cognizant of platforms’ incentive structures and how those structures shape their risk tolerance for carrying non-mainstream speech. As Professor Seth Kreimer observed nearly two decades ago, online intermediaries are the “weakest link” in the complex system that protects free expression in the public sphere because they have limited incentives to avoid overcensorship. The marginal benefit speech intermediaries receive from hosting any particular piece of speech is usually minimal, but liability of the kind the plaintiffs are arguing for in Taamneh makes the risk of failing to remove even a single needle in the haystack very high. As Daphne Keller put it, “[t]wenty years of experience . . . tells us that when platforms face legal risk for user speech, they routinely err on the side of caution and take it down.” Combine these incentive structures with the difficulties of moderating content at the scale of a major internet platform (hundreds of millions of tweets a day) and the inability of artificial intelligence content moderation tools to take context into account, and the result is, almost certainly, the loss of a great deal of valuable speech from the internet. In particular, it would mean the loss of valuable speech from marginalized and vulnerable communities.

Indeed, platforms are already incentivized to take down valuable speech by the specter of liability that currently hangs over them as a result of the legal uncertainty about the reach of the ATA, and as a result of other countries’ laws. For example, platforms routinely and disparately suppress content from Palestinians and their supporters, including content that alleges or describes human rights abuses. In one case, Instagram removed hashtags about the Al-Aqsa Mosque in Jerusalem because its content moderation system mistakenly associated the site’s name with a terrorist organization. In another instance, YouTube removed tens of thousands of hours of videos that documented war crimes in Syria. There are many possible reasons for such mistakes, but one is surely that legal risk incentivizes platforms to overmoderate in areas or languages that have higher volumes of content from groups classified as foreign terrorist organizations. That is: many nonterrorists are silenced because they speak Arabic, or live in Gaza, or because their content is flagged by an algorithm for removal because it possesses some other feature that is, wrongly or rightly, associated with terrorism.

Despite the very obvious implications of the plaintiffs’ reading of the ATA for speech online, no one — none of the Justices, nor Twitter’s lawyer Seth Waxman, nor the Deputy Solicitor General who argued as amicus curiae in support of Twitter — spent any time discussing the free speech implications of the case. In fact, during the almost three hours of argument, the only person to raise the possibility that the First Amendment might have something to say about the application of aiding and abetting liability to social media platforms was the plaintiffs’ lawyer — who invoked it as a reason why the Court shouldn’t be too concerned about the free speech implications of a broad reading of the ATA. The lawyer, Professor Eric Schnapper, was asked by Justice Kavanaugh whether his reading of the ATA would mean television stations like CNN would be exposed to liability for interviews they aired of known terrorists like Osama bin Laden. Schnapper understood that the only good response to this question was a no. But bereft of a principled interpretive reason why liability would not extend so far, he explained that “the First Amendment is going to solve that” problem. That is — don’t worry about the free speech implications of imposing liability now; we can sort those out later. Justice Kavanaugh did not press the point, the argument moved on, and the free speech implications of the case never came up again.

The disregard for free speech interests in the Taamneh arguments was even more disquieting when it came to the discussion of the relationship between platforms and governments. It was repeatedly suggested throughout the arguments that the ATA’s knowledge standard would be satisfied if a government actor notified a platform about specific accounts it suspected were involved in terrorist activity. Seth Waxman volunteered (rather surprisingly) that if “the Turkish police, the Istanbul police come and say there are 10 accounts, 10 Twitter accounts that appear to be involved in planning some sort of terrorist attack here” and Twitter did not take them down, that would be enough to establish a culpable level of knowledge under the ATA. For anyone who has followed the history of online-speech debates, this was a jaw-dropping moment.

Free speech concerns should be at their highest when government actors are involved in the suppression of speech. One of the longest-standing free speech concerns of the platform era is the fear of government jawboning — that is, that state actors will use informal pressure on social media companies to get them to censor speech without having to comply with constitutional constraints on state power. This is not just one of the oldest concerns about free speech online — it is also one of the most prominent today. Just a few weeks before the arguments in Taamneh, the House Oversight Committee held a six-hour hearing about whether Twitter and the government colluded to suppress a New York Post story about Hunter Biden’s laptop in the lead-up to the 2020 U.S. presidential election. Waxman’s reference to Turkish law enforcement authorities should have made the potential dangers of this reading of the ATA even starker. Indeed, at virtually the same moment that Waxman made his concession in the Taamneh arguments, the Turkish government was bringing terrorism charges against reporters and suppressing criticism of its handling of the response to the recent earthquakes as “disinformation.”

But if any members of the Court had concerns about platforms taking marching orders to suppress speech based on pure government say-so, they did not show it in the Taamneh arguments. Indeed, they seemed more concerned about the opposite — that a platform might not heed all the government warnings it received about terrorist speech.

Of course, Taamneh is not a constitutional case, but a statutory one. And, although civil society organizations raised First Amendment arguments in the brief they filed in support of Twitter, the people in the courtroom showed little interest in ventilating the very obvious constitutional implications of the case.

One possible explanation is that the Justices assumed that these issues could be more properly addressed by a direct First Amendment challenge to the aiding and abetting provisions of the ATA at some future date. It is not obvious that this is a good assumption to make, however. An innocent social media user whose post gets taken down because the platforms want to avoid liability under the ATA will likely not know the reason their post is removed, let alone have standing to challenge the platform’s decision in court. And while CNN has an obvious interest in using the First Amendment to defend itself if it is sued for airing an interview with bin Laden, it’s much less obvious that platforms have adequate incentives to bring such arguments on behalf of their users’ speech, for all the reasons laid out above. Eric Schnapper may believe the “First Amendment will solve” the free speech problems raised by his argument, and perhaps members of the Court believe it too, but it is not at all clear that this is true as a practical matter.

There is another possible, and more dispiriting, explanation for why free speech interests did not come up last week. The total lack of interest the Court showed towards the free speech questions raised by Taamneh might just reflect the Justices’ assumption that the repression of foreign speech — particularly terrorist-adjacent foreign speech — simply does not raise important-enough First Amendment concerns. This, of course, is the implication of the Court’s infamous 2010 decision in Holder v. Humanitarian Law Project. It is hard to believe that if this case concerned the speech of U.S. citizens, the First Amendment could have been so studiously ignored.

Whatever the reason, the lack of any engagement with the First Amendment produced a set of oral arguments that failed to grapple with the true nature of the problem before the Court — in essence, how to protect freedom of speech online while also protecting against terrorism. That is no easy tightrope to walk, but it is the core difficulty that this case presents. For all the First Amendment expansionism the Court has overseen in recent years, the Taamneh arguments reflected an unduly constricted view of the Court’s role in safeguarding free speech values.