“Sunlight is said to be the best of disinfectants”1 — or so the adage goes. At the time, Justice Brandeis’s words described a framework for limiting the monopoly power of investment banks and wealth trusts through compelled disclosures.2 Transparency, he reasoned, “will aid the investor in judging of the safety of the investment” by reducing information asymmetries in the marketplace.3 A century later, the maxim seems to have found its way into social media–regulation circles, with think tanks and regulators calling for transparency in how these companies design their algorithmically curated environments.4 Those advocating for regulation in the space argue that these companies’ ability to control “troves” of sensitive private data,5 and their inability to regulate dangerous speech,6 demand government oversight.7 In 2023, Senator Chris Coons sought to answer such calls with the (re)introduction of the Platform Accountability and Transparency Act8 (PATA) — a bill granting academics and researchers broad access to the internal datasets of the social media platforms within its scope.9 However, while calls for regulation may be warranted, Congress should be mindful of how it answers. As it stands, PATA likely suffers from constitutional infirmities that raise the specter of government censorship. Instead, lawmakers should create public-private partnerships with platform companies that focus on promoting self-regulation and industry-wide standards for user safety, transparency, and accountability.
Transparency is the buzzword of the day in social media regulation circles.10 And rightfully so. Like the wealth trusts of Justice Brandeis’s day, platform companies play the role of gatekeepers11 in the digital public square,12 yet we know little about their black-box operations.13 Platforms like TikTok and Instagram “offer immeasurable opportunities to connect public leaders with constituents, businesses with consumers, and communities across the globe.”14 Yet they have been at the center of very public catastrophes, including a genocide in Myanmar,15 a terrorist attack in Christchurch,16 and a riot at the U.S. Capitol on January 6.17 They evade liability,18 thanks, in part, to immunity statutes like Section 230 of the Communications Decency Act of 199619 and the difficulty in drawing a connection between the platforms’ design choices20 and the real-world harm they allegedly create.21 Algorithmic transparency laws — regulations that require the “disclosure of information about algorithms to enable monitoring, checking, criticism, or intervention by interested parties”22 — have been proposed as a way to begin drawing those connections, allowing regulators and researchers to more fully understand how the platforms rank and amplify certain content.23 PATA was born out of this backdrop.
PATA was first introduced during the 117th Congress in December 2022 by U.S. Senator Chris Coons,24 and was billed as a “multipronged” approach to “create[] new mechanisms to increase transparency around social media companies’ internal data.”25 It required that the Federal Trade Commission (FTC) and the National Science Foundation (NSF) promulgate standards to ensure qualified researchers26 can develop qualified research projects27 and gain access to on-platform information in coordination with social media companies.28 The bill languished at the end of the 117th Congress29 but was reintroduced in the 118th.30 Now, the bill is awaiting review by the Senate Committee on Commerce, Science, and Transportation.31
PATA has three important provisions. First, the bill mandates a list of dataset disclosures that platforms must make available to the public on an ongoing basis.32 Specifically, this includes data on “[h]ighly disseminated content,”33 the platform’s ranking and design choices,34 and its content moderation practices.35 The goal is for these disclosures to give regulators and researchers a way of knowing, for example, the potential causes, “prevalence[,] and size of the problem of hate speech, disinformation, incitement, child endangerment, and the like”36 — information the lack of which hampers attempts to identify how or why specific types of content appear in someone’s newsfeed.37
Next, PATA provides researchers with an opportunity to access internal datasets for research projects approved by the NSF.38 Prior to the social media age, social scientists were able to freely use public data related to “government statistics, survey data, or other kinds of data” to observe and report on social phenomena as they were happening, but “[n]ow, most of the data, which is relevant to contemporary social problems, is locked up” in the platforms.39 PATA requires social media platforms to make that data available upon request to public interest–focused qualified researchers.40 And if they refuse, PATA provides judicial review of the platforms’ noncompliance.41
Finally, PATA provides safe harbor protections to both researchers and the platforms when data transfers occur.42 For researchers, “[n]o civil claim will lie, nor will any criminal liability accrue . . . for collecting covered information as part of a news-gathering or research project on a platform, so long as,” among other requirements, that research is in the public interest, it follows the privacy and security standards promulgated by the FTC, and it does not “materially burden the [platform’s] technical operation.”43 For platform companies, no “cause of action . . . arising solely from the release of qualified data . . . in furtherance of a qualified research project may be brought against any platform that complies with [PATA].”44
PATA has received mixed reactions. Scholars like Professor Nathaniel Persily welcome PATA,45 explaining that “[i]f you force the platforms to open themselves to outside review, it will change their behavior[;] . . . [t]hey will know they’re being watched.”46 Professor Daphne Keller has been more cautious, explaining that while she is a cheerleader for platform transparency, “in practice [it] is complicated and messy” and could lead to a reduction in “people’s legal protections from state surveillance.”47 Jim Harper, a Senior Fellow at the American Enterprise Institute, argues that “[a]n unconstrained disclosure mandate may be unconstitutional,” and could make content “moderation more difficult” or “degrade the experience for platforms’ users.”48
Though many praise PATA as a welcome legislative intervention from a historically ambivalent Congress, its constitutional implications raise some concerns. PATA’s arrival is part of a pattern of laws seeking to regulate platform companies by mandating political advertisement disclosures,49 policies addressing specific viewpoints like hate speech,50 and individualized notices and appeals processes accompanying content moderation decisions.51 While each of these laws has arguably advanced compelling governmental interests, some courts have ruled that they likely either compel or impermissibly burden speech.52 So, instead of legislation granting blanket transparency into platforms’ editorial practices, Congress should facilitate opportunities for public-private partnerships that enable the companies to develop self-regulated, industry-wide standards that promote user safety, transparency, and accountability.
States entered the great transparency debate well before PATA’s introduction. In 2018, Maryland passed the Online Electioneering Transparency and Accountability Act53 (OETA) to identify the source of political advertisements in response to Russia’s social media disinformation campaigns during the 2016 election.54 Washington passed a similar measure.55 Soon after, New York and California passed legislation that required platforms to document and disclose their content moderation policies and enforcement actions to combat the spread of hate speech or misinformation.56 Florida and Texas also joined the conversation, passing laws requiring platforms to publish detailed explanations about their content moderation rules.57 And Texas’s law further requires platforms to provide rights of appeal for those content moderation decisions and statistics on their content moderation practices (for example, the content area, the type of review performed, and appeal rates).58
However, many of these laws have faced constitutional scrutiny. For example, several of the laws have been challenged on the theory that they impermissibly compel speech. In Washington Post v. McManus,59 the Fourth Circuit held that Maryland’s OETA was likely unconstitutional because its disclosure and inspection requirements both compelled speech and singled out political speech.60 Platforms had to create searchable advertisement libraries on their websites with specific data about the advertisement purchaser, and had to make that data available upon request to the government, “when they otherwise would have refrained.”61 Similarly, in Volokh v. James,62 a federal district court found that plaintiffs were likely to succeed in showing that New York’s Hateful Conduct Law,63 while well-intentioned, was an unconstitutional speech compulsion because it required social media platforms to devise a hate speech policy consistent with New York’s statute, publish that policy on their websites, and create a mechanism to report such content.64 Thus, “at a minimum,” the law “compel[led] Plaintiffs to speak about ‘hateful conduct’”65 and “‘depriv[ed them] of their right to communicate freely on matters of public concern’ without state coercion.”66
Even more troubling is that these same laws raise the specter of Big Brother and could create a coercive effect on platforms’ regulation of internet users’ speech.67 The McManus court reasoned that OETA’s inspection requirement, in particular, places the government in “an unhealthy entanglement with”68 platform companies because “it lacks any readily discernible limits on the ability of government to supervise” platform companies’ editorial judgments.69 Under such a regime, OETA could allow the government to “chill speech” in a manner “the Supreme Court would not countenance.”70 The same was true in Volokh, where Judge Carter recognized that New York’s Hateful Conduct Law “fundamentally implicates the speech of the [social media] networks’ users” and could easily “make social media users wary about the types of speech they feel free to engage in”71 as well as make the platform “less appealing to users who intentionally seek out spaces where they feel like they can express themselves freely.”72
Elements of PATA have the potential to raise similar First Amendment concerns. First, the bill puts forward many requirements similar to Maryland’s OETA. For example, PATA mandates an easily navigable database that hosts disclosures about the content of all advertisements on the platform, who paid for the advertisement, the intended audience, and the advertisement’s reach.73 But it also goes further. It mandates that the FTC promulgate regulations requiring the disclosure of “all consumer-facing product features that made use of recommender or ranking algorithms,”74 “signals used as inputs to the described recommender or ranking algorithms, including an explanation of which rely on user data”75; data on highly disseminated content76; “information about the extent to which . . . content was recommended”77; who supplied the content78; and much more.79 And while PATA is unlike OETA in that its disclosure requirements are seemingly content neutral80 and thus potentially deserving of a lower tier of scrutiny,81 in practice, “no law [should] subject[] the editorial process to private or official examination merely to satisfy curiosity or to serve some general end such as the public interest.”82 PATA grants regulators authority to govern what Daphne Keller calls “speech about speech”83 — that is, even though the law may seek “purely factual and uncontroversial information”84 about the platforms’ operations, those “operations” are inherently editorial practices.85 The First Amendment counsels against forcing the platforms to express words that they may not have shared of their own volition.86
Second, while some may argue that PATA will change platform behavior for the better,87 the government’s potential for impermissible oversight is cause for concern. Instead of granting broad access privileges to the government, PATA places the government in a seemingly neutral role and effectively deputizes academic researchers as inspectors with access to the platforms’ editorial processes.88 This, however, is problematic, because under PATA the government retains effective control over the parameters of access to the platforms, evoking what the McManus court identified as an “unhealthy entanglement”89 with the platform’s operations. The government defines who a researcher is.90 The government defines what a research project is.91 The government bars judicial review “regarding whether a research application will be deemed a qualified research project.”92 And though the government restrains itself from seeking access to “qualified data and information” that has been provided to “a qualified researcher,”93 and qualified research projects must meet a high standard,94 nothing prevents researchers from voluntarily providing that data to the government.
Under PATA, the government can effectively fund and sanction a politically friendly media operation’s qualified research project into a company’s operational practices, effectively bypassing the First Amendment and sidestepping judicial review.95 This could give rise to a host of issues where, based on information derived from these qualified research projects, state officials use their police power to unconstitutionally coerce96 platform companies to remove certain speech, or certain users, to satisfy political ends.97 If PATA’s mandatory disclosures are a statutory front door into a social media platform’s editorial processes, its broad access requirement, under the aegis of the public interest, is an even more concerning backdoor.98
Given some of the uncertainty around PATA and the significant questions its passage would raise, Congress should seek less constitutionally intrusive avenues. One route might be to encourage public-private partnerships for the development of industry-wide standards that promote user safety — a concept that scholars like Newton Minow and Professor Martha Minow have expressed some support for.99 In a recent paper, they explain that “[t]hrough voluntary self-regulation . . . private industry-level organizations create rules and standards with which individual industry actors voluntarily comply.”100
Successful examples of public-private regulatory efforts abound. The financial industry uses a third-party organization to “promote transparency and compliance with ethical standards devised through its own rulemaking process” in coordination with the Securities and Exchange Commission.101 The FTC has also facilitated public-private self-regulation efforts for marketing in the alcohol industry102 and coordinated with the movie, gaming, and music industries to align their ratings systems on definitions “for movies of G, PG, PG-13, and R; [as well as] the label of ‘Mature’ rating for games; and the label of ‘Explicit’ for music.”103 Platform companies have taken on self-regulatory efforts in other parts of the world, with companies like Meta and TikTok voluntarily signing on to the European Commission’s Code of Practice on Disinformation.104 The Minows’ pragmatic approach urges lawmakers and platforms to pursue collaborative self-regulation, because even though it “is likely to advance the interests of the companies and benefit incumbents over new entrants, . . . it also can draw on the knowledge, resources, and flexibility of the private companies”105 in a way similar to the benefits gained from collaborations with the alcohol and entertainment industries.106 Together, these frameworks, along with the rise of independent, third-party organizations with expertise in the space and a commitment to tech accountability,107 can chart a more collaborative path forward to solving the challenges raised by social media regulation’s status quo.
If sunlight is the best disinfectant, PATA’s “electric light [may be] the most efficient policeman.”108 The law’s promise to provide academic researchers with transparency into the algorithmically curated environments that social media platforms have built illuminates a path toward tech accountability.109 And given the impact these companies have on our day-to-day lives, as well as the fact that they are implicated in public controversies like mass shootings, eating disorders, suicides, and countless other social ailments, it is clear that these companies cannot and should not regulate themselves without oversight. But the cure cannot be worse than the disease. And while sunlight might be the best disinfectant, when the government shines that light on constitutional rights, its efforts should be met with deep skepticism. Broad access to the platforms’ data could easily lead to chilling effects under the government’s (supervised) watch. Congress should focus its legislative power on encouraging self-regulated, industry-wide standards that promote user safety, transparency, and accountability.