There are many reasons people oppose government regulation of the various bits of software, hardware, and social glue that we call the internet. I write to respond to only one of these: the fear that regulation will have spillover effects and unintended consequences. Regardless of which side one takes in any number of debates about the regulation of the internet, one background view seems to be broadly held: law possesses the power to destroy the internet as we know it.
In communications law, net neutrality regulations are fearsome because they will kill investment in infrastructure.1 Modest proposals to limit online discrimination or online hate speech will scare away innovators and dry up venture capital.2 Copyright law will destroy everything good on the internet,3 and limitations on encryption will too.4
Apocalyptic predictions about the potential of regulation to kill the internet occur frequently in debates over proposals to protect privacy online. Critics warned that the modest self-regulatory effort to create a “Do Not Track” signal for the web would “kill the internet as we know it.”5 Opponents said something similar about a European measure to require consent for web tracking.6 Many people worried about the internet-wrecking potential of Europe’s modest implementation of the right to be forgotten in 2014’s Google Spain SL v. Costeja7 decision.8
These fears are unfounded. The internet is a resilient, self-healing system, thanks to the power of code.9 Software, as Professor Jonathan Zittrain points out, is a generative force unlike any other technology we have concocted to date.10 In Zittrain’s powerful telling, generativity is something we need to work proactively to protect, opposing efforts — whether by private actors or regulators — to turn our general computing machines into dumb appliances.11 My argument picks up where Zittrain’s leaves off, pointing out that this same generative power can act as an important check on the impact of regulation. An earlier observer of the internet, writing a decade before Zittrain, connected these dots between generativity and resilience in the face of regulation. In 1993, “[i]nternet pioneer” John Gilmore famously said: “The Net interprets censorship as damage and routes around it.”12 This early-internet brag seems difficult to square with the way many today treat the internet as fragile and susceptible to destruction through law. I believe Gilmore’s quote is as true today as it was when he first said it. The power of software to evade regulation is more than up to the challenge.
How does software “route around” a regulation? Constrained by a regulation that limits some deleterious aspect of a software system, a solo developer with a few hours can engineer her way around the new rule. A massive team of developers backed by a corporation with endless coffers can remake a global infrastructure to wend its way around a regulation.
Focus on a single step in the lifecycle of software development: recompilation, the act of compiling changed, human-readable source code back into machine-executable object code. From the point of view of a regulator of code, during the short time it takes to recompile software, everything can change. In that time, a programmer can reshape small worlds. In this way, software is nothing like the industrial processes it has begun to replace. To effect massive, structural, fundamental change to an operating code base, software developers need not erect new scaffolding, dismantle old structures, or create new blueprints — at least not in any literal sense. Coders use metaphors for every one of those industrial-era phrases,13 but we ought not be fooled by the metaphors. Coding is never easy, at least not at the industrial scale of today’s multimillion-line codebases. But compared to the industrial processes it tends to replace, coding is far more efficient and far less onerous, in a strict change-per-effort ratio.
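The compile-and-change cycle described above can be made concrete with a toy sketch (the policy function and its rules are invented for illustration, not drawn from any real system). Python’s built-in compile() turns source text into executable code, so a one-line change to the rule takes effect the moment the text is rebuilt:

```python
# Toy illustration of the speed of recompilation: the same "policy"
# function, rebuilt with one changed line, behaves differently everywhere
# it is invoked. The rule here is invented for illustration only.

old_source = "def allowed(request):\n    return True\n"
new_source = "def allowed(request):\n    return request != 'tracker'\n"

def build(source):
    """Compile source text into a live function -- the 'recompilation' step."""
    namespace = {}
    exec(compile(source, "<policy>", "exec"), namespace)
    return namespace["allowed"]

before = build(old_source)
after = build(new_source)  # seconds of work, no scaffolding or blueprints

print(before("tracker"))  # True  -- the old behavior
print(after("tracker"))   # False -- the new rule, one line later
```

The point of the sketch is the ratio of change to effort: a single edited line, once recompiled, alters the program’s behavior everywhere it runs.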
Consider, for example, the nearly perennial reports of the exhaustion of the 4.3 billion IP addresses specified by the internet’s earliest architects. Shortly after the birth of the web, the Internet Engineering Task Force (IETF) began planning for the eventual day when those four billion addresses would run out.14 But engineers devised techniques such as Network Address Translation15 and Classless Inter-Domain Routing,16 and some organizations returned gigantic allocations of address space.17 Today, the predictions of address exhaustion have quieted. Pundits predicted that email would die under the weight of spam.18 It didn’t, thanks in some measure to a law, the CAN-SPAM Act,19 but thanks also to people who changed their software with Sender Policy Framework,20 blacklists,21 and machine learning techniques.22 Some warned that the rise of YouTube and then Netflix meant that the internet would slow to a crawl.23 It didn’t, thanks to Content Delivery Networks24 and new contractual arrangements.25
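A back-of-the-envelope sketch illustrates both the scale of the address space and the trick behind Network Address Translation (the addresses, ports, and translate helper here are illustrative inventions, not any real network’s configuration):

```python
# IPv4 offers 2**32 (about 4.3 billion) addresses. NAT stretches that
# supply by letting many private hosts share one public address,
# distinguishing their connections by port number.
total_ipv4 = 2 ** 32
print(f"{total_ipv4:,}")  # 4,294,967,296

# A toy NAT table: (private_ip, private_port) -> port on one shared public IP.
public_ip = "203.0.113.1"  # an address reserved for documentation (RFC 5737)
nat_table = {}
next_port = 40000

def translate(private_ip, private_port):
    """Map an outbound connection to a unique port on the public address."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return (public_ip, nat_table[key])

# Three connections from two private hosts, one public address:
# no new global address space consumed.
print(translate("192.168.0.2", 5000))  # ('203.0.113.1', 40000)
print(translate("192.168.0.3", 5000))  # ('203.0.113.1', 40001)
print(translate("192.168.0.2", 5000))  # mapping reused: ('203.0.113.1', 40000)
```

This is, of course, a cartoon of what real NAT devices do, but it captures why a purely software-level workaround postponed an exhaustion problem baked into the internet’s original architecture.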
But do these stories about the power to adapt to technological challenges continue to hold when the requirements come not from the market but from regulators? We can model a well-targeted regulation as a simple software requirement: a feature the software ought to have, a behavior it ought not exhibit, or a challenge it ought to surmount. By abstracting away the fact that the requirement comes from a government body rather than from a user’s suggestion or a technical hurdle, we can study the episodes of self-healing the internet has displayed over the years.
Consider two examples of the resilience of code in the face of regulatory challenge. In the late twentieth century, the U.S. government sued Microsoft for trying to use its near monopoly in operating systems to unfairly harm competition in the new market for web browsers by bundling Internet Explorer into Windows 95. Putting to the side the merits of the underlying antitrust case, focus on the clash over the remedy: mandated unbundling. Microsoft protested throughout the proceedings that it would be an expensive if not impossible task to pull out the scattered bits of code that made up its browser. The government presented expert opinion to the contrary, pointing out that Microsoft could unbundle the browser with relative ease if ordered to do so by the court.26 In the end, Microsoft agreed to offer later versions of its operating system without a bundled browser, providing at least circumstantial evidence that its earlier objections were overstated.27
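The experts’ point, that a bundled component can often be severed cleanly, can be sketched as a hypothetical build flag (the component names and build_manifest function are invented for illustration and bear no relation to Windows’s actual architecture):

```python
# Hypothetical sketch of unbundling: one switch controls whether an
# optional component ships with the base system. Invented for
# illustration; not how any real operating system is built.
BASE_COMPONENTS = ["kernel", "filesystem", "shell"]

def build_manifest(include_browser):
    """Assemble the list of components to ship in a given build."""
    components = list(BASE_COMPONENTS)
    if include_browser:
        components.append("browser")
    return components

print(build_manifest(True))   # the bundled build
print(build_manifest(False))  # the unbundled build: browser omitted
```

Real unbundling is harder than flipping a flag, but the sketch captures the experts’ contention: where code is modular, removal is an engineering task, not an impossibility.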
Then consider an example from a still-simmering fight over privacy, the fight for an enforceable right to be forgotten in Europe. The Court of Justice of the European Union ordered Google to recognize this right by delisting search results that met the standard the Court articulated.28 Again, putting to the side the fundamental merits of such a right as well as the bureaucratic machinery necessary to adjudicate when the right should be recognized, consider only how little Google complained about the technological challenge. Granted, this is a far-from-straightforward story, as Google had already implemented code for deleting entries to comply with copyright takedown notices under the U.S. Digital Millennium Copyright Act (DMCA).29 It may be that Google simply repurposed that old code for this task. But even if Google had to implement its compliance mechanism from scratch, the code required likely would have been minimal. Just as important, given the generativity of code, Google could have implemented the change in a streamlined fashion. It could remove the content it was ordered to remove.
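To illustrate how minimal such compliance code can be, consider a hypothetical delisting filter (the query, URLs, and search function here are invented for illustration and bear no relation to Google’s actual systems): delisting amounts to subtracting a short list of ordered-removed URLs from the results returned for the relevant query.

```python
# Hypothetical delisting filter, invented for illustration only:
# results for a query are screened against a per-query set of URLs
# that a court or regulator has ordered delisted.
delisted = {
    "mario costeja": {"example.com/1998-auction-notice"},
}

def search(query, raw_results):
    """Return the raw results minus any URLs delisted for this query."""
    blocked = delisted.get(query.lower(), set())
    return [url for url in raw_results if url not in blocked]

results = search(
    "Mario Costeja",
    ["example.com/1998-auction-notice", "example.com/biography"],
)
print(results)  # ['example.com/biography']
```

The hard part of the right to be forgotten is the adjudication, deciding which entries belong in the blocked set; once that decision is made, the filtering itself is a few lines of code.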
Software blunts the power of regulation, but it does not insulate the internet entirely from law. Because people write software and because people are susceptible to things like subpoenas and prison cells, regulation can still be a terrible and powerful force. Gilmore’s “routing around” bromide implicitly acknowledged that internet regulation could have devastating local effects. The regulation will continue to operate on the people most directly within the sphere of the regulator’s power, and those people will be routed around too. To be routed around is to be isolated or left behind, to be excluded from the worldwide network-of-networks. If the regulation results in people being routed around because they are trafficking in caustic hate speech or invidious discrimination, this might be cause to celebrate. If instead the regulation squelches communications for oppressed people, we should worry. Local and direct effects matter a lot, and we should do everything we can to fight against internet regulation with harmful local effects. We still need to write sensible, targeted, and focused laws to fix the parts of the internet that are terrible.
This doesn’t mean we should stop debating the regulation of the internet and start enacting every idea any policymaker has, of course. To wade into very controversial waters, my argument means that SOPA30 and PIPA31 probably would not have blown up the internet as we know it.32 But I’m still very happy these very bad laws were defeated because they proposed very bad solutions to an exaggerated harm, resting atop really bad copyright policy, in a manner that was very badly implemented.33 The world and the internet are much better without these toxic laws. But the internet would’ve routed around those laws, too, just like it’s routed around other bad copyright laws in the past and will do so in the future.
Regulation can shape the internet, but it is not likely to kill it. If they are wisely designed, net neutrality laws will take a few narrow and innovation-dampening business arrangements off the table; antidiscrimination laws will decrease discrimination; and anti–hate speech measures will decrease hate speech. But the parts of the internet these laws do not directly touch will remain, as ever, generative, burbling founts of innovation and dynamism and economic growth.
The same goes for privacy law. Google responded to Costeja without catastrophic spillover effects or unintended technological consequences. The web survived the EU cookie directive and the FTC’s 2012 amendments to its rule implementing the Children’s Online Privacy Protection Act of 199834 (COPPA). The internet will continue to thrive even if the FCC enacts a bold new privacy rule, and even once Europe starts enforcing the new General Data Protection Regulation. If the local effects of these regulations are wise, and I happen to think they are in each of these cases, they will accomplish what they set out to accomplish, and the rest of the internet will route around the changes they effect.
Regulators should shake off their reluctance to take important actions addressing serious problems that lead to true harms, if that reluctance is based on fears of breaking the internet. The internet’s essential character and software core make it a powerfully resilient, self-correcting machine. Regulation is still today seen as damage and routed around. Regulatory action will be cauterized, and healthy parts of the internet will grow around the wound. Regulators need to move decisively and aggressively to restore their role in the evolution of the internet. This is powerful support for many privacy-related initiatives.
Let’s change the internet, for the better. Let’s encourage countries and states to serve as laboratories of change, testing theories for what might be a better internet or a worse internet. Let’s stop treating the internet like it’s a fragile figurine that we might break through rough handling. We couldn’t kill it if we tried.
* Professor of Law, Georgetown University Law Center. Thanks to James Grimmelmann and Alicia Solow-Niederman for their comments. Thanks to John Douglass for research assistance.