During the past decade, the problems involving information privacy – the ascendance of Big Data and fusion centers, the tsunami of data security breaches, the rise of Web 2.0, the growth of behavioral marketing, and the proliferation of tracking technologies – have become thornier. Policymakers have proposed and passed significant new regulation in the United States and abroad, yet the basic approach to protecting privacy has remained largely unchanged since the 1970s. Under the current approach, the law provides people with a set of rights to enable them to make decisions about how to manage their data. These rights consist primarily of rights to notice, access, and consent regarding the collection, use, and disclosure of personal data. The goal of this bundle of rights is to provide people with control over their personal data, and through this control people can decide for themselves how to weigh the costs and benefits of the collection, use, or disclosure of their information. I will refer to this approach to privacy regulation as “privacy self-management.”
Privacy self-management takes refuge in consent. It attempts to be neutral about substance – whether certain forms of collecting, using, or disclosing personal data are good or bad – and instead focuses on whether people consent to various privacy practices. Consent legitimizes nearly any form of collection, use, or disclosure of personal data.
Although privacy self-management is certainly a laudable and necessary component of any regulatory regime, I contend that it is being tasked with doing work beyond its capabilities. Privacy self-management does not provide people with meaningful control over their data. First, empirical and social science research demonstrates that there are severe cognitive problems that undermine privacy self-management. These cognitive problems impair individuals’ ability to make informed, rational choices about the costs and benefits of consenting to the collection, use, and disclosure of their personal data.
Second, and more troubling, even well-informed and rational individuals cannot appropriately self-manage their privacy due to several structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses, further limiting the effectiveness of the privacy self-management framework.
In addition, privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically – not merely at the individual level. As several Articles in this Symposium demonstrate, privacy has an enormous social impact. Professor Neil Richards argues that privacy safeguards intellectual pursuits, and that there is a larger social value to ensuring robust and uninhibited reading, speaking, and exploration of ideas. Professor Julie Cohen argues that innovation depends upon privacy, which is increasingly under threat as Big Data mines information about individuals and as media-content providers track people’s consumption of ideas through technology. Moreover, in a number of cases, as Professor Lior Strahilevitz contends, privacy protection has distributive effects; it benefits some people and harms other people. Privacy thus does more than just protect individuals. It fosters a certain kind of society, since people’s decisions about their own privacy affect society, not just themselves. Because individual decisions to consent to data collection, use, or disclosure might not collectively yield the most desirable social outcome, privacy self-management often fails to address these larger social values.
With each sign of failure of privacy self-management, however, the typical response by policymakers, scholars, and others is to call for more and improved privacy self-management. In this Article, I argue that in order to advance, privacy law and policy must face the problems with privacy self-management and start forging a new direction.
Any solution must confront a complex dilemma with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, but the most apparent solution – paternalistic measures – even more directly denies people the freedom to make consensual choices about their data. Paternalism would be easy to justify if many uses of data had little benefit or were primarily detrimental to the individual or society. But many uses of data have benefits in addition to costs, and individuals could rationally reach opposite conclusions regarding whether the benefits outweigh the costs. Making the choice for individuals restrains their ability to consent. Thus, to the extent that legal solutions follow a path away from privacy self-management and toward paternalism, they are likely to limit consent. A way out of this dilemma remains elusive.
Until privacy law recognizes the true depth of the difficulties of privacy self-management and confronts the consent dilemma, privacy law will not be able to progress much further. In this Article, I will propose several ways privacy law can grapple with the consent dilemma and move beyond relying too heavily on privacy self-management.