Harvard Law Review

Harvard Law Review Forum

Engineering an Election

Digital gerrymandering poses a threat to democracy

Marvin Ammori has offered a nuanced and persuasive paean to the lawyers of such firms as Twitter and Google, who helped to translate the values of free speech into the online environment during a decade when both the applicable legal principles and technologies were new and evolving.1 The work of these firms to defend open expression has dovetailed with their function of pairing information producers and consumers. But what happens should the interests of the firms and their customers diverge?

Consider a hypothetical involving a phenomenon I call “digital gerrymandering,” grounded in a real, and fascinating, empirical study facilitated through Facebook. In late 2010, Facebook enjoyed about one hundred million visitors a day from North America.2 On November 2nd, sixty million of those visitors were subject to an ambitious experiment: could Facebook get them to cast a vote in that day’s U.S. congressional mid-term elections when they wouldn’t otherwise have gone to the polls?3

The answer was yes.

Many of those millions of Facebook users were shown a graphic within their news feeds with a link to find their polling place, a button to click to say that they’d voted, and the profile pictures of up to six of their friends who had indicated they’d already voted. Other users weren’t shown the graphic. Then, in an awesome feat of data-crunching, the researchers cross-referenced everyone’s name with the day’s actual voting records from precincts across the country. That way they could see if, on average, someone in the group receiving the voting prompt was more likely to mark a ballot than someone with an untouched news feed.

Users who were prompted with news of their friends’ voting turned out to be 0.39% more likely to vote than their undisturbed peers. And their decisions to vote appeared to ripple to the behavior of their close Facebook friends, even if those friends hadn’t gotten Facebook’s original message. To be sure, the impact is a modest increase: less than one-half of one percent. But with large numbers, that can mean a lot of people. The researchers concluded that their single message on Facebook, strategically delivered, increased turnout directly by 60,000 voters, and thanks to the ripple effect, ultimately caused an additional 340,000 votes to be cast amidst the 82 million Americans who voted that day.4 And as they point out, President Bush took Florida and thus clinched victory in the 2000 election by 537 votes — fewer than 0.01% of the votes cast in that state. The study’s results are fascinating, and credit is due to Facebook for making it possible.
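Because small percentages applied to huge populations are doing the work here, a back-of-the-envelope sketch helps. This is purely illustrative arithmetic using the figures quoted above; the researchers' own estimates came from matching users to validated voting records, not from this naive multiplication:

```python
# Illustrative arithmetic for the 2010 Facebook experiment.
# All inputs are figures quoted in the essay; the multiplication is a
# naive sketch, not the study's actual methodology.

exposed_users = 60_000_000   # users shown the voting prompt
lift = 0.0039                # 0.39% higher turnout among them

naive_extra_votes = exposed_users * lift
print(f"naive estimate: {naive_extra_votes:,.0f} extra votes")

# The researchers' validated estimates were smaller but still large:
direct_votes = 60_000        # voters directly mobilized by the message
ripple_votes = 340_000       # total effect, including the friend ripple
total_cast = 82_000_000      # ballots cast nationwide that day
print(f"study's total effect: {ripple_votes / total_cast:.2%} of all votes")

# For comparison, the 2000 presidential election turned on 537 votes
# in Florida, fewer than 0.01% of that state's ballots.
florida_margin = 537
print(ripple_votes // florida_margin, "times the Florida margin")
```

The gap between the naive product and the study's 60,000-voter direct figure reflects the study's more careful methodology; the point of the sketch is only that an effect well under one percent, delivered at Facebook's scale, dwarfs the margin that decided a presidential election.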

With the results in mind, consider a hypothetical hotly contested future election — maybe one that includes a close referendum, too. Suppose Mark Zuckerberg personally favors whichever candidate you don’t like, and whichever answer you think is wrong to the referendum question. He arranges for the same encouragement to vote to appear within the news feeds of tens of millions of active daily users5 — but unlike 2010’s experiment, he creates a very special group that won’t receive the message. He knows that Facebook “likes” and other behaviors can easily predict political views and party affiliation — even beyond those many users who already proudly advertise those affiliations directly.6 So our hypothetical Zuck simply chooses not to spice the feeds of those unsympathetic to his views with a voter encouragement message. Given the research showing that feeds can directly impact voting behavior by hundreds of thousands of votes nationwide, it’s plausible that selective morsels in news feeds could alter the outcome of our hypothetical election. If it did, should we have a problem with that? If we do, should the law somehow constrain this kind of behavior?

This hypothetical is an example of digital gerrymandering: the selective presentation of information by an intermediary to meet its agenda rather than to serve its users. It is possible on any service that personalizes what it presents, particularly when there is an abundance of items to offer up and only a few that can be shown at a time. Whether for search results, top tweets, or Facebook updates, each curating service uses a proprietary recipe to draw from boundless ingredients to prepare our feeds, and each is in a position to gerrymander its users, distinguishing between those it thinks will be supportive of its goals and those who will not. None promises neutrality, whatever that might mean, although some — including Google and Bing — clearly distinguish between results driven by the companies’ own formulas versus those displayed as sponsored advertising.

So what if part of the secret sauce behind feeds serves the aims of the cook rather than the customer? People and corporations in positions of wealth and power aren’t the only ones who expend energy to affect the outcome of an election. Any player can try to buy ads, donate to sympathetic groups, or go door to door. Silicon Valley companies are no exception, and on some occasions they have gone beyond generic politicking to use their distinct platforms to promote their causes. For example, Google made its home page “doodle” a blacked-out version of its logo in January of 2012 to protest the pending Stop Online Piracy Act, said by its opponents (myself among them7) to facilitate censorship.8 The logo linked to an official blog entry importuning Google users to petition Congress, and no one called foul.9 Would Facebook’s selectively tweaking its news feed be any different?

One difference might center on disclosure. I’d feel betrayed to find out that a company that purports to be a conduit to help me find others’ content turned out to be shaping my experience according to its political agenda. There are all sorts of factors going into what Facebook presents in a feed, or what Google and Bing place in search results, but our expectation is that they’re either to help prioritize what we most want to see — that elusive “relevance” — or to serve advertisers in ways that are to be labeled so we can tell a sponsored link from a regular one. But in our get-out-the-vote hypothetical, the feed item is essentially a house ad Facebook procured for itself — it’s a link clearly sourced by the company and labeled as such for the users who receive it. How can depriving me of that sponsored link, if I’m in the out group, be hurting me?

Disclosure of an absence rather than a presence wouldn’t be that meaningful: it would either be a general statement that Facebook reserves the right to season our news feeds however it likes, or a specific statement that one isn’t being urged to vote — news of which could in turn inspire that person to vote — and that could cascade into the odd territory of having to disclose all sorts of absences on a page. It’s easier to say why something is there than to canvass and explain the manifold things that aren’t.

Perhaps the Facebook hypothetical can be simplified to one involving the Google doodle. If Google, on an American election day, used its home page for a get-out-the-vote doodle targeted not by geography but by perceived political affiliation, I’d cry foul. To be clear, just as Facebook has not been said to be gerrymandering the vote, neither has Google. The mechanisms, though, are in place: a Google doodle can be shown to some users and not others. For example, in February of 2014, only Google users in Austria, Germany, and Switzerland were treated to a tribute to Gabriele Münter’s 137th birthday.10 (Presumably others around the world would have reacted with confusion rather than pride had they seen the doodle.)

The fail-safe against digital gerrymandering, as distinct from more general audience segmentation for relevance, is that Google would quickly become the story and likely suffer at least a little as offended users jumped ship. But news feeds and search results are much more subtle, and wrongs there could be hazarded without the same backlash. There is one google.com home page that might vary a little from one person to another in its doodle, while there is no “baseline” Facebook news feed, but rather infinite combinations of news items that differ from one user to another.

The wrong in digital gerrymandering is perhaps not a wrong to a given individual user, but rather to everyone, even non-users. It represents an abuse of a powerful platform. And our Facebook hypothetical is simply one point on an emerging map.11 Governments could mine such things as privately held (if fully anonymized) friend graphs from social networks, or heart rates from fitness trackers, or spikes in phone call data from a certain area, to determine where dissent or unrest will next appear. Whether it’s governments anticipating public pressure from private data or a company like Facebook quietly tilting an election, no individual’s privacy is invaded. But collective rights are challenged: the right of people as a whole to associate with one another, in the case of pre-empted dissenting gatherings, or to enjoy the benefits of a democratic process, in the case of a quietly engineered election.

To meet worries about government overreach on private data, law could always set boundaries; the only catch would be getting such restrictions enacted. But using the law to meddle with a company’s presentation of information to its users — especially when no one is claiming the information that makes it into a feed is false — is asking for trouble. Professors Oren Bracha and Frank Pasquale have raised an alarm about what they see as the manipulation of search results by the handful of search engines with substantial followings. They’ve called for regulation — or at least a discussion about regulation, provocatively asking (but not exactly answering) whether there should be a “Federal Search Commission.”12 At the same time, they realize the problems such regulation could cause, not least of which in the United States is a legitimate First Amendment right of curators generally to present content as they please.

Professor Jack Balkin has avoided a call for something as stark as a Federal Search Commission while reinforcing the intriguing concept of information fiduciaries. He points out that lawyers and doctors get lots of information from their clients and patients and are obliged not to use that information against them. Balkin asks: “Should we treat certain online businesses, because of their importance to people’s lives, and the degree of trust and confidence that people inevitably must place in these businesses, in the same way that we treat certain professional and other fiduciary relationships?”13 Perhaps companies could choose whether to become information fiduciaries, the way that businesspeople who make suggestions on buying and selling stocks and bonds can elect between careers as investment advisors or brokers. Investment advisors owe duties not to put their own interests above those of their clients.14 Brokers have no such duty, even as they, confusingly, can go by such titles as “wealth manager, wealth advisor, investment consultant, financial advisor, financial consultant and registered representative.”15 (If someone’s telling you what to buy and sell in your nest egg, you might ask flat out whether he or she is your fiduciary and walk slowly to the exit if the answer is no.) Of course, while the clients of financial advisors may vary in their risk tolerance, to a person they have a simple and quantifiable goal: to increase their wealth. The offerings of information intermediaries are much more difficult even for a beneficiary user to judge; what counts as the perfect news feed? As a start, we can only say what makes for a tainted one: when the political or ideological preferences of the intermediary are subtly elevated above those of the user. Falsehood alone is no longer the central issue. 
As Dan Geer has observed: “When the amount of information is so great, so transparent, so pervasive, you can use absolutely nothing but proven facts and still engage in pure propaganda, pure herding.”16

Balkin has suggested that those who run virtual worlds — remember virtual worlds? — could choose among different duties to those who will inhabit them, and then simply be required to stick to them.17 So perhaps the duties of information fiduciary could be light enough for the Facebooks of the world, and meaningful enough to their users, that those intermediaries could be induced to opt into them. The government could offer tax breaks, or immunities from certain kinds of lawsuits, for those willing to step up toward an enhanced duty to their users. Historically, the granting of exclusive licenses to television and radio broadcasters to use the public’s airwaves came with the sometime burdens of even-handedness (and non-subliminal messaging).18 Here, a commitment to the public interest could be secured in exchange for commensurately lower taxes — somewhere between for-profit and non-profit.

However the duty comes about, a central responsibility of an information intermediary would be to serve up others’ data in ways not designed to further the political goals of the intermediary. My search results and news feed might end up different depending on my political leanings, but only thanks to an algorithm trying to help me, the way that an investment advisor may recommend stocks to the reckless, and bonds to the sedate.

If we can’t trust the intermediaries who bring us not only our viral videos but our news, our daily cries, and our calls to action, we enter a territory of power that’s unfamiliar and unfair. That the current information landscape has its distortions — many inveigh against the legalized corruption of campaign financing19 — is no reason to entertain the idea of adulterations from new quarters. As the calling of journalism as a profession wanes, and more and more of what shapes our views comes from inscrutable artificial intelligence–driven processes, the worst-case scenarios we might agree upon should be placed off limits in ways that don’t cause a cascade of other problems.

Our information intermediaries can keep their sauces secret, inevitably advantaging some sources of content while disadvantaging others,20 while still agreeing that some ingredients are poison — and must be off the table.


* Bemis Professor of Law at Harvard Law School and the Harvard Kennedy School, and Professor of Computer Science at the Harvard School of Engineering and Applied Sciences. I thank Jack Balkin, Kate Darling, Dan Geer, Greg Leppert, Laura Neuhaus, Alicia Solow-Niederman, Elizabeth Stark, and Jordi Weinstock for thoughtful comments, and Ben Sobel and Shailin Thomas for excellent research assistance.

1. Marvin Ammori, The “New” New York Times: Free Speech Lawyering in the Age of Google and Twitter, 127 Harv. L. Rev. 2259 (2014).

2. Quarterly Earnings Slides Q2 2012, Facebook, Inc., http://files.shareholder.com/downloads/AMDA-NJ5DZ/2978350963x0x586306/e480cc06-89ee-4702-87db-f91bc01beef8/FB_Q2_Investor_Deck.pdf (last visited Apr. 9, 2014), archived at http://perma.cc/W855-VYSW.

3. Robert M. Bond et al., A 61-Million-Person Experiment in Social Influence and Political Mobilization, 489 Nature 295 (2012), available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3834737, archived at http://perma.cc/6YVS-8QWH.

4. Michael Tomasky, Turnout: Explains a Lot, The Guardian (Nov. 3, 2010, 7:18 AM), http://www.theguardian.com/commentisfree/michaeltomasky/2010/nov/03/us-midterm-elections-2010-turnout-says-a-lot, archived at http://perma.cc/8SVU-3RDC.

5. As of 2014, Facebook claims 150 million active daily users from the United States and Canada. Newsroom, Facebook, http://newsroom.fb.com/Key-Facts, archived at http://perma.cc/X2DL-2G6B.

6. See Michal Kosinski et al., Private Traits and Attributes Are Predictable from Digital Records of Human Behavior, 110 Proc. Nat’l Acad. Sci. 5802 (2013), archived at http://perma.cc/X3MU-76MC.

7. The Colbert Report: Stop Online Piracy Act–Danny Goldberg & Jonathan Zittrain (Comedy Central television broadcast Dec. 2, 2011), available at http://thecolbertreport.cc.com/videos/nmrgz9/stop-online-piracy-act—danny-goldberg—jonathan-zittrain.

8. SOPA/PIPA, Google (Jan. 12, 2012), http://www.google.com/doodles/sopa-pipa, archived at http://perma.cc/WUD7-R9UM.

9. Don’t Censor the Web, Google Official Blog (Jan. 17, 2012), http://googleblog.blogspot.com/2012/01/dont-censor-web.html, archived at http://perma.cc/4UM8-H3EZ.

10. Gabriele Münter’s 137th Birthday, Google (Feb. 19, 2014), http://www.google.com/doodles/gabriele-munters-137th-birthday, archived at http://perma.cc/8X49-9YPE.

11. Frank Pasquale has noted the increasingly prevalent practice of companies shrouding their inner workings in secrecy, and how firms are leveraging this operational opacity to tip both the economic and regulatory scales in their favor. See Frank Pasquale, Restoring Transparency to Automated Authority, 9 J. Telecomm. & High Tech. L. 235 (2011).

12. Oren Bracha & Frank Pasquale, Federal Search Commission? Access, Fairness, and Accountability in the Law of Search, 93 Cornell L. Rev. 1149 (2008).

13. Jack Balkin, Information Fiduciaries in the Digital Age, Balkinization (Mar. 5, 2014), http://balkin.blogspot.com/2014/03/information-fiduciaries-in-digital-age.html, archived at http://perma.cc/A6U3-4UEV.

14. Financial Advisor, Wikipedia, http://en.wikipedia.org/wiki/Financial_adviser#Fiduciary_standard, archived at http://perma.cc/V5UH-WG26 (last visited Mar. 24, 2013).

15. Ethan S. Braid, Is My Financial Advisor a Fiduciary or a Stockbroker?, Highpass Asset Management (Mar. 2013), http://www.highpassasset.com/blog/58-is-my-financial-advisor-a-fiduciary-or-a-stockbroker.html, archived at http://perma.cc/RY4D-LBLN.

16. Email from Dan Geer, Chief Information Security Officer, In-Q-Tel, to author (Mar. 10, 2014) (on file with author).

17. Jack M. Balkin, Virtual Liberty: Freedom to Design and Freedom to Play in Virtual Worlds, 90 Va. L. Rev. 2043, 2090–98 (2004).

18. See generally Red Lion Broad. Co. v. FCC, 395 U.S. 367 (1969).

19. Lawrence Lessig, We the People, and the Republic We Must Reclaim, Ted Conferences (Feb. 2013), http://www.ted.com/talks/lawrence_lessig_we_the_people_and_the_republic_we_must_reclaim.html.

20. Nicholas Carlson, The Hard Truth About How the Facebook News Feed Works Now, Business Insider (Feb. 13, 2014, 11:32 AM), http://www.businessinsider.com/how-the-facebook-news-feed-works-2014-2, archived at http://perma.cc/7XCJ-ER5K.