By: Elizabeth Levin, JD ‘20

In the wake of the 2016 election, scholars, regulators, and private companies confronted the question of social media's role in preventing the spread of misinformation. Media outlets reported on the rise of "fake news," and several studies confirmed the role of social media platforms in its spread and influence. This wave of attention became a call to arms for social media platforms to counter misinformation and act in socially responsible ways. Although not all commentators were optimistic about platforms' prospects of tackling fake news and misinformation campaigns, many argued that Facebook had a responsibility to protect its users against false information. Mark Zuckerberg's statement before Congress, arguing that Facebook was a technology company, not a media company, and therefore not responsible for regulating news on its platform, was met with backlash.

The proper role of online platforms in combating the spread of misinformation and countering electioneering efforts remains uncertain, but looming in the background of this assessment is the possibility that social media companies could themselves engage in election interference. In 2014, Jonathan Zittrain posited a hypothetical known as "digital gerrymandering": Facebook executives (perhaps at the direction of its largest shareholder, Mark Zuckerberg) could, at least in theory, use users' publicly shared party preferences, along with Facebook's ability to predict partisan affiliation, to send a voter-encouragement message only to members of the political party they personally supported. Under this hypothetical, the tilt favors the directors' or shareholder's personally preferred party; alternatively, the directors might support whichever party is friendlier to corporations in general, or more opposed to regulating Facebook Inc.'s operations, as an exercise of their "fiduciary responsibility" to Facebook's shareholders. Facebook has already demonstrated its ability to influence voter turnout: in a study surrounding the 2010 election, it sent a "Go Vote" reminder to 60.1 million of its users, prompting an additional 340,000 people to vote.

That Facebook or other platforms might have the power to meaningfully change voter behavior and influence the outcome of an election may not come as a surprise (even a facially neutral get-out-the-vote campaign can have politically skewed results), but the potential effect of targeted influence campaigns is significant. Even though, several years later, the actual impact of fake news on the 2016 election has been questioned, its potential effect on the democratic legitimacy of our elections has continued to fuel efforts to combat it. If potential impact on democratic legitimacy is the measure, the power of the platforms themselves should be of even greater concern. Scholars have demonstrated that online platforms can influence opinion far more powerfully than fake news can, through the ranking of search results or the positioning of items on a feed. Facebook itself has touted its ability to sway an election through custom ads. Robert Epstein, a senior research psychologist at the American Institute for Behavioral Research and Technology and a leading researcher on the effects of platform websites on voting behavior, has argued that fake news is a far less significant problem than big technology companies' control of media and their ability to exploit users' bounded rationality to change election outcomes.

The power of big technology companies to influence their users is unprecedented. Unlike other potentially biased media sources such as newspapers or television, "data-opolies" have emerged in which a single company controls a major Internet sphere, be it search (Google), social media (Facebook), or online shopping (Amazon). This control gives them a unique power to manipulate voters, both by shifting voter preferences as a whole and by favoring particular candidates. Because platforms can monitor users' activity and curate their experiences, they are in a position to manipulate behavior. Left unregulated, the ability of these "data-opolies" to influence their users will only grow as network effects amplify the value and popularity of the major players and deter new entrants.

For social media companies' involvement in the regulation of false content to actually help, these companies must share the values we aim to preserve. Some scholars have expressed doubt that social media companies, or any other "data-opolies," will act toward any goal other than profit maximization. If fake or sensationalist articles maximize advertising revenue, spreading them is not clearly inconsistent with that goal. Companies may argue that a reputation for spreading "fake news" would harm the company's value, since users join seeking meaningful engagement and may be put off by useless or disruptive links. But without a meaningful accountability mechanism beyond public outcry, internal manipulation would be difficult both to identify and to halt.

This is not to discount all involvement by social media organizations. Although there is evidence of continued efforts to interfere in U.S. politics, including interference in the 2018 midterm elections, Facebook's efforts to control misinformation since 2016 seem to be having an impact. On the other hand, this success may be attributable to a change in tactics: rather than interfering directly and creating more misleading content, previously hostile actors may simply be exploiting the societal divisions that have persisted since 2016. Regardless of the efficacy of current anti-fake-news efforts, the initial unwillingness of platforms to self-police, as well as reports by the Senate Intelligence Committee stating that social media companies "may have misrepresented or evaded" claims of interference, should counsel against embracing these companies as the best content regulators.

A common topic at the Yale Cyber Leadership Forum was how much self-governance companies should have in responding to cyber threats. Representatives from various private companies said they would prefer that the government give them the opportunity to protect themselves against such threats without requiring government cooperation; others wished the government would take the responsibility out of private companies' hands entirely. However, giving social media companies primary responsibility for protecting against electioneering campaigns on their platforms may create an avenue for even more significant abuse. Given the existence of alternatives for responding to misinformation campaigns, such as accreditation and regulatory intervention, a rush to private governance ought to be avoided.