By: Nikita Lalwani, JD ‘20

After months of denial following the 2016 election, Facebook appears finally to have grasped the magnitude of the threat of information warfare. In January, the company announced that it had deleted some 500 pages and accounts tied to disinformation campaigns originating in Russia. One of the campaigns—aimed at influencing people in Armenia, Azerbaijan, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Moldova, Romania, Russia, Tajikistan, and Uzbekistan—included 289 pages that together had some 790,000 followers. As part of similar efforts, Facebook has also banned a digital marketing group in the Philippines, an online syndicate in Indonesia, and multiple pages, groups, and accounts in Iran.

In addressing the spread of disinformation, Facebook has sought, commendably, to clarify the nature of the threat. In a report published in April 2017, the company defined “information operations,” its preferred term, as “actions taken by organized actors ... to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.” It distinguished such operations from “misinformation” (false statements that are spread inadvertently) and “false news” (news articles that contain intentional misstatements of fact but not always for the purpose of distorting political opinion). Given the proliferation of terms used to describe the kinds of influence campaigns that proliferated in the run-up to the 2016 election, Facebook deserves praise for approaching the problem with definitional nuance.

Beyond definitions, however, the problem gets much thornier. Facebook has at least two choices for how it can weed out bad content: it can target the content itself, or it can target the means by which the content is disseminated. So far, Facebook has chosen to prioritize the latter approach, thwarting information operations by searching for a set of behaviors it associates with content manipulators: the mass creation of fake accounts; coordinated “likes,” comments, or sharing of content; or the “creation of groups or pages with the specific intent to spread sensationalistic or heavily biased news or headlines,” to name just a few.
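
To make the distinction concrete, the sketch below (in Python) illustrates what behavior-based screening might look like in principle. The signals, thresholds, and data fields here are entirely hypothetical and are not Facebook’s actual detection logic; the point is only that every input concerns how content is spread, never what it says.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class AccountActivity:
    """Hypothetical behavioral record for one account -- note the absence of any content fields."""
    created_at: datetime
    pages_created: int            # pages launched by this account
    shares: List[datetime]        # timestamps of shares during the observation window
    ip_shared_with_accounts: int  # other accounts observed on the same IP address


def looks_coordinated(shares: List[datetime],
                      window: timedelta = timedelta(seconds=60),
                      burst_size: int = 20) -> bool:
    """Flag bursts of shares packed into a short window -- a crude proxy
    for the coordinated amplification described above."""
    stamps = sorted(shares)
    for i in range(len(stamps) - burst_size + 1):
        if stamps[i + burst_size - 1] - stamps[i] <= window:
            return True
    return False


def flag_for_review(account: AccountActivity, now: datetime) -> bool:
    """Return True if purely behavioral signals suggest inauthentic activity.
    Nothing here reads, classifies, or judges the content itself."""
    is_new = now - account.created_at < timedelta(days=7)
    mass_page_creation = account.pages_created >= 10
    shared_infrastructure = account.ip_shared_with_accounts >= 50
    return ((is_new and mass_page_creation)
            or shared_infrastructure
            or looks_coordinated(account.shares))
```

Because every signal is behavioral, a rule like this can be stated and audited without anyone first defining “biased news”—which is precisely the appeal, and the limitation, of the approach discussed below.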

This choice is not without its flaws. For one, it is technically difficult to implement in practice. Information manipulators are constantly developing new strategies and methods: the playbook they follow today will almost certainly be different from the one they follow six months from now, and that will be different from the one they follow six months after that. As Nathaniel Gleicher, Facebook’s head of cybersecurity policy, has conceded, “You can never solve a security problem. Threat actors will constantly find new ways to cause harm.” To target the mechanisms of content distribution effectively, Facebook must come up with a strategy both precise enough to identify currently active information operations and malleable enough to adapt to whatever new techniques are used down the line—hardly an easy ask.

Another drawback of Facebook’s current approach is that it can sometimes be difficult to draw a neat line between content and non-content considerations. How can Facebook determine whether a page has been created to spread biased news, for example, without first deciding what constitutes biased news? Making such determinations would, of course, open Facebook up to allegations of censorship and other free speech concerns. And avoiding such allegations was likely one of the reasons the company chose not to focus on content in the first place.

Yet despite its imperfections, Facebook’s current approach remains better than any of its other options—first, because it is easier to classify problematic behavior (the mass creation of fake accounts and pages, for example) than it is to clearly delineate problematic content; and second, because it allows Facebook to avoid, as much as possible, delicate issues surrounding freedom of speech.

As a private company, Facebook is of course not bound by the First Amendment, and, like all internet companies, it is shielded by Section 230 of the Communications Decency Act of 1996 from liability for content posted on its platforms. Nevertheless, as more people join the platform and it comes ever closer to approximating a public square, the company would do well to respect basic principles of free speech and expression, lest it silence meaningful debate on matters of public concern. When Facebook has explicitly banned speech—hate speech, for example—it has had trouble applying its standards fairly and uniformly. As an investigation by the journalism organization ProPublica put it, Facebook’s deletion of hate speech “amounts to what may well be the most far-reaching global censorship operation in history” as well as “the least accountable.”

By focusing on the mechanisms by which state or non-state actors disseminate disinformation, rather than on the content itself, Facebook can do its best to sidestep such concerns. When it comes to the government, the First Amendment subjects content-based regulations to strict scrutiny. As the Supreme Court held in Reed v. Town of Gilbert, Arizona, “[g]overnment regulation of speech is content based if a law applies to particular speech because of the topic discussed or the idea or message expressed,” an inquiry that requires the Court to consider “whether a regulation of speech ‘on its face’ draws distinctions based on the message a speaker conveys.” But under the secondary effects doctrine, the Court has allowed facially content-based regulations to pass judicial scrutiny—so long as the regulation targets not the content itself but one of its side effects. In City of Renton v. Playtime Theatres, for example, the Court upheld a regulation of adult movie theatres on the grounds that it was meant not to suppress the content of the movies but rather to address the increased crime and decreased property values in the areas around the theatres.

Even though it is not bound by the First Amendment, Facebook is following something akin to the secondary effects doctrine in declining to focus on content as it weeds out information operations. The analogy is not perfect—the way content manipulators disseminate information is less a side effect than a key part of what makes the speech so compelling—but it illuminates what Facebook is doing: using facially content-neutral rules as a proxy for policing content. By appearing not to focus on content, Facebook can regulate speech without having to engage seriously with what makes speech bad, or misleading, or false—categorizations that are especially difficult given how often information operations blur the line between fact and opinion.

Ultimately, Facebook is right to take the threat of information operations seriously—and, if anything, should do more to counter content designed to manipulate and deceive. Yet many of the challenges Facebook faces in moderating content stem from its fundamental business model: advertising. To sell more ads, Facebook’s algorithms, like YouTube’s, promote more “engaging” content—which extreme, inaccurate, and deliberately misleading content often is. Short of fundamental changes to that business model—changes that Mark Zuckerberg appears to be seriously considering—Facebook’s current approach remains its best shot.