PRIVATE SQUARE

Facebook has a government-size censorship responsibility without the structure to handle it

Facebook connects a quarter of the world’s population, and also determines what they can say to each other.
Image: REUTERS/Robert Galbraith

With nearly 2 billion users, Facebook reaches about a quarter of the people on the planet. And while its broadcasting power can be used for promoting good causes and unleashing viral cat videos, it can also be used to distribute hateful and violent content. This has put Facebook in the uncomfortable position of making judgment calls about whether the millions of posts its users flag as objectionable each week should be allowed to stay, labeled as disturbing for other users, or removed completely. It’s an unprecedented responsibility at this scale.

“The range of issues is broad–from bullying and hate speech to terrorism and war crimes–and complex,” Monika Bickert, Facebook’s head of global policy management, recently wrote in an op-ed. To meet this challenge, she said, “our approach is to try to set policies that keep people safe and enable them to share freely.”

Once Facebook sets these rules, it relies on 4,000 human content moderators to apply them to individual flagged posts.

The job isn’t straightforward. According to a Guardian report based on thousands of pages of Facebook’s content moderator training materials, “Someone shoot Trump” should be permitted, but not the phrase “Let’s beat up fat kids.” Digitally created art showing sexual activity should be removed, but all handmade erotic art is fine. Videos showing abortions are also permitted—as long as they don’t feature nudity.

Guidelines like these illustrate the complexity of content regulation, which, until social media came around, involved questions that, for the most part, only governments faced at scale. What constitutes dangerous speech? Should some people, such as the president, be treated differently when they engage in criticism, threats, or hate speech? When is it in the public interest to show obscenity or violence? Should nudity be permitted, and in what contexts?

Some of Facebook’s answers to these difficult questions mimic content regulation laws created by democratic governments. According to the Guardian, for instance, Facebook tolerates some violent content, unless it “gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design.” This is somewhat similar to how the US views violent content, which tends to be protected unless it incites immediate violence. (Many European countries, meanwhile, have laws that prohibit violent content or hate speech.)

But the process Facebook uses to create and apply these policies has little in common with that of democratic governments, which have long, often transparent procedures for making new laws, and courts that weigh each case with considerations that aren’t available to Facebook moderators. Facebook could improve its content moderation policies, some suggest, by also borrowing some of these ideas about process, rather than policy, from democratic governments.

“The multiplication of guidelines,” says Agnès Callamard, the director of Global Freedom of Expression at Columbia University, “as well meaning and well written as they may be, cannot be the answer.”

How Facebook’s process differs from that of a democratic government

Time to a decision: Facebook relies on thousands of content moderators to make decisions about whether to remove, permit, or label specific content as “disturbing” based on its rules. To deal with the massive volume of flagged content, the company recently said it would hire 3,000 additional people to review posts. It has also invested in artificial intelligence that could reduce the amount of work for human moderators.

For now, according to one report, a typical Facebook content moderator makes a decision about a flagged piece of content roughly once every 10 seconds (a Facebook spokesperson declined to confirm or deny this figure, saying she didn’t have the data). “Context is so important,” Facebook’s Bickert told NPR last year. “It’s critical when we are looking to determine whether or not something is hate speech, or a credible threat of violence,” she said. “We look at how a specific person shared a specific post or word or photo to Facebook. So we’re looking to see why did this particular share happen on Facebook? Why did this particular post happen?” Those questions take time to evaluate effectively.

That’s one reason why, in most democratic countries, Callamard says, content regulation by media regulators and the courts involves decisions that take days or weeks.

Debate: Content moderators on Facebook don’t hear arguments for why they should either permit or remove a piece of content. Users whose pages or accounts are removed do have an option to appeal the decision by submitting it for another review (Facebook recommends they remove the violating content first).

Government content regulators usually have more input from opposing sides. “[Decisions] will often involve a judicial process, including several parties arguing one side or the other [as well as] judges reviewing the various arguments and making a decision,” Callamard says.

Open discussion of rules: Facebook publishes broad guidelines for what it allows and disallows on its site, but, to keep users from gaming the system, the specifics are only shared in internal documents like the hundreds of training manuals, spreadsheets, and flowcharts that leaked to the Guardian.

A Facebook spokesperson says the company consults experts and local organizations to inform its community standards, but the public doesn’t know all of Facebook’s content moderation rules, nor is it part of creating them.

By contrast, Callamard says, in a democratic government, “the laws upon which these decisions are made have been discussed and debated in Parliament by members of Parliament; by government ministers and where they exist by regional inter-governmental bodies. These laws or decrees would have been the object of several readings, and in the best case scenarios, the general public (including those particularly concerned by the law, e.g. the media) would have been brought in a formal consultation process.”

Fundamental context: Governments have different goals than Facebook. In a democratic society, fundamental guiding principles include freedom of expression, freedom of political debate, and protecting content related to the public interest. At an advertising business like Facebook, success involves attracting and retaining users, many of whom don’t want to visit a website that shows them offensive or dangerous content. “This is a fundamental dimension of the way, in my opinion, Facebook always approaches content regulation,” Callamard says. “It cannot go so far as to undermine or weaken a business model based upon, and driven by, data and more data (individuals’ data).”