Martyn McLaughlin: Facebook must lift the veil over its censorship policies

The social media firm's culture has not kept pace with its explosive global growth, writes Martyn McLaughlin
Mark Zuckerberg, Facebook's CEO and founder, has been reluctant to address the issue of censorship on the social media network. Picture: Facebook/PA

For a company that helps shape how a quarter of the world's population views the world, Facebook has been notoriously reluctant to accept the responsibilities that come with its immense power.

What began as the plaything of an undergraduate hobbyist now commands close to two billion global users. Since it was founded in 2004, its mission statement has been succinct: to make the world a more open and connected place. With 510,000 comments and 136,000 photographs uploaded to the site every 60 seconds, its reach is formidable, yet the practicalities of meeting the first part of that simple goal – openness – are proving increasingly fraught.


The release of documentation detailing Facebook's rules on graphic content shows a company caught between two stools. The sheer range of guidelines – spanning violence, hate speech, terrorism, racism, pornography, self-harm, cannibalism, and match-fixing – is evidence of a company desperate to appease its critics.

But the fact it took a leak to disclose the sprawling library of self-regulation shows it lacks the commitment and candour required to strike the balance between quelling dangerous material and ensuring freedom of speech is not compromised.

Even more worryingly, its definition of what passes for an offensive communication appears to be arbitrary. The leaked rules, issued to Facebook’s growing ranks of moderators, are an exercise in contradiction and confusion.

In many cases, what is classified as permissible seems at best ill-judged. While direct threats of violence against heads of state, such as “Someone should shoot [President] Trump,” are viewed as unacceptable, menacing remarks against women are greenlit.

One disturbing example contained in the documents, published by the Guardian, states: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat.” Another status update given the moderators’ approving green tick warns: “Little girl needs to keep to herself before daddy breaks her face.”

The trove of guidelines states that the company draws the line at content that could “credibly cause real world harm” and that it requires “certain details to be present” in order to consider a threat credible. Threats deemed credible include remarks that target the likes of politicians, witnesses and informants, even if they are vague and – as in the Trump example – implore others to carry out the deed.

By contrast, precise instructions on how best to break a woman’s neck are not among the “certain details” Facebook requires in order to raise a red flag. It is, in other words, OK to be a violent misogynist, provided you are bumblingly vague about the identity of the woman you intend to harm. This is an inexcusable stance.

The release of the moderation rulebook comes after a sustained and influential effort by several investigative journalists, Alexi Mostrous in particular, who have shed light on the firm’s seeming unwillingness to remove potentially illegal material from its site, even after moderators have been alerted to it.

Hide Ad
Hide Ad

The images and videos stem from a dark and dangerous place, ranging from footage of an Islamic State beheading and an apparent sexual assault on a child to propaganda posters celebrating terrorist attacks on London and violent paedophilic cartoons.

Such abominable material should not and cannot be allowed to spread, and the perception that the world’s largest social media network is assisting, if not encouraging, its distribution ought to put the firm at risk of prosecution.

The staggering volume of status updates from people across various countries and cultures means Facebook will not always get it right. Indeed, the Open Rights Group, which campaigns against government and corporate threats to digital rights, has expressed a degree of sympathy with Facebook’s ranks of moderators, pointing out that making decisions on what is or is not acceptable is “complex and fraught with difficulty”.

That is true, but it is time the company accepted it must do more. Facebook, for all that it might protest otherwise, is a media company, not a technology company. It has amassed an unprecedented degree of autonomy and editorial power, and it now heavily promotes its Facebook Live service, which allows for instantaneous video broadcasting. This is no mere intermediary; it is the first site many people turn to in the morning and the last they check at night. It has a duty not only to moderate content properly, but to make explicit the rationale by which it does so.

Given the tremendous data-scraping capabilities of its artificial intelligence-led algorithms, there are few, if any, technical limitations that would prevent Facebook from publishing details of the content and accounts it has removed.

The main obstacle to progress is the fact that the company still clings stubbornly to the hubris and secrecy that define Silicon Valley’s corporate culture.

Continuing to do so risks losing the public’s trust, a development that would have major commercial repercussions.

There is an important conversation to be had about how Facebook regulates content, and how comfortable we are with allowing a private company to determine what is and is not appropriate. However, that conversation will only be meaningful if the firm accepts that transparency and greater accountability are the necessary starting points.
