The use of social media to disseminate the harrowing footage of the terror attacks in New Zealand and the abhorrent ideology of its perpetrator reveals two things. Firstly, as has long been clear, Big Tech lacks not just the capability, but the will, to police its own ecosystems. Secondly, and even more problematically, our hopeless response to this passive incompetence focuses on the symptoms instead of the disease.
The atrocity in Christchurch was explicitly designed to exploit the daily diet of search-engine-optimised content that feeds our viral media culture – a hamster’s wheel powered by fury which spins ever faster, destined for nowhere. The slaughter, broadcast live on Facebook, was supplemented with a ghoulish white supremacist manifesto that had been uploaded to social media channels in the knowledge it would spread like a cancer.
It did. Of course it did. Why should performative mass murder be any different to unboxing or ASMR videos? In the clickable, sharable realm, it was planned as a content package. “Terrorism is theatre,” Brian Jenkins famously declared back in 1974. Nowadays, the acts are subdivided by pop-up ads.
The livestream – a tool previously used to simulcast child abuse, suicide, and murder – was allowed to play out in its entirety. It took a further 12 minutes before Facebook took it down, a course of action sparked not by its moderators, but by New Zealand police. By then, the stream had been widely linked on 8chan, reposted on YouTube, narrated on Reddit, and mirrored throughout the nooks of the dark web. Facebook said it blocked 1.2 million videos of the attack at the point of upload. The flipside is that it allowed around 300,000 videos to be published. By any measure, that is not good enough. YouTube’s failure was even more pronounced. Its engineers ‘hashed’ the original video, meaning duplicates could be automatically deleted by its machine learning software. That was the theory. All that was required to bypass the system was to upload the footage in truncated snippets – a multiplatform horror film recast in episodic form.
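The weakness is easy to demonstrate. A toy sketch, in Python, of the simplest form of hash-matching – note this is a deliberate simplification, not YouTube’s actual system, which relies on more sophisticated content-based matching – shows why a truncated clip sails past a blocklist of known digests:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest: the crudest possible content hash.
    It only matches byte-identical files."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for an uploaded video file (hypothetical data).
original = b"\x00\x01\x02" * 100_000
# A truncated snippet of the very same footage.
snippet = original[: len(original) // 3]

# The platform's blocklist of known-bad fingerprints.
blocklist = {fingerprint(original)}

print(fingerprint(original) in blocklist)  # exact re-upload: caught
print(fingerprint(snippet) in blocklist)   # truncated copy: a new digest, so it evades the match
```

Any edit – trimming, re-encoding, cropping – yields a different digest, which is why enforcement built on matching known copies degenerates into a chase.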
It is worth stating that social media’s inability to expunge such horror is not the root cause of the attacks. As my colleague, Dani Garavelli, has pointed out, anti-Muslim narratives are not the preserve of online fringes; they have been legitimised by the mainstream media to the extent that they risk becoming hegemonic.
Calling out the process by which such toxic ideas are normalised is crucial to ending the permissive atmosphere which encourages people to flirt with extremism. Indeed, it is telling that many of the predictably cynical responses to Christchurch have come from a right-wing media eager to deflect blame.
Fox News, which pipes a daily diet of paranoia, anger, and hairspray into millions of American homes, chastised social media while downplaying the impact of Donald Trump’s vile rhetoric. This is the same broadcaster which, days before, accused Ilhan Omar, a Democratic congresswoman, of violating the US constitution by wearing a hijab. But the scrutiny of social media matters. It has never mattered more, given how sorely the questions being asked of Facebook, YouTube, and Twitter lack focus.
The comments made by Jeremy Corbyn, the Labour leader, were typical of the response from the political class: “Those that control and own social media platforms should deal with it straight away and stop these things being broadcast.” But the issues of moderation and enforcement are what social media want us to talk about. Why else would Facebook have taken the unprecedented step of sharing the video statistics?
No, what is broken is the very infrastructure of social media itself. Its chief actors have been playing – and losing – the same game of whack-a-mole with extremist material and disinformation for years. When will we realise the gatekeepers’ failure is not down to a lack of expertise, but a surfeit of contempt?
In any case, the way the Christchurch footage was able to propagate at speed throughout a spectrum of other, less well-known platforms ought to highlight the ineffectiveness of calling for quicker, more aggressive content takedowns and account bans.
As Bill Braniff, director of the National Consortium for the Study of Terrorism and Responses to Terrorism, pointed out: “If you censor them and remove them from these platforms, you lose the ability to provide them with an off-ramp.”
What is required is a wholesale change of the ruinous algorithms driving social media traffic, systems designed with the commercial goal of keeping us online as long as possible by validating our interests and beliefs and encouraging us to seek out more of the same.
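Stripped of its machine-learning sophistication, the feedback loop works roughly like this. The sketch below is a hypothetical caricature, not any platform’s real code: it ranks candidate posts purely by overlap with what the user has already engaged with, so novelty is never rewarded and the feed narrows with every click.

```python
from collections import Counter

def recommend(history: list[str], candidates: dict[str, set[str]], k: int = 2) -> list[str]:
    """Score each unseen post by how many of its tags the user has
    already engaged with, then return the top k. Nothing in the score
    rewards difference, so the loop only ever reinforces itself."""
    seen_tags = Counter(tag for post in history for tag in candidates.get(post, set()))
    def score(post: str) -> int:
        return sum(seen_tags[tag] for tag in candidates[post])
    unseen = (post for post in candidates if post not in history)
    return sorted(unseen, key=score, reverse=True)[:k]

# Hypothetical posts, each labelled with topic tags.
posts = {
    "cat_video_1": {"cats", "funny"},
    "cat_video_2": {"cats", "cute"},
    "news_report": {"politics"},
    "cooking_tip": {"food"},
}
print(recommend(["cat_video_1"], posts))  # more cat content outranks everything else
```

Swap the cat tags for grievance and conspiracy tags and the same arithmetic applies: the system serves whatever most resembles what was consumed before.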
For someone intent on watching cat videos during their lunchbreak, the results are innocent enough. Yet what of the consequences when this systematic groupthink affirms and reinforces a collective sense of victimhood among those prepared to take drastic action to defend their worldview?
This invites the argument that, for all the flaws of its architecture and its unchecked power, social media itself does not cause people to take innocent lives. That is not only valid. It is true. But what if we ignore the old arguments and reframe the debate? What if we ask how big tech might provide succour to those at risk of radicalisation instead of expediting their descent?