As Elon Musk's Twitter and Mark Zuckerberg's Threads do battle, are such platforms really the right forums for public debate? – Laura Waddell

Hate speech, the gathering of personal information, and the trauma experienced by content moderators all raise questions about the suitability of social media giants for government communications and the mainstream media

As a much-needed competitor to Twitter, Threads has taken off quickly, with more than 100 million users already. Some are there speculatively. YouTube content creator Steven Bridges told the BBC: “Almost every influencer, whether they like it or not, or whether they want Threads to succeed or fail, will be hopping on Threads just in case it does succeed.”

There used to be a common joke that social media was what the intern did, the benefits of Facebook marketing being something the youthful new hire had to explain to ageing bosses. This dismissive attitude has died out a bit now that these sites have become such a core interface for public communications and marketing, adopted by publicity-savvy figures. Quite a few organisations established themselves online in that pre-presidential-tweet era.

But did any of us really know what social media would become? Is Twitter really the best place for communicating essential information? Contrary to popular belief, it is not a public square. Placing so much communication infrastructure in the hands of private corporations, each with its own rationale for success and profit and the ability to change at a whim, means handing over an awful lot of power.

I have always been uneasy with how quickly social media platforms became official communication channels for institutions with a duty to inform the public. Sometimes, it means the diminishment of information elsewhere. I hate looking up whether a bus service is running and being sent to trawl a social media feed for updates. A decade ago, when a slew of national and local governments and media organisations were adopting Twitter, popping up one after another like mushrooms, it took a while for many to find their feet or understand their rationale for having a presence there at all – other than that everyone else was doing it.

The BBC, in particular, has struggled to pin down impartiality for its presenters and what it deems appropriate “personal opinion”. Some get a harder time than others in the press: compare Gary Lineker’s tweets about refugees to Andrew Neil’s output as editor of the Spectator. But does anyone really have the answers? There is a sense that organisations have always been playing catch-up with their own social media presences, mistakes and gaffes made in real time, public response and outcry driving, rather than being encompassed by, a pre-existing comms policy.

As a society, we don’t seem to have remotely unpacked how social media channels, with their focus on interaction and engagement, inflect the tone and agenda of messages sent through them, or how a platform optimised for controversy might be degrading the quality of public information. Tweeting publicly has the potential to address everybody, all at once. But shouting amidst the cacophony is not necessarily an optimal way to communicate.

On his loudly publicised takeover of Twitter, Elon Musk made a number of statements about the platform’s content moderation that left onlookers uncertain as to what the policy would actually be. Was he sure himself? A writer in Vanity Fair described the site’s content moderation as “seemingly wholly dependent on what side of the bed Musk wakes up on or how users respond to random polls”.

But even spontaneous chief executives have laws to contend with. Last month, during a sit-down with French channel France 2, Musk acknowledged the platform would comply with European Union content moderation rules. That means that, from late August, the Digital Services Act, which classes Twitter as a very large platform, will enforce a clampdown on fake news.

Threads, which hasn’t yet launched in the EU, is already facing scrutiny over its own data collection. “I haven’t seen any evidence that Meta is being transparent about what it will do with sensitive personal data or is clearly establishing why it is collecting that data other than ‘because we want to,’” Calli Schroeder, of the non-profit Electronic Privacy Information Center (Epic), told the Guardian.

During Musk’s tenure at Twitter, content moderation staff have been cut as the company shifted its focus towards greater automation. Unsurprisingly, cutting moderation staff rather than bolstering them hasn’t had a positive effect: the overall volume of hate speech doubled after Musk’s takeover. The New York Times reported that researchers “had never seen such a sharp increase in hate speech, problematic content and formerly banned accounts in such a short period on a mainstream social media platform”.

This rise in racism, anti-semitism and homophobia sits awkwardly alongside media and government agencies using Twitter as a key output channel. The platform is vulnerable to the whims of its eccentric chief executive, as well as to bot attacks and privacy leaks, yet a great deal of reliance has been placed on it by those who ultimately have no control over it.

So how will Meta, the parent company of Threads and already home to Facebook and Instagram, fare in this area? Its track record should be scrutinised. Content moderators have spoken out about the harrowing nature of the job; Zuckerberg was previously caught on tape saying he thinks claims of PTSD are “a little over-dramatic”.

These problems are worldwide. Last month, a judge in Nairobi ordered the company to “provide proper medical, psychiatric and psychological care” to workers suffering from post-traumatic stress as a result of what they had seen in this line of work. One man recounted to the court his first time witnessing manslaughter on a live video. Beyond the challenge of protecting users and its own workers from harmful and hateful content, Facebook has long been reported to have a negative impact on mental health, with Instagram in particular being toxic to the self-esteem of teenage girls.

Is the answer to Twitter’s problems yet another app? I can’t help but feel those flocking to Threads as a replacement will be disappointed if they expect the experience to be substantially different. I learned my lesson the first time. I’ll be sitting Threads out.
