There is a lot to unpack in the recent #DontFeedTheTrolls campaign, which is, with many caveats, largely common sense. Launched by the new charity Center for Countering Digital Hate, the campaign saw stars line up to show their support, some of whom will hopefully start following its advice more closely themselves.
As a phrase, ‘don’t feed the trolls’ has been in use since the early days of internet forums, where the word ‘troll’ came to mean a sometimes malevolent, but always time-wasting, user who got their kicks from bothering others. They weren’t always nasty. Sometimes they were the jesters of a group, deploying practical jokes and making mischief. This kind was an early example of what’s now known as ‘s***posting’: the creation and sharing of content that is, by design, pointless and insincere, underpinned by a nihilistic view of the state of current affairs and the sense that, if society is in meltdown anyway, why not fritter away time being surreal and annoying to public figures online? Dictionary.com defines trolling as “to post off-topic, false, or offensive contributions to an online forum with the intent to derail the discussion or provoke other participants”.
But at their worst, trolls were abusive, often racist, homophobic, sexist and threatening, and they proliferated in online spaces dominated by young men banding together and hostile to others, where it was possible to use anonymity with sinister intent, encouraged by the toxic bravado that set the tone of belonging in so many humour and gaming forums.
When online abuse began to be discussed in the mainstream, as social media usage became widespread, ‘trolling’ was conflated with ‘abuse’ in a way that wasn’t especially helpful to understanding either: while trolling could encapsulate abuse, the word minimised and misrepresented what was actually happening. Traditional media has often been slow off the mark to understand and analyse online behaviour.
It’s easy to brush off abuse as just some online thing when it’s called trolling, and that propagates the idea that online threats have minimal impact. In reality, not only can they be as law-breaking as threats and intimidation in the offline world, but studies have shown they have a chilling effect on demographics already under-represented in public discourse.
Amnesty International has been looking into the impact of online abuse, specifically aimed at women, and in 2017 released a study conducted in eight countries showing that almost a quarter of women online had experienced online abuse or harassment, and that 41 per cent of those felt their physical safety was threatened as a result of online abuse. The impact has slowly begun to be understood as a freedom of speech issue, with abuse leading to self-censorship and stress, preventing women from expressing themselves freely and joining in public life.
Shockingly, Diane Abbott alone received 45 per cent of abusive tweets sent to MPs in the weeks prior to the 2017 election. Similarly, when the Guardian took the interesting and commendable step of analysing comments, they found that women and black writers received a disproportionate amount of abuse. There is a school of thought that the internet, and particularly social media, is the cause of abuse. Certainly, there are many exacerbating factors. But social media also makes existing thoughts and tendencies visible, and what’s on view here is racism and sexism that already exists in individuals, expressed in violent and threatening ways.
In her recent book of essays, Trick Mirror, a brilliant and excavating look at contemporary culture, author Jia Tolentino examines trolling and the “mutual dependency” that can encourage it. Arguing online can be addictive and cathartic. Who doesn’t want to swear on Twitter sometimes? But while blocking can feel like doing nothing, it’s still taking an action, and a more decisive one than engaging with and providing attention to those who seek it.
For a very small minority of users who are highly visible, engaging with ‘trolls’ also becomes part of their personal brand. Let’s be clear – this is not about taking a stand in a useful or important way, but performatively squabbling with anonymous or clearly malevolent accounts, sometimes to depict themselves as at the forefront of a social movement. It’s also frustrating to see an easy supply of responses to career controversialists like Katie Hopkins and Piers Morgan, who dangle bait and always get a bite. Common sense seems in short supply in these instances.
But where is corporate responsibility in all this? Blocking and muting are good advice, but they also treat the symptom of a larger disease. We’re encouraged by social sites to put more and more of ourselves online under the guise of making connections and networking. Social media has been an important tool for people under-represented in mainstream public expression, and has helped communities and movements bolster one another and flourish. But just as capitalism encourages us to create personal brands and turn ourselves into walking advertisements in increasingly insidious ways, the ultimate goal of social media sites is to profit from their users. Their negligence in cracking down on abusive and prejudiced content cannot be overlooked. When Amnesty International began to examine the impact of online abuse on freedom of expression, they discovered much of it was against the terms and conditions of the very social sites that let it flourish unchecked. Facebook has infamously banned images of breastfeeding, including any nipple shots on its sister site Instagram, yet far-right propaganda and hate speech run rampant.
Blocking and muting, in order not to feed the trolls, is for the most part a sensible coping strategy for individuals. But placing the onus on social media users to solve a problem the sites have failed to, one which disproportionately impacts women and racial and sexual minorities, lets social behemoths who make staggering profits from our data off the hook. Perhaps it’s time for a legal requirement for social media sites to enforce their own community standards and deal with hate speech adequately, at risk of financial penalty.