Theresa May’s plans to tighten regulation on the internet to combat extremism have been branded “intellectually lazy” amid claims they fail to fully address the problem.
The Prime Minister accused big internet companies of giving terrorist ideology “the safe space it needs to breed” online, the latest in a series of attacks on tech firms by senior Conservatives.
Digital campaigners the Open Rights Group said it was disappointing the Prime Minister had focussed on regulation of the internet and encryption in the aftermath of the London Bridge attack.
The group said: “This could be a very risky approach. If successful, Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe.
“But we should not be distracted: the internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused.
“While governments and companies should take sensible measures to stop abuse, attempts to control the internet are not the simple solution that Theresa May is claiming.”
Professor Peter Neumann, director of the International Centre For The Study Of Radicalisation at King’s College London, was also critical of Mrs May’s speech.
He wrote on Twitter: “Big social media platforms have cracked down on jihadist accounts, with result that most jihadists are now using end-to-end encrypted messenger platforms e.g. Telegram.
“This has not solved problem, just made it different.
“Moreover, few people radicalised exclusively online. Blaming social media platforms is politically convenient but intellectually lazy.
“In other words, May’s statement may have sounded strong but contained very little that is actionable, different, or new.”
The Tory manifesto for the General Election called for a much tougher approach to regulation on the internet.
It outlined measures to push internet companies further on their commitment to identify and remove terrorist propaganda, and stop terrorists communicating online.
Simon Milner, director of policy at Facebook, said the platform wanted to be “a hostile environment for terrorists” and would continue to work with international partners to tackle the problem.
He told the BBC: “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it - and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.”
A Google spokesman told ITV: “We are committed to working in partnership with the Government and NGOs to tackle these challenging and complex problems, and share the Government’s commitment to ensuring terrorists do not have a voice online.
“We are already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area.
“We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges.”
The issue of encrypted messages on services such as WhatsApp was highlighted by Home Secretary Amber Rudd after the Westminster terrorist attack by Khalid Masood.
Masood’s phone connected with WhatsApp shortly before the atrocity.
But WhatsApp messages are protected by so-called end-to-end encryption, meaning they are encoded so that only the sending and receiving devices can read them.
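The principle can be illustrated with a deliberately simplified sketch. This is emphatically not WhatsApp's actual protocol (WhatsApp uses the Signal protocol, with far stronger parameters and ciphers); the small prime, the XOR "cipher" and all the names here are toy assumptions chosen only to show why a relaying server never holds the key.

```python
import hashlib
import secrets

# Toy sketch of the idea behind end-to-end encryption - NOT a real
# protocol. Each device makes a Diffie-Hellman key pair; only public
# values cross the network, so the relaying service never sees the key.

P = 2**127 - 1  # a Mersenne prime; real systems use 2048-bit+ groups
G = 5

def keypair():
    # The private key never leaves the device; only the public part is sent.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    # Both devices independently derive the same secret.
    secret = pow(other_pub, priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key, data):
    # Toy symmetric cipher: XOR against a SHA-256-derived keystream.
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

k_alice = shared_key(alice_priv, bob_pub)
k_bob = shared_key(bob_priv, alice_pub)
assert k_alice == k_bob  # both ends hold the same key; the server holds neither

ciphertext = xor_cipher(k_alice, b"meet at noon")
# The service relaying `ciphertext` cannot decode it without a private key.
print(xor_cipher(k_bob, ciphertext))  # b'meet at noon'
```

The point the sketch makes is the one at the centre of the policy row: because the key material exists only on the two handsets, the company in the middle has nothing it could hand to law enforcement even if compelled.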
Other proposals in the Tory manifesto include tougher sanctions for companies that fail to remove illegal content, as well as legislating for an industry-wide levy on social media companies to counter harmful activity online.
Speaking outside Downing Street, Mrs May said that “we cannot allow this ideology the safe space it needs to breed”.
She added: “Yet that is precisely what the internet, and the big companies that provide internet-based services, provide.
“We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremism and terrorism planning.
“And we need to do everything we can at home to reduce the risks of extremism online.”
Twitter says it shut down 376,890 accounts linked to terrorism in the last six months of 2016.
Of these, 74% were found by Twitter’s own software and only 2% as a result of requests by governments.