Do we need a 'right to reality' to navigate through our fake news world?

Professor Lilian Edwards | The University of Edinburgh (Data-Driven Innovation Programme)
A leading academic has suggested that a new 'right to reality' is needed to help people navigate a complex online world where the lines between truth and lies are increasingly blurred.

Professor Lilian Edwards says we are all operating in a "miasma of uncertainty" where it is extremely difficult to tell the difference between truth and falsehood.

In an episode of The Scotsman's Data Capital podcast, the internet law expert says: "I think people increasingly feel they're in a world where they don't quite know what's true. I think this feeling began with the emergence of political deepfakes, which have not really convinced anyone who didn't want to be convinced, but have given a general layer of uncertainty.

"You're constantly making these judgement calls as to reality with very little to draw on to validate your hunches."

Professor Edwards says the miasma of uncertainty has been exacerbated by the growing influence of Large Language Models (LLMs) like ChatGPT, especially their integration into search engines.

"It always seemed to me insane to incorporate them into search engines by default. Search engines are something you want to give you accurate data, and ideally accurate data that you can verify yourself. The last thing you want is summaries from a machine that makes up lies for a living."

Prof Edwards says it is clear that LLMs are "prone to hallucinate" and bad at maths - and have given very wrong advice, such as identifying poisonous mushrooms as safe to eat, recommending glue as a pizza topping and advising someone to eat a rock.

She continues: "Most of these are pretty blatant, but they will get less blatant. As people begin to trust these systems more, or to not know that they are there, as they become seamlessly integrated into our search engines and our customer service portals, you begin to think 'Is this real?'"

The professor suggests human beings are "not very good at spotting false information" (a weakness evident over centuries of trials and witness testimony) and now find themselves in a "deepfake arms race where means of detection are constantly trumped by people creating fakes".

She also discusses the difference between "people just saying inflammatory things, which is something we have dealt with throughout human history, and people creating fake reality".

The problem, she argues, is that the platforms hosting fake information are mostly based in America, which makes it hard for the UK and Europe to legislate and regulate. "What's going on right now is a 'standoff of words'," she says, arguing that Elon Musk is "stress-testing how far he can go under new legislation" in Europe.

But Prof Edwards insists that Elon Musk and Donald Trump are in a unique place: "They are outliers in this very strange period in history we're going through in which people seem to be denying the rule of law."

The podcast also hears Prof Edwards make a number of suggestions for finding our way through the miasma of uncertainty.

She said there were various options for challenging blatant deepfakes, such as the AI-generated video from August that purported to show music megastar Taylor Swift endorsing Donald Trump's presidential bid.

These options could include libel laws, data protection rules or even claims for infringement of intellectual property rights.

When it comes to tackling disinformation on specific online platforms, Prof Edwards said sanctions could include removing advertising or working with payment providers to encourage them not to process payments to sites seen to be spreading disinformation.

She is also very interested in the potential of watermarking content to show it is AI-generated, an approach beginning to take hold through initiatives like C2PA (the Coalition for Content Provenance and Authenticity). The European Union's AI Act also calls for the labelling of AI-generated content. Prof Edwards says this could be really useful in areas like art and copyright law, allowing people who have created original works to prove something is - or isn't - AI-generated.

"[In this context], watermarking could be quite effective. For trying to prove what is real and what is true, it doesn't map.

“The fact that something is AI generated does not mean it's fake. The fact that something is being generated without AI does not mean it's true."

Prof Edwards says one other hopeful sign, from her perspective, is the desire of trusted news brands (like the BBC or the Washington Post) to ensure that what appears under their name is true, and can be proved to be true.

"They have a reputation to keep up, which will be destroyed if they become homes for fake news or fake images or videos. I know that's why they were really worried content would start to get out there with their brand attached. That's why they were so keen to have this opportunity to put in a watermark that would indelibly say, for example, it was from the BBC, or it wasn't."
