Data Capital: Taming the AI tiger that we've created

David Lee asks whether artificial intelligence will be humanity’s saviour or destroyer – and if AI itself can provide any of the answers
AI image of a tiger. Image: Adobe Stock

Existential threat to humanity – or its saviour?

The debate over the impact of artificial intelligence (AI) has burned more fiercely than ever since Geoffrey Hinton – the so-called Godfather of AI – lit the blue touch paper in early May.

Hinton, an alumnus of the University of Edinburgh, quit his job at Google and effectively disavowed his life’s work, saying he regretted his contribution to the AI field.

Professor Geoffrey Hinton. Photo by Julian Simmonds/Shutterstock

He left the role to play his part in warning of the dangers AI poses to all our futures.

Not everyone was impressed with his approach, including Dame Wendy Hall, the AI Skills Champion for the UK, who felt Hinton was “walking away when we probably need him most”.

Four weeks after Hinton quit, petrol was poured on the fire as a group of senior figures in the AI community issued an apocalyptic warning. A single-sentence statement signed by 350 executives, researchers and engineers in the field said AI really did pose an existential threat to the human race.

It said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

ChatGPT chat bot screen seen on smartphone and laptop display. Image: Adobe Stock

The “risk of extinction” statement was organised by the Centre for AI Safety, a non-profit organisation. Signatories included both Geoffrey Hinton and Yoshua Bengio, another pioneer often referred to as one of the godfathers of modern AI.

This intervention was seen as more significant than an open letter in March, signed by more than 1,000 technologists and researchers calling for a six-month pause on the development of the largest AI models, highlighting concerns about “an out-of-control race to develop and deploy ever more powerful digital minds”.

This letter, organised by another AI-focused non-profit, the Future of Life Institute, gathered plenty of attention, especially as one of the signatories was Elon Musk. However, it did not have the same powerful cohort of AI-specific signatories as the later single-sentence grenade.

Other signatories to the statement by the Centre for AI Safety included chief executives from three leading AI companies – Sam Altman of OpenAI, Demis Hassabis from Google DeepMind, and Dario Amodei of Anthropic.


These executives put themselves in the strange position of arguing that the technology they are building – and racing to build faster than their competitors – poses severe risks and should be regulated more tightly.

Like any complex area, there is no agreement, even among acknowledged experts, on what the future holds. Asked by Sky News if she believed AI would kill us or save us, Dame Wendy Hall came down in the “save us” camp without hesitation.

She stressed the ability of AI to do enormous good in areas like medicine by processing, analysing and drawing conclusions from information far more quickly than human beings ever could.

She said: “We’re now seeing how we can use AI to create better lives for people and equal playing fields too, if we get it right, for developing countries as well as developed countries. I think that’s so important. It will give us the power to do all sorts of things that we haven’t even dreamed of.”

Hall feels an existential threat from AI is unlikely, but not impossible – and that effective regulation is essential, saying: “It’s all about taming the tiger that we’ve created. We have to keep AI under our control so that we don’t become the slave to that master.”

She said it was vital to put a strong focus on technology used to detect fakes, and for AI-generated information to be treated like any other – with a laser focus on whether it was accurate and trustworthy. This would mean much more research in universities on the “trustworthy, reliable, and responsible” use of AI.

Some in the field believe AI technology is still too immature to pose an existential threat, and are more focused on immediate issues, such as biased and incorrect responses from sources like ChatGPT, than longer-term dangers.

However, others argue AI is improving so rapidly that it has already overtaken human-level performance in some areas, and will soon surpass it in others.


This raises another huge question about artificial general intelligence (AGI) – the idea that AI can match or exceed human-level performance in a wide variety of tasks.

So what are the tactics to tame the tiger, as Wendy Hall put it? One key approach is for large AI companies to co-operate more closely – which won’t be easy – alongside more technical research into large language models and the creation of an international AI safety organisation.

Two University of Edinburgh experts are insistent that effective regulation is crucial – and needs to move faster.

Professor Chris Williams says: “Like all powerful technologies, AI has the potential for both good and bad. We certainly need regulation in these areas.

“One of the difficulties we face is just how fast things have been moving. But there is definitely a need to regulate these systems in various ways.”

His colleague, Professor Mirella Lapata, agrees, and has a specific suggestion: “There can be bad actors, and AI has to be regulated. We don’t have effective regulation, and therein lies the problem.

“Once you create text or images with generative AI, there should be a watermark that tells you that this was created by an artificial machine.”
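
Lapata does not spell out the mechanics of such a watermark. One technique discussed in the research literature – a so-called “green list” watermark – hides a statistical signal in the word choices themselves: a hash of each preceding word secretly marks about half the vocabulary “green”, the generating model quietly favours green words, and a detector checks whether green words turn up far more often than chance would allow. The short Python sketch below illustrates only the detection side, under those assumptions; the function names and details are illustrative, not any real product’s API.

    # Illustrative sketch of a "green list" text watermark detector.
    # Hypothetical, simplified code - not a real watermarking API.
    import hashlib

    def green_list(prev_word: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
        """Deterministically mark a fraction of the vocabulary 'green',
        seeded by a hash of the preceding word. A watermarking generator
        would apply the same rule and favour green words."""
        marked = set()
        for word in vocab:
            digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
            if digest[0] < 256 * fraction:  # first hash byte selects ~fraction of words
                marked.add(word)
        return marked

    def green_fraction(words: list[str], vocab: list[str]) -> float:
        """Share of words falling in the green list seeded by their
        predecessor. Ordinary human text should score near 0.5;
        watermarked text scores well above it."""
        pairs = list(zip(words, words[1:]))
        hits = sum(1 for prev, word in pairs if word in green_list(prev, vocab))
        return hits / max(len(pairs), 1)

On ordinary human prose the green fraction hovers around 50 per cent; text from a watermarking generator pushes it well above that, and the gap – tested statistically over enough words – is the fingerprint a detector looks for.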

When writing this feature, I wondered whether AI itself had the answers, so I asked ChatGPT. The latest version, built on the GPT-4 model, was “trained” on the internet as it was in late 2021, so lacks access to what has happened since.


One expert says it would be “phenomenally expensive” to continually update it.

However, the current version offered some clear views – but in answering the three questions posed about the future of AI (see panel), ChatGPT used the word complex – or complexity – and challenging – or challenge – in every reply.

If AI itself cannot answer questions about the future of AI, perhaps we have not gone through the looking glass just yet. And maybe human beings still have much more of a role than some of the experts think.

This really is a complex and challenging area. Even the machines recognise that…

An endlessly complex and challenging conundrum – a bot gets existentially quizzed

Is artificial intelligence as big a threat to humanity as a pandemic or nuclear weapons?

ChatGPT: Comparing the threat of artificial intelligence to other risks, such as pandemics or nuclear weapons, is challenging and highly subjective. Each of these risks presents unique challenges and potential consequences.

The development and deployment of AI is a human endeavour, and its impact depends on how it is managed and regulated. Efforts are underway to ensure responsible AI development, including the establishment of ethical guidelines and frameworks.


In summary, while AI has the potential to pose significant risks, it is challenging to directly compare those risks with pandemics or nuclear weapons. Each represents a distinct category of risk with its own complexities and potential consequences. It is essential to address all these risks proactively and implement measures to mitigate their potential harms.

How long until an artificial general intelligence is created?

ChatGPT: The timeline for achieving artificial general intelligence (AGI), which refers to highly autonomous systems that outperform humans in most economically valuable work, is uncertain. There is significant debate and speculation among experts about when AGI might be developed.

AGI represents a level of AI capability that surpasses current technologies. While narrow AI systems have made remarkable progress in specific domains like image recognition and natural language processing, developing AGI poses significant challenges. AGI would need to possess a broad range of general cognitive abilities, including abstract reasoning, common sense understanding, and adaptability across different tasks and contexts.

Predicting an exact timeline for AGI development is challenging due to various factors, including the complexity of human-level intelligence, the limitations of current AI techniques, and the need for breakthroughs in multiple fields such as machine learning, robotics, and computational power.

Estimates and opinions on timelines vary. Ultimately, AGI development is a complex and challenging endeavour, and it remains uncertain when or if it will be achieved.

Is global regulation of artificial intelligence possible, and is it desirable?

ChatGPT: The global regulation of artificial intelligence (AI) is a complex and challenging endeavour. Whether it is possible or desirable depends on various factors and perspectives. While it poses challenges, international cooperation and collaboration can help address ethical concerns, establish common standards, and promote responsible AI practices. Striking the right balance between regulation, innovation, and flexibility is crucial to ensure that AI is developed and deployed in a manner that benefits society while mitigating potential risks.