'Godfather' of artificial intelligence technology who studied at University of Edinburgh warns of dangers of AI

A graduate of the University of Edinburgh who is regarded as the godfather of artificial intelligence (AI) has quit his job at Google, warning of the dangers of the technology.

Dr Geoffrey Hinton, 75, said he believed while AI chatbots were “not more intelligent than us” at the present time, “they soon might be”.

AI technology has developed quickly in recent months, with the creation of chatbots such as ChatGPT and Google’s Bard. Sectors from education to medicine are beginning to use the technology. However, fears have already been raised that the use of AI could lead to the loss of jobs, as well as the spread of misinformation.


Dr Hinton, who completed a PhD in artificial intelligence at Edinburgh in 1978 under the supervision of renowned academic Christopher Longuet-Higgins, warned of “bad actors” who could potentially use the technology for “bad things” – something he later described as a “worst-case”, “nightmare” scenario.

In an interview with the New York Times, in which he said he had left his post at Google to be able to “freely speak out about the risks of AI”, Dr Hinton said he regretted some parts of his work.

He said: “I console myself with the normal excuse – if I hadn’t done it, somebody else would have.”

Dr Hinton said he had underestimated how quickly the technology would develop.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

He added: "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning.

"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

Dr Hinton, who was born in Wimbledon and completed his undergraduate degree at Cambridge University before studying for his PhD in Edinburgh, took up his first academic post at Sussex University. He subsequently moved to work at universities in North America after struggling to find funding for his work in the UK.


From 2013 to this year, he divided his time between working for Google on its Google Brain project and the University of Toronto.

Dr Hinton said that, unlike nuclear weapons, it was impossible to know whether countries or companies were working on AI in secret. He said scientists should collaborate on their understanding of AI before allowing it to expand further.

“I don’t think they should scale this up more until they have understood whether they can control it,” he said. Dr Hinton added: “It is hard to see how you can prevent the bad actors from using it for bad things.”

Asked to elaborate on that comment, Dr Hinton said: "You can imagine, for example, some bad actor like [Russian president Vladimir] Putin decided to give robots the ability to create their own sub-goals."

He warned this eventually might "create sub-goals like 'I need to get more power'". The researcher added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.

"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

"And all these copies can learn separately, but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

However, he did praise his former employer, Google, for being "very responsible".


Google's chief scientist Jeff Dean said: "We remain committed to a responsible approach to AI. We're continually learning to understand emerging risks while also innovating boldly."

In 2018, Dr Hinton was one of the three recipients of the ACM A.M. Turing Award – often referred to as the "Nobel Prize of Computing" – along with Professors Yoshua Bengio and Yann LeCun, two other proponents of deep learning, a popular form of AI. The award is for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing, and comes with a $1 million [£800,000] prize, provided by Google. The prestigious award is named for Alan M Turing, the British mathematician who articulated the mathematical foundation and limits of computing.

In 2015, Dr Hinton said in an interview he did not fear a hostile attack on humanity by AI, though he acknowledged there was still "a lot to worry about".

Sir Tim Berners-Lee, the British inventor of the World Wide Web, was awarded the 2016 Turing Award. In March, after OpenAI released a new version of ChatGPT, more than 1,000 technology leaders and researchers signed an open letter warning AI technologies pose “profound risks to society and humanity”. They called for a six-month moratorium on the development of new systems.

However, many businesses regard the technology as a major growth opportunity.
