Why humanity is 'not in danger' because of artificial intelligence
“Imagine an invite for a dinner party: you will compose this invite and just say, ‘invite my close group of friends’ and you would have a fully formed email,” says Prof Lapata, a professor in the School of Informatics at the University of Edinburgh. “A lot of menial writing tasks right now that people have to do over and over again, composing emails, things like that, will be very much facilitated by AI language technology.”
Most people are already familiar with automatic email response technology through a fairly rudimentary form of artificial intelligence (AI): the option of a short automatic response on an email server, which appears to be getting more intelligent by the day.
“Will do, thanks,” suggests a Gmail bot in response to a work message. Or, in case the author is feeling more enthusiastic, “Received and understood!”
Prof Lapata believes the technology will soon be there for a more extensive option, already foreshadowed by the likes of ChatGPT, which can produce natural-sounding text and also interact in a conversation.
Earlier this week, the “godfather” of AI, Professor Geoffrey Hinton – who earned his PhD at Edinburgh in the 1970s – quit his job at Google to allow himself to speak freely about the dangers of AI.
He warned that in the hands of “bad actors”, AI had the potential for “bad things”. He said he believed while AI chatbots were “not more intelligent than us” at the present time, “they soon might be”.
Prof Lapata is less concerned.
"I don't think humanity is in danger because of AI,” she says. “We have to put things in perspective. I do think there is potential to get out of hand if you have bad actors. But I don’t think it can get out of hand accidentally.
"I am hopeful that we will develop technology to be able to detect [misuse] in future so that they [hostile nations] cannot misuse the technology deliberately.”
Others, like Prof Hinton, are more cautious, raising concerns over misuse and the distribution of false information. Workers in many industries fear the rise of AI could lead to job losses, while experts have previously warned that AI chatbots could be used by school and university students to produce plagiarised work. Celebrities and influencers have also begun to use the technology to produce social media content that their audiences believe they created themselves.
Marijus Briedis, a cybersecurity expert at technology firm NordVPN, says AI could be used by fraudsters to make spoof phone calls appear authentic by cloning the voices of friends and family members.
“Worryingly, advances in AI technology like voice cloning, which can imitate the sound of relatives, are tailor-made for spoofers to make future scams even more convincing,” he says.
Earlier this week, Britain’s competition watchdog, the Competition and Markets Authority (CMA), said it is to launch a review of the AI market, including popular chatbots such as ChatGPT. It will look at the opportunities and risks of AI, as well as the competition rules and consumer protections that may be needed, focusing on the implications for competition between firms and for consumer protection.
CMA chief executive Sarah Cardell said: “AI has burst into the public consciousness over the past few months but has been on our radar for some time.
“It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth.”
She added: “It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information."
The US Federal Trade Commission also alerted the industry earlier this week, saying it was “focusing intensely” on how the technology is being used by firms and the impact it may have on consumers.
Meanwhile, on Wednesday, former UK government chief scientific adviser Sir Patrick Vallance told MPs on Westminster’s Science, Innovation and Technology Committee that AI could have as big an impact on jobs as the industrial revolution.
Prof Lapata, however, believes there are major benefits to society in using AI.
She points to a recent study that tested AI chatbots against real doctors in answering patients' questions.
The study, published in the journal JAMA Internal Medicine, used data from Reddit’s AskDocs forum, in which members post medical questions that are answered by verified healthcare professionals. A selection of 195 exchanges from AskDocs, answered by a verified doctor, were then posed to ChatGPT, which was asked to respond.
A panel of three licensed medical professionals, who did not know whether the response came from a human doctor or a chatbot, rated the answers for both quality and empathy.
Overall, the panel preferred the bedside manner of the ChatGPT responses 79 per cent of the time. ChatGPT responses were also rated good or very good quality 79 per cent of the time, compared with 22 per cent of doctors’ responses. Meanwhile, 45 per cent of the ChatGPT answers were rated empathic or very empathic, compared with just five per cent of replies given by doctors.
“I think AI is an opportunity,” she says. “There are people who say we cannot substitute a physician and they are obviously right. But in health, there is far more data than a single doctor or human being can digest. Imagine a scenario where the GP practice has longitudinal data about a patient and can make predictions – and together with a doctor can do personalised medicine.”
Prof Lapata is confident AI will become part of society.
"We need to educate the public,” she says. “But it will take some time.”