Rise of the machines: Is AI going to take over the world and can it save the planet?
“There are incredible positives and benefits that are only possible with advanced computing and machine learning. But there are major negatives too.”
That is according to artist, designer and academic researcher Dr Drew Hemment, from the University of Edinburgh, speaking on the transformative potential of Artificial Intelligence, or AI.
He describes the tools as “Janus-faced”: empowering people with new capabilities, but also dependent on centralised industries and extractive business models.
“AI is very different to human intelligence, and not at all like it is often portrayed in popular culture.”
So what is AI and how does it work?
AI is the name for computer systems that can perform tasks that are typically associated with humans, such as visual perception, speech recognition, decision-making, predictions and language translation. The systems are designed to learn and improve over time through the use of algorithms and data analysis.
“AI has been around since the 1960s, so it’s not new,” Hemment says. “But there has been a really big shift in the technology in the last 10 years or so in an area called machine learning.
“The advances we’ve seen recently have been made possible by two things – the availability of huge troves of data, plus advances in the algorithms and the software that runs these systems. In really simple terms, machine learning works by detecting patterns in vast datasets.”
If you are operating an AI system, what you do is ‘train’ the model or algorithm on data in the area you’re interested in. If the topic is environment and climate, this might be the huge amounts of data now being generated by modern earth observation satellites, ground monitoring stations and the like. If you work in the public sector you might train it on census data, while many of the generative AI apps are trained on data scraped from the public internet.
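In the simplest case, ‘training’ just means fitting a model to observations so it can make predictions about inputs it has not seen. The sketch below fits a straight line by least squares – the CO2 and temperature figures are invented for illustration, not real climate data:

```python
# Minimal sketch of 'training': fit a straight line to observations
# by least squares. All numbers below are invented for illustration.

# Hypothetical observations: CO2 concentration (ppm) -> temperature anomaly (C)
xs = [350, 375, 400, 425, 450]
ys = [0.2, 0.45, 0.7, 0.95, 1.2]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# 'Training' = choosing the slope and intercept that best fit the data
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    # Once trained, the 'model' is just the fitted line
    return intercept + slope * x

print(round(predict(410), 2))  # -> 0.8
```

Real systems fit millions of parameters to vast datasets rather than two parameters to five points, but the principle – adjust the model until it matches the data – is the same.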
Traditional AI models rely on ‘supervised’ learning, where a human goes through all the data and labels it so that the machine knows how to interpret it. The latest systems use more advanced algorithms to perform unsupervised learning, meaning they can spot the connections and create the labels themselves, then go on to generate predictions on their own. Some of these algorithms are very complex – in deep learning, for example.
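The distinction can be sketched in a few lines of code. In the toy example below – the data and labels are entirely invented – the supervised model predicts from human-provided labels, while the unsupervised one has to find the groups itself:

```python
# Toy sketch of supervised vs unsupervised learning, with invented data.

# Supervised: a human has labelled each example in advance.
labelled = [(1.0, "cold"), (2.0, "cold"), (8.0, "hot"), (9.0, "hot")]

def classify(value):
    # Predict the label of the nearest labelled example (1-nearest-neighbour).
    return min(labelled, key=lambda pair: abs(pair[0] - value))[1]

print(classify(1.5))  # -> cold

# Unsupervised: no labels - the algorithm finds the groups itself.
unlabelled = [1.0, 2.0, 8.0, 9.0]
centre = sum(unlabelled) / len(unlabelled)            # 5.0
groups = [0 if v < centre else 1 for v in unlabelled]
print(groups)  # -> [0, 0, 1, 1]
```

Notice that the unsupervised groups have no names – a human (or another system) still has to decide what cluster 0 and cluster 1 mean.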
“Causing a real storm at the moment is generative AI,” says Hemment, “with a number of powerful new tools released over the past few months – such as text-to-image generators like DALL-E 2 and Midjourney, and the ChatGPT chatbot.
“Made possible by major recent developments in diffusion models and large language models, generative AI is fundamentally changing the ways we interact with machines. We can expect it to change the way we do things in all sorts of areas, from writing an email to generating an image.”
Can AI make the world a better place for people?
The professor, who heads up the New Real – a unique hub for AI, creativity and futures research, run in partnership between the University of Edinburgh, Alan Turing Institute and Edinburgh’s Festivals – thinks it can, with careful handling.
“We now have so much data, we need powerful AI to make sense of it,” he says. “Because of the computing power involved, these systems can spot connections in huge troves of data and so generate insights that the human brain never could.”
Because of its enormous capacity for collecting and crunching massive amounts of data quickly, AI opens up huge opportunities for science and research – the technology has already led to breakthroughs in climate services and genetics.
“Some of the outcomes of that are genuinely life-changing,” Hemment says. “In climate services, we can now make predictions not only over the coming days, but for the coming season or decade. In genomics, advances in protein-folding – for example, understanding the structure of the proteins our DNA encodes – have been made possible by AI.”
What are the downsides?
Current AI works by finding patterns in vast tracts of data, which is basically a set of observations on the world. It’s all historical – a snapshot of the present day or the past. This means that AI systems can inherit biases from the data they are trained on, which can amplify discrimination against certain groups of people.
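How a model inherits bias from historical data can be made concrete with a toy sketch. The ‘hiring records’ below are entirely invented, and this is not any real system – but it shows how a model that faithfully learns the patterns in skewed data carries that skew forward:

```python
# Toy sketch: a model trained on skewed historical records reproduces the skew.
# The 'hiring records' below are entirely invented.
from collections import Counter

# Historical data: (group, hired?) - group B was rarely hired in the past.
records = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 10 + [("B", False)] * 40

def predict(group):
    # 'Train' by majority vote within each group, as seen in the data.
    outcomes = [hired for g, hired in records if g == group]
    return Counter(outcomes).most_common(1)[0][0]

print(predict("A"))  # -> True: the historical pattern favours group A
print(predict("B"))  # -> False: past under-hiring is carried forward
```

The model is not malicious; it is simply an accurate mirror of a biased record – which is exactly the problem Hemment describes.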
“When we’re talking about society, if we get data that represents society, it represents it warts and all,” Hemment explains. “So it shows up the bias and the injustices.
“What’s in the data record, what we have information on, is not inclusive, so white male heterosexuals tend to be more fully represented than other demographics.”
Should we worry about losing our job to a machine?
There are genuine fears that these thinking machines will replace humans in many workplaces, causing job losses and significant social and economic disruption.
Some of these fears are “well-founded”, according to Hemment, but only up to a point.
“Looking back over the past several hundred years, we’ve had waves of industrialisation and new technologies. There have been fears of mass job losses and of the machines taking over during each technological wave.
“But basically what has happened is that some jobs have become redundant, while a whole new set of jobs has come about that we couldn’t have imagined before.
“So we expect something like that. It will disrupt things but it’s not as black and white or as negative as people think.”
But he also warns of AI’s potential to amplify inequalities and centralisation, with most of the negative effects landing on those who are already most disadvantaged and the rewards staying in the hands of the world’s richest.
“There is a sting in the tail,” he says. “It will create more jobs, but we may also see more global inequity. Maybe many menial jobs will go, but these technologies also generate a lot of very low-value, underpaid work.
“The people who run the AI systems depend on ‘ghost workers’, hidden human hands, who have to find and mark for deletion stuff like rude or offensive language. Those people, sometimes called click workers, are paid a pittance, and it’s a really horrendous job – they have to sift through awful videos, offensive content and label it.”
So what are the pros and cons of AI for the environment?
The technology has the potential to affect the environment both positively and negatively.
AI can help tackle climate change by identifying carbon emissions that can be targeted for reduction, and by aiding the optimisation of renewable energy schemes.
It can be employed to monitor and manage natural resources such as water, soil and air quality more effectively, helping to reduce waste and promote sustainable use, and to predict weather patterns way in advance to assist with farming and other important services.
It can be used to track and monitor wildlife populations, detect poaching and highlight areas where conservation efforts are needed, and to improve responses during natural disasters, potentially saving lives and minimising damage.
Operating AI requires a significant amount of energy, which could drive up greenhouse gas emissions – globally, the ICT sector is estimated to generate around the same level of greenhouse gas emissions as international aviation, some three to four per cent of the total.
The gigantic super-processors also get very warm when doing their thing, so require lots of water to keep them cool. Since the tech is often located in massive server farms in some of the most fragile parts of the world, this can be ecologically destructive.
What can’t it do?
Most of us have seen AI represented in popular culture – nobody could forget the disturbing HAL computer in Stanley Kubrick’s 1968 classic film 2001: A Space Odyssey or the rogue android with a mind of its own and deadly intentions in 2004’s I, Robot. And of course the new M3GAN, about a doll that develops self-awareness and will stop at nothing to protect its child ward.
So is the current generation of AI out to get us?
According to Hemment, we don’t need to worry just yet. “There is a lot of hot air about AI, and a common thing people get wrong is to anthropomorphise it,” he says, “interpreting these technologies as if they were people or attributing to them person-like qualities.
“Just to debunk the myth, what they’re not doing is thinking in the same way a human does.
“They don’t have intent, they don’t have ideas, they don’t have personality, they don’t have either good or negative intentions. They’re just generating outputs based on statistical reasoning.”