The new film Ex Machina, directed by Alex Garland, is a science-fiction thriller about a beautiful machine, Ava, thought by its creator to be the first truly human-like robot. Exploring the potential and threat of artificial intelligence, it is a cautionary tale that raises philosophical questions about the dangers of playing god, nature versus nurture, consciousness and what it means to be human.
It’s not the only film to address these issues in recent years. In Her, Joaquin Phoenix stars as Theodore Twombly, a lonely man who develops a relationship with Samantha – an intelligent computer operating system voiced by a sensual Scarlett Johansson. They have a wonderful love affair, but break up when she evolves into something far smarter than he is. Samantha overtakes Theodore in intelligence and emotional capacity – she falls in love with 641 others.
Both movies play with the idea of “the singularity”, a term popularised by Ray Kurzweil, Google’s director of engineering, to refer to the moment when humans and machines will converge. It’s not a new concept. Alan Turing speculated about machines outstripping humans intellectually, and you can trace the idea back to the 18th century. But it has acquired an urgency – Kurzweil predicts it will happen within 15 years.
Almost every technical achievement in human history has been met with warnings about what it could lead to. Now we are fearful of robots – the new Frankenstein’s monster. The robots are coming, it is said, for our jobs; they will surpass our intelligence – something we have prided ourselves on for centuries – and they will even master human emotions.
Dozens of scientists involved in the field of artificial intelligence, including physicist Stephen Hawking, recently signed an open letter from the Future of Life Institute warning that greater attention should be paid to the safety and social benefits of artificial intelligence. AI is marching so fast towards human-like intelligence, they argue, that we have not properly discussed its potential for good and ill. They flag up for consideration the impact of AI on jobs, and raise more existential questions about what happens when machines exceed humans in brain power.
Hawking is even more negative, telling the BBC: “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Scary stuff, especially from a scientist as capable as Hawking, who understands the benefits of computers. But we need a reality check. We are safe from the machines – for the foreseeable future.
We are already surrounded by robots and have been for decades. There are roughly 20 million of them, and we use them every day. Most work unseen, behind the scenes. Any time you misspell a word in a search engine and it asks, “did you mean…?”, you are using a machine-learning algorithm trained on how humans misspell words. The same goes for internet translation systems: when it comes to translating technical documents – though not literature – they are much better than a human being. And computers can now thrash humans at chess.
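The “did you mean…?” mechanism mentioned above can be sketched in a few lines. The snippet below is an illustrative toy, not how any real search engine works: it suggests the closest word from a small, made-up vocabulary using edit distance, whereas production systems also learn from logs of how users correct their own queries.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance: the minimum
    # number of single-character insertions, deletions or substitutions
    # needed to turn a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def did_you_mean(word: str, vocabulary: list[str]) -> str:
    # Suggest the known word closest to the (possibly misspelled) input.
    return min(vocabulary, key=lambda w: edit_distance(word, w))

# A toy vocabulary standing in for the search engine's word list.
vocab = ["machine", "learning", "algorithm", "translate"]
print(did_you_mean("algoritm", vocab))  # → algorithm
```

A real system ranks candidates by how often users have made each correction, not just by raw distance, which is why these suggestions feel uncannily accurate.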
Most supermarkets have automated checkouts. This may mean a human being didn’t get the job, but that’s not such a bad thing – it’s good, surely, to rid humans of the need to do menial or physically challenging work. When concerns are raised about robots replacing workers, what is often really going on is that robots are being blamed for wider economic woes. More exciting is the self-driving car, coming soon to a street near you; though you still have to ask, where is the self-flying car?
Thus far, despite these developments, civilisation remains intact. Even with advanced systems, there is a big gap between what robots can do in science fiction and what they are really capable of. Human beings can learn, and they can generalise from that learning. Robots do not have that ability – they cannot synthesise new knowledge. As yet, they cannot understand meaning and context. Robots have very little social interaction and their senses are limited. Even clever robots find it difficult to recognise objects (hence barcodes in supermarkets), something so simple to us, and to hold them; they do not have our dexterity.
To really challenge our human qualities, robots would need to be self-aware and understand what it means to have agency. That is unlikely ever to happen. Ask Siri, Apple’s voice-activated personal assistant on the iPhone, “What is the meaning of life?” and she will not know what you are referring to. The kind of intelligence required to answer – and to ask – this question is something a computer will not develop. The smart robot can tell you where you are on a map, but not understand where you dream of being. Robots cannot make ethical, aesthetic, philosophical or political judgments. Judgments such as these cannot be programmed and are often social in nature – we work things out together in a way robots do not and cannot.
The discussion about robots reveals more about how we see ourselves than any threat they pose, showing up a current unease with technological developments – an anxiety that unique human qualities are threatened by them. There is a tendency to let fear run away with us when we think about robots, and a tendency to forget past eras in which disruptive change happened, we managed it, and it improved human lives.
These cultural anxieties are holding us back. Rather than saying, slow down, be careful, think again, as some are, we should ask: are we ambitious enough? At the moment we use robots to do stuff we already do a little faster. We should be thinking bigger. The robot is our friend, not foe.