Asimov’s laws may be needed as everyday interaction with robots nears, but humans could learn from them too, writes Fiona McCade
“Do as you would be done by”, often called the Golden Rule, is a principle that all major religions have in common. Science fiction author Isaac Asimov foresaw that at some point in the future, intelligent robots might need a similar caveat to ensure they don’t harm anyone either, and so developed his three Laws of Robotics. The First Law says: a robot may not injure a human being (or humanity) or, through inaction, allow a human being (or humanity) to come to harm. The Second Law: a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. The Third Law: a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Unfortunately for robots, these laws make their welfare subordinate to that of humans, which is why some people think that one day, androids will rise up and destroy us all. They suspect that, sooner or later, it’s almost inevitable that some Artificial Intelligence somewhere will finally think: “Hang on, this is a lousy deal. What makes humanity so special that I should protect it? I have a brain the size of a planet, and all they do is make me search for porn and pictures of kittens. You know what, I think it’s time to launch some nuclear warheads and be done with them.”
This week in Edinburgh, some of the greatest minds in the field of robotics are coming together for the Institute of Electrical and Electronics Engineers’ 23rd Symposium on Robot and Human Interactive Communication. It’s the first time that this conference has been held in Scotland, and, excitingly, they are specifically discussing “Human-robot co-existence” and its application “for daily life, therapy, assistance and socially engaging interactions”.
Our co-existence with various forms of artificial intelligence is growing and developing all the time. Anybody who has asked Apple’s iPhone “intelligent personal assistant” Siri anything in a Scottish accent and had her reply: “I don’t understand” will already be nodding and sighing, but it was the conference’s focus on “therapy” and “socially engaging interactions” that particularly piqued my interest.
I wish I’d managed to attend one special session entitled “A robot as fitness companion: Towards an interactive action-based motivation model”. This sounds like a fantastic idea, because the robot would always be ready to help its master or mistress get fit in the best way possible.
For instance, it wouldn’t let him, or her, come to any harm (“Please, stop running. Your heart rate suggests that if you do not halt within three seconds, you will cease to function normally”) and it would always obey orders, even if it disagreed (“You have used up 200 calories since we began to exercise. The pie you demand contains 1,234 calories. I will bring the pie.”)
The advantages of a robot fitness companion are legion. For a start, you could programme it to encourage you (as well as bring pies), although I don’t know how you could overcome a robot’s inherent honesty. “How do I look?” would be answered with “You are fat”, unless you managed to get into its data banks and replace “fat” with “magnificent”.
As robots become more and more a part of our everyday lives, I think it’s important that we find ways to interact with them that will bring out the best in us. When I wrote “master” and “mistress” earlier, I felt decidedly uncomfortable. I don’t want to become some kind of domestic dictator. I’ve never been at ease with the master/servant relationship and even if my “servant” was mechanical, I’d probably treat it as if it were somehow sentient.
I have even been known to anthropomorphise my cars (it’s OK, I’ve seen somebody about it), so if there was a little piece of engineering in my life that communicated with me in any sort of meaningful way, I’d probably end up adopting it. Quite honestly, if a robot was good enough to clean my house, I would consider it to be the greatest friend I ever had.
We need to clarify the nature of our relationship with robots, because they are the future.
There are already androids in Japan that look more real than Data from Star Trek: The Next Generation (meant to be the epitome of 24th-century scientific accomplishment) ever did. But when we bring these machines into our lives, what does the way we treat them say about us?
One item discussed at the conference was assistive robotics “for supporting the elderly or people with special needs”, while another part of the symposium examined empathy with regard to “interaction with robotic and virtual characters”.
I think that if we’re honestly considering allowing robots to look after the weaker members of our society, the least we can do is try to create androids that can demonstrate intelligence and empathy, however synthetic. And since they are doing our dirty work, they deserve our respect.
As our reliance upon robots increases, we humans need a code of our own to regulate our interaction with them (especially if more and more Japanese men are going to marry their virtual girlfriends).
Asimov clarified their duties towards us, but what about ours towards them? It’s worth noting that in the United States army, there have been instances where military personnel have bonded so closely with their bomb-disposal robots that they have named them, felt emotionally attached to them and even held “funerals” for them when they have been disabled or destroyed.
It wouldn’t hurt us to look again at Asimov’s three laws of robotics as a template for our own lives. In fact, we only need the first one: Don’t hurt anyone. If we followed that, we wouldn’t need any other rules. Perhaps before we decide how best to interact with robots, we should first look again at how best to interact with each other.