Comment: Compassionate robots not too far away

Robot carers are already to be seen in the world of science fiction: Frank Langella as the ex-convict Frank, who is cared for by a robot in the film Robot & Frank. Picture: contributed



ARTIFICIAL intelligence is making steady progress and could help to teach human carers, writes Jon Oberlander

Artificial intelligence continues to make steady strides. Over the past two decades we’ve seen IBM’s Deep Blue computer beat world chess champion Garry Kasparov, another IBM system – Watson – win a TV quiz show, and, most recently, Google DeepMind’s AlphaGo come out on top in the game of Go. We also see self-driving cars motoring out of laboratories and onto the streets. So what next?

In the current climate, robot carers are a staple of serious research grant proposals, as well as science fiction. They can take a variety of forms: advanced humanoid robots, cuddly robot pets, or all-seeing smart homes studded with cameras and microphones. The idea is that they can assist people who have physical or cognitive problems, for instance helping an elderly person continue to live a relatively independent life in their own home. To do that, such robots would need to perceive and respond to people’s emotions and moods, as well as their physical needs.

But can these robots actually be compassionate? This was one of the questions tackled at a workshop on the science of compassion that I recently took part in, jointly organised by the Universities of Edinburgh and Stanford and held in California. Compassion is core to our shared humanity: it moves us to respond to suffering by going out of our way to help our fellow humans.

My answer is that – for now – robotic care is a contradiction in terms, but that there is a silver lining in that cloud.

The main problem with current robots is this: their goal is to act compassionately, to perceive and respond suitably to emotional and physical needs, and to do that an artificial system does not need to have any emotions of its own. And that’s the trouble. We rightly criticise nurses and doctors who merely act as if they were compassionate; we say they don’t really care, that they are just “robots” going through the checklist without feeling anything. If “doing the right thing” isn’t enough for humans, how could it possibly be for artificial intelligences?

We would need at least two things, both a little tricky.

First, we need devices with internal analogues of emotions. Beyond that, they may have to be able to reflect on what having an emotion means. At the workshop in Stanford, Edinburgh’s principal, Professor Tim O’Shea, whose own research interests lie in machine learning, coined the term “artificial compassion” to cover what this kind of system might have to achieve. It’s like human compassion in the same way that machine learning is similar to – but not identical with – human learning.

Second, to perceive and attribute compassion, we depend on a fundamental recognition of the shared humanity of carer and cared-for. My colleague Henry Thompson points out that developing moral agency requires co-participation in a range of social contexts. We allow children into these contexts as they grow up, as a way of teaching them moral values, and we do so because we know it works: we were once just like them, and we made it. So a robot would also have to be accepted into similar contexts.

It’s not impossible to build robots that have artificial emotions and look human enough: Cynthia Breazeal’s Kismet robot is a great example. But acceptance into the broader community may prove even harder, because humans are still not great at extending a welcome to incomers who look a bit different. And that acceptance is critical. So there is a long road from where we are now to robots that would be accepted as genuinely compassionate. But there is a consolation nearby: current-generation artificial intelligence could already help to teach and cultivate compassion.

We already see intelligent tutoring systems, with only minimal intelligence, being used to train people with social-skills deficits to deal better with others. Such a system encodes best tutoring practice drawn from human experience, without having had any of those experiences itself.

We can definitely develop intelligent tutoring systems that explore scenarios, advise, and make trainee human carers think harder, and feel harder. Such systems would be interactive compilations of human experience. And in that respect, they would follow in the footsteps of IBM’s Deep Blue and Watson, and Google DeepMind’s AlphaGo. Is the time ripe for Deep Compassion?

• Professor Jon Oberlander is Director of the Data Technology Institute at the University of Edinburgh www.ed.ac.uk
