Data Capital: unlocking the full potential of robotics at Heriot-Watt University

David Lee meets developmental psychologist Professor Thusha Rajendran who argues that our attitude to robots should not be dictated by narrow binary notions of heroes and villains
Professor Thusha Rajendran argues our attitude to robotics should not be dictated by narrow binary notions of heroes and villains. Picture – supplied.

What does “robot” mean to you? Depending on your age, you might think of C-3PO or R2-D2 from Star Wars, Bender from Futurama, or Schwarzenegger’s Terminator. Younger viewers might think of M3GAN, a robo-killer of more recent vintage.

These pop culture robots tend to fall into two camps – those designed to help humans, and those who want to destroy us.

Professor Thusha Rajendran from Heriot-Watt University would like us to think more broadly about what robots are, and what positive things they could – and should – do. They might be simple household helpers, such as robo-vacuum cleaners or lawnmowers, or your constant online companion, including the likes of Siri or Alexa.

Techno-optimist Professor Thusha Rajendran: “I think human-robot interaction can be a force for good if we get it right” Picture – supplied.

Robots might also help humans by offering reminders to take medication, being vigilant for falls in older people, or acting as receptionists.

A big factor in adopting robotic technologies is human acceptance, and how human beings interact with robots. Rajendran, a developmental psychologist, is fascinated by this area.

“Developmental psychology has specific theories that it can offer human-robot interaction, especially around understanding other people’s minds and perspectives,” he says. “When you have a certain mental model of the world or a particular point of view, how do you negotiate with somebody else?

“As children, we learn that our understanding and knowledge about the world might be very different from someone else’s. That’s an obvious framework someone like me can bring to this area, because machines and humans have very different perspectives. Humans understand another’s intentions, robots have a harder time of this.”

What is our relationship with robots? Picture – supplied.

As a psychologist, what really interests Rajendran in the potential of human-robot interaction?

“My starting point is that I’m a techno-optimist,” he says. “I think human-robot interaction can be a force for good if we get it right. It’s about aligning human and robot perspectives to a mutual goal, having meaningful negotiations and setting our expectations.

“Humans negotiate through understanding each other’s points of view, making errors and repairing those errors. Over a period of time, we build trust.” So how do you get to that point with machines?

“You look at the context of the trusting situation and how to get to the point of ‘yes, I trust this machine’. Over time, you realise you’ve given it a task, and it’s able to perform that task successfully – in the same way humans learn to trust other humans over time.”

Yet many people still baulk at the idea of communicating with robots. Why? What is it in the human psyche that still causes some to be wary?

“It’s about how we frame the roles of the robot. Is the robot a helper, an authority figure, or a companion? How you categorise them helps us frame our relations with them.

“If it’s a domestic robot, you have very clear boundaries about your expectations, like a contract with somebody who comes into your house to do a specific job. You expect them to do that job to a certain standard. By having clear roles and mutually agreed expectations, it makes our interactions much clearer – it’s the same with robots.”

A robotic vacuum cleaner or lawnmower has a clear function, yet the cultural image which persists is a robotic butler that carries out a variety of domestic tasks.

“If you think of a butler-style robot, you’re probably expecting something humanoid, rather than something to do a very specific task,” says Rajendran.

“It’s how your expectations are framed, and creating robots to fulfil our expectations so we aren’t unhappy when they don’t do that.”

The National Robotarium, based on the Heriot-Watt University campus in partnership with the University of Edinburgh, has a variety of robots – some look humanoid, others resemble a dog or a rabbit, while others are clearly machines. Rajendran stresses that purpose should lead design.

“The Spot robot has a four-legged, animal-like appearance, but there’s a rationale behind the design because it can move around, for example, in rubble and hazardous environments. There’s an interplay between how it looks, our expectations of it and its purpose. I always focus on the purpose, work back from that and say, ‘What can do the best job, regardless of how it looks?’, rather than saying, ‘We’ve got a robot that looks like a dog. What can we do with it?’”

Presumably, people react in slightly different ways to robots, so how do you design a useful robot in a healthcare setting, for example, when everybody has slightly different perceptions?

“There’s an area of psychology called individual differences,” Rajendran explains. “Some people are simply more trusting than others, so who do you design for? The ‘average person’, or the least or most trusting? I don’t really know the answer. I think you’d have to run experiments, factoring in individual differences in people’s personalities, in their propensity to trust. If you’re talking about the NHS, maybe we need a robot trusted by the largest number of people rather than everyone, and that’s good enough.”

Rajendran’s colleagues at the National Robotarium are looking at different healthcare situations – particularly involving older people – where robotics can be deployed to help people live longer, safer lives.

“My colleague Professor Lynne Baillie is looking at using social robots in older people’s homes to detect falls,” he says. “Falls are a big issue, and can often be a quality of life indicator and signal a deterioration in health.”

For example, after a fall, a social robot could be programmed to check if someone gets up. If a certain time passes with no movement, the robot can send a potentially life-saving alert.
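That check-and-alert behaviour amounts to a simple polling loop. The sketch below is purely illustrative and not the Robotarium’s actual software: the `detect_motion` and `send_alert` callables are hypothetical stand-ins for a real robot’s sensor and alerting interfaces.

```python
def respond_to_fall(detect_motion, send_alert, timeout_s=60):
    """After a fall is detected, poll for movement roughly once per
    second; if none is seen within timeout_s seconds, raise an alert."""
    for _ in range(timeout_s):
        if detect_motion():
            return "person recovered"  # they got up - no alert needed
        # on a real robot, a time.sleep(1) between checks would go here
    send_alert("no movement detected after fall")
    return "alert sent"
```

In practice the timeout and polling interval would be tuned clinically, but the structure – wait a bounded time for recovery, then escalate – is the potentially life-saving part.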

Robots can also be used to monitor older people’s daily activities. Are they getting up and going to the bathroom, turning on the cooker or opening the fridge? If there are unusual patterns of behaviour, a social robot can ask the person questions and – if appropriate – alert medical professionals.

Rajendran thinks the real value is where robots are integrated into homes, to monitor patterns of behaviour to create a baseline, then identify where something looks fundamentally different and raise a red flag.
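One simple way to implement “learn a baseline, then flag deviations” is to treat each day’s activity count as a data point and red-flag days that fall far from the historical norm. This is a minimal sketch under invented thresholds, not a description of any deployed system:

```python
from statistics import mean, stdev

def build_baseline(history):
    """history: daily activity counts (e.g. kettle uses per day).
    Returns (mean, standard deviation) as the learned baseline."""
    return mean(history), stdev(history)

def is_unusual(today, baseline, z_threshold=2.0):
    """Red-flag a day more than z_threshold standard deviations
    away from the baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold
```

A real home-monitoring system would combine many such signals (bathroom visits, fridge openings, cooker use) and ask the person questions before escalating, but the baseline-then-deviation idea is the same.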

Another exciting area of research is “meet and greet” robots in busy healthcare settings. Robots can welcome and check in patients, give information and reassurance, and keep them occupied while they wait for their appointment.

“In terms of triaging, providing basic information to patients and freeing up time for other people to do other things, I think this is positive,” Rajendran says, while recognising that there are big questions about what a robot can and can’t – or should and shouldn’t – do.

“Sometimes we might want that human connection, not a robot,” he says. “But to take the healthcare receptionist example, it comes back to specific roles and expectations. If you know you will get accurate information and reassurance from a robot that can work efficiently 24/7, I can only see benefits in that.

“It’s up to policy-makers, designers and the general public to find out where the fault lines are. What specific roles do we want robots to perform? Just because they could fulfil a certain role doesn’t necessarily mean they should.”

Rajendran is excited by these big conversations about the future of robotics. “There is such huge potential,” he says. “The boundaries are there for us to expand – it’s a very new area, so we’re working things out as we go along, fusing together aspects of psychology, engineering and computer science.”

And Rajendran is optimistic as well as excited about the future of robotics and AI.

“I’m not so one-sided to think there are no risks, but it’s about how we mitigate against risk like humanity has done with any new technology. What probably scares some people are the ‘unknown unknowns’, but provided we ask the right questions, and are able to differentiate between could and couldn’t, should and shouldn’t, I think we’ll be on the right track in using robots and AI to make our lives better.”

Professor Thusha Rajendran will be a participant on the SaferFutures panel at the DDI/Scotsman Conference on Wednesday, 27 September at the Royal College of Physicians of Edinburgh. For tickets, go to the Scotsman Data Conference 2023 website.
