The device could lead to a new generation of prosthetic limbs, giving the wearer the ability to reach for objects without thinking, researchers said.
A camera fitted to the hand rapidly takes a picture of the object in front of it and feeds the information to an electronic “brain”. The computer then assesses the object’s shape and size and “within milliseconds” triggers the correct movements needed to pick it up, whether that requires a light pinch or a firm grip.
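The loop the article describes (capture an image, classify it, trigger the matching grip) can be sketched in a few lines. This is a minimal illustration, not the team’s actual code: the function names, the stubbed camera and classifier, and the grip commands are all placeholders standing in for the real hardware and trained model.

```python
# Sketch of the sense -> classify -> actuate loop described above.
# capture_frame() and classify_grasp() are stubs standing in for the
# hand-mounted camera and the trained vision model.

def capture_frame():
    # Stand-in for the camera: returns a dummy 8x8 image.
    return [[0] * 8 for _ in range(8)]

def classify_grasp(image):
    # Stand-in for the trained model: a real system would run a
    # neural network here and return one of the learned grasp classes.
    return "pinch"

def select_grip(grasp):
    # Map the predicted grasp class to an actuation command.
    commands = {
        "pinch": "close thumb + index finger",
        "tripod": "close thumb + two fingers",
        "palm_neutral": "wrap palm, wrist neutral",
        "palm_pronated": "wrap palm, wrist pronated",
    }
    return commands[grasp]

frame = capture_frame()
grip = select_grip(classify_grasp(frame))
```

The key point the researchers make is that this whole chain runs automatically, “within milliseconds”, with no conscious effort from the wearer.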
A small number of amputees have already trialled the technology, which has been developed by a team at Newcastle University.
Dr Kianoush Nazarpour, a senior lecturer in biomedical engineering at the university, said: “Prosthetic limbs have changed very little in the past 100 years … they still work in the same way.
“Using computer vision, we have developed a bionic hand which can respond automatically. Just like a real hand, the user can reach out and pick up a cup or a biscuit with nothing more than a quick glance in the right direction.
“Responsiveness has been one of the main barriers to artificial limbs. For many amputees the reference point is their healthy arm or leg so prosthetics seem slow and cumbersome in comparison.
“Now, for the first time in a century, we have developed an ‘intuitive’ hand that can react without thinking.”
Every year in the UK around 600 people lose an upper limb, half of them aged between 15 and 54. In the US, an estimated 500,000 people live with upper-limb loss.
Current prosthetic hands are controlled by myoelectric signals – muscular electrical activity recorded from the skin surface of the stump.
Learning to operate these takes practice, concentration and time, Dr Nazarpour said.
Developing the new system, based on artificial intelligence, involved teaching a computer how to recognise the grip needed for different objects.
Lead researcher, PhD student Ghazal Ghazaei, said: “We would show the computer a picture of, for example, a stick. But not just one picture – many images of the same stick from different angles and orientations, even in different light and against different backgrounds. Eventually the computer learns what grasp it needs to pick that stick up.
“So the computer isn’t just matching an image, it’s learning to recognise objects and group them according to the grasp type the hand has to perform to pick them up successfully.
“It is this which enables it to accurately assess and pick up an object which it has never seen before.”
The team, whose work is reported in the Journal of Neural Engineering, programmed the hand to perform four different “grasps”: one for picking up a cup, one for holding a TV controller, one gripping with the thumb and two fingers, and one pinching with the thumb and first finger.
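The approach Ghazaei describes, grouping objects by the grasp needed rather than by identity, so a never-before-seen object still gets a sensible grip, can be illustrated with a toy classifier. Everything here is an assumption for illustration: the two-number “features”, the training examples, and the nearest-neighbour rule are stand-ins for the deep network and the many photographed examples the team actually used; the grasp labels are shorthand for the four grips named above.

```python
import math

# Toy training set: (feature vector, grasp label). Each feature vector is an
# invented (width, height) summary; the real system learns features from
# many photos of each object at different angles and lightings.
TRAIN = [
    ((0.9, 0.9), "palm_neutral"),   # cup-like: wide and tall
    ((0.9, 0.2), "palm_pronated"),  # controller-like: wide and flat
    ((0.4, 0.4), "tripod"),         # mid-sized object
    ((0.1, 0.1), "pinch"),          # small object
]

def predict_grasp(features):
    # 1-nearest-neighbour: pick the grasp of the closest training example.
    # An unseen object is handled by falling into whichever grasp group
    # its features most resemble.
    return min(TRAIN, key=lambda ex: math.dist(ex[0], features))[1]

# A small, flat object the classifier has never seen:
print(predict_grasp((0.15, 0.12)))  # -> "pinch"
```

The design choice this mimics is the one in the quote: the classes are grasp types, not object names, so “accurately assess and pick up an object which it has never seen before” reduces to assigning it to the nearest grasp group.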