Robots have been around for years. In the 1980s we imagined they would be a big part of our daily lives 30 or 40 years on, and in reality they are.
As human companions, carers and helpers, robots may soon drive your car, assist in surgical procedures and deliver parcels by drone. They are here to stay.
There is no doubt that these developments make our lives easier, safer and more comfortable, but they also spark concerns about what happens when robotic technology fails – resulting in economic loss, property damage, injury or even loss of life.
Just recently, a factory worker at a major motor manufacturer died when a robot crushed him against a metal plate. The accident may have been due to human error, but investigations continue. Robots have been used for decades, especially in manufacturing, where production lines utilise them for efficiency and employee safety. What remains to be established is how legal liability operates in a world increasingly reliant on robotics and artificial intelligence (AI): who is liable if an error in the coding, or a failure of the technology, causes damage and injury?
On the production line, the risk of personal injury and even death cannot be discounted should a robot operating close to operatives react in an unexpected way. Alongside the risk to humans, the interruption to the business could also be extensive – and expensive. Most firms will of course hold insurance for such eventualities, but if the robot's breakdown was caused by a fault in its design or construction, those who designed and built it could be liable to meet the cost of any claims made.
Driverless vehicles are coming closer to being a reality, but even before they arrive, features such as autonomous emergency braking, adaptive cruise control, lane-keeping assist and automatic parking are already helping drivers prevent road traffic accidents. These features present particular complexities when it comes to legal liability. In the case of a fully autonomous vehicle capable of driving unaided, the manufacturer may well be liable if an accident occurs.
Robot-assisted surgery also brings with it many practical benefits, allowing procedures to be performed more precisely and reducing the stress and fatigue experienced by surgical teams. But growing fears about the risks of robotic surgery have recently been confirmed by a US study that linked at least 144 deaths and more than 1,000 injuries over a 14-year period to such surgery.
While manufacturers usually provide product and technical training, the question remains whether this training is sufficient to prepare doctors to perform surgery with the assistance of a robot. If something goes wrong, a claim could lie against the programmer, the manufacturer, the hospital, the health professional or a combination thereof. This uncertainty could manifest itself in lengthy and costly court battles in which liability is batted between the various parties.
The nature of the risks associated with robotics and AI is nothing new, but they are likely to present in novel ways. Those seeking to underwrite such risks need to fully comprehend the sheer size and scale at which they could manifest.
Detailed risk assessments could well become commonplace with policyholders being asked to demonstrate that they have adequate governance and controls surrounding their use of robotic technology. Commercial policyholders may be asked to provide training records evidencing that employees coming into contact with robots on both a daily and a casual basis are sufficiently trained and protected.
An exciting and bright future lies ahead, but if we are to avoid sci-fi-style disaster scenarios and the costly claims they could bring, we should not rush to embrace new technologies to the detriment of adequate risk management.
Douglas Keir is a partner specialising in liability and injury claims and heads up the Scottish insurance team at Weightmans (Scotland) LLP