Driverless cars could decide who gets hit in crashes, lawyers warn

Driverless cars – here being tested in Singapore – could be programmed to make "moral" decisions over who to hit in an accident. Picture: Getty

Driverless cars could be programmed to make “moral” decisions about who gets hit in a collision, lawyers have warned.

With autonomous vehicles expected to be on the roads within two years, legal experts have raised questions about the technology behind them.

And some have suggested that an “ethical system” could become as routine as choosing a paint colour.

Cars in the future may be programmed to weigh up the value of individual lives, according to a report from the Faculty of Advocates.

The submission was made as part of a consultation by the Scottish Law Commission and the Law Commission of England and Wales, which are conducting a three-year review of the laws governing self-driving cars.

One theoretical scenario asks whether a car faced with an unavoidable collision should hit Albert Einstein or a group of criminals.

The report said: “Persons generally are entitled to expect that a self-driving vehicle will not collide with and injure them. However, in reality, the situation is much more nuanced.”

The Faculty of Advocates envisages a future in which owners may even be able to choose the “morality” of their car.

The report added: “The purchaser might be able to specify the ethical system with which the car is programmed… as well as specifying the paint colour and interior trim.”

The legal implications of driverless technologies are considerable, the report claims.

While such vehicles could be governed by automated programmes based on predictable algorithms, artificial intelligence experts are also developing neural networks: systems which make their own decisions.

Experts believe that new offences are likely to be necessary to cover the systems set up by companies to control driverless vehicles, and to hold them to account in the case of errors, malfunctions and accidents.

The Faculty of Advocates said it was feasible that judgements could one day be made about the value of individual lives.

The "trolley problem" asks whether a person at the controls of a runaway trolley-car heading for five people on the tracks should pull a lever to divert it onto a track where only one person will be hit.

An autonomous car might not have enough information to make a choice between a scientist and criminals. But the report said that could change in a country such as China, where the government is establishing a “social credit” score for its citizens.

It added: “We cannot conceive of any circumstances whatever where such a system could be regarded as acceptable in a free, open and democratic society.”