Iain Mitchell: The use of AI gives rise to huge potential legal issues

Maybe I can find a programmer to blame if software causes me harm, but how do I deal with the mysterious black box, asks Iain Mitchell

In October 2017, Saudi Arabia conferred citizenship on Sophia. The puzzling thing is that Sophia is a robot, even more human-looking than the archetypal female robot in Fritz Lang’s expressionist masterpiece, Metropolis. So, is Saudi Arabia leading the world in giving legal recognition to cyber life forms, or is it a triumph of hype over reality?

Sophia was developed and built by the Hong Kong-based company Hanson Robotics, headed by David Hanson, a former Disney “Imagineer” who created performing mannequins such as the singing simulacrum of Jack Sparrow in Pirates of the Caribbean. With that pedigree, you might expect her to look, and even sound, convincing. According to Hanson, she uses artificial intelligence, visual data processing and facial recognition technology to answer questions, make conversation and imitate human gestures and facial expressions. So, does she have artificial intelligence?


It’s a very slippery and imprecise expression, “Artificial Intelligence”. According to the Oxford English Dictionary, it isn’t even a thing, but, rather, a field of study.

Yet, everywhere we look, people are using “AI” to describe, well, what? If you are in the business of selling software for running an office, then AI is an on-trend way of making the stuff you are trying to sell sound really hi-tech. Ironically, even when applied to rather humdrum products, the description is not inaccurate: at its widest, AI can be used to describe the running of any computer program.

At the heart of every computer process, whether a program running on a general-purpose computer or the functioning of a narrowly specialised device such as an automatic door-opening sensor, lies an algorithm: a set of rules to be followed in calculations or other problem-solving operations. Run an algorithm on a computer and, on that widest view, that’s AI.
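To make that concrete, here is a deliberately simple sketch, written in Python purely for illustration (it is my own invention, not code from any real door controller): the whole “intelligence” of an automatic door reduced to one explicit, inspectable rule.

    # Illustrative only: the "algorithm" behind an automatic door,
    # written out as a single rule that anyone can read and check.
    def should_open_door(distance_in_metres):
        # Open whenever somebody is detected within one metre of the sensor.
        return distance_in_metres < 1.0

    print(should_open_door(0.4))   # True: someone is approaching, open the door
    print(should_open_door(3.0))   # False: nobody nearby, keep the door shut

Every step of that “decision” is visible on the page, which is precisely what the black boxes discussed below lack.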

However, when most people say “AI”, they are thinking either of complex algorithms capable, for example, of sifting job applications, or of what are called “neural networks”, which are closer to what we think of as robots. Essentially, neural networks are systems educated by means of a dataset, as with the Go-playing program AlphaGo, or which even educate themselves by figuring out all the combinations from the basic rules of the game, as with AlphaGo Zero. The problem with such neural networks is that, unlike a conventional program where the code can be analysed and understood, neural networks are black boxes: even the people who create them cannot determine how they make their decisions.
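By way of contrast, here is an equally toy sketch, again my own illustration in Python rather than anything taken from AlphaGo or any real product, of a single artificial “neuron” being educated against a tiny dataset. After its training, everything it “knows” is contained in three tuned numbers.

    # Illustrative only: one "neuron" learning a simple pattern from examples.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w1, w2, bias = 0.0, 0.0, 0.0        # the weights start out meaningless
    for _ in range(1000):               # repeated exposure to the dataset
        for (x1, x2), target in examples:
            prediction = w1 * x1 + w2 * x2 + bias
            error = target - prediction
            w1 += 0.1 * error * x1      # nudge the numbers towards the answer
            w2 += 0.1 * error * x2
            bias += 0.1 * error
    print(w1, w2, bias)                 # the trained "model": just three numbers

There is no line of that program a programmer could point to and say “this is why it answered as it did”; the behaviour lives in the numbers. Scale three numbers up to many millions, as real neural networks do, and the explanatory gap only widens.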

The all-pervasive use of AI in general, the particular opacity of neural networks, and the risk of flaws and biases in the datasets used to train such networks give rise to huge potential legal issues. What happens if an AI system goes wrong? What do we mean by “wrong” anyway? What if medical diagnosis software fails to pick up that I am suffering the early stages of cancer? What if I am discriminated against by an AI system when I apply for a job? Can AI be used to sift Big Data and end up infringing my human rights?

Maybe I can find a programmer to blame if conventional software causes me damage, but how do I deal with the mysterious black box?

These are the sorts of problems which lawyers are only now beginning to address. The Law Commissions for England and Wales and for Scotland are in the middle of a joint consultation on the legal issues arising from the use of self-driving vehicles. The European Commission has a high-level expert group examining the legal implications of AI, and is considering reform of product liability law to take account of AI systems. The IT industry, too, is increasingly concerned with the ethical issues AI raises.

Closer to home, the Faculty of Advocates, along with the Association of European Lawyers, the Scottish Society for Computers and Law, Edinburgh University’s SCRIPT Centre and the British Computer Society, is hosting a conference, AI beyond the Hype, in the Faculty’s Mackenzie Building on 31 May, examining the legal and ethical implications of AI.


It will be interesting to see what the experts make of it all. One thing they will not need to worry about, though, is Sophia: although she appears to be the self-aware robot of popular imagination, it is only an illusion. We haven’t invented self-aware robots, at least not yet…

Iain Mitchell, QC, is a member of the Faculty of Advocates and Chair of the Scottish Society for Computers and Law