From the perspective of legal complaints, my initial reaction was a fairly confident one of “of course not”. The complaints we see are often about complex legal disputes, and seldom are two complaints the same. How could such issues be dealt with through AI? In the words of an underrated Pet Shop Boys single, ‘it couldn’t happen here’. Could it?
In 2017, McKinsey produced a report which found that 50 per cent of current work activities are technically automatable, and in 60 per cent of current occupations a third of tasks are automatable. That’s quite a sobering thought.
Looking closer to home I found examples of sophisticated voice recognition systems being used to deal with the initial stages of customer complaints, and professional investigators using AI to scan through vast amounts of multi-formatted evidence. I was already beginning to feel a little less confident.
A chat with a developer of complaint-related AI introduced me to recent developments in which the language and vocabulary used by online complainers are analysed to triage and escalate complaints. Developers are also currently looking into the speed at which people type, and the force of their keystrokes, as indicators of the level of dissatisfaction underlying a complaint.
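For readers curious what such language-based triage might look like at its very simplest, here is a deliberately toy sketch. The keyword lists, thresholds, and priority bands are my own invention for illustration; the systems the developer described use far more sophisticated language analysis than this.

```python
# Toy illustration of vocabulary-based complaint triage.
# The terms and thresholds below are invented for illustration only.

URGENT_TERMS = {"fraud", "deadline", "urgent", "misconduct"}
NEGATIVE_TERMS = {"appalling", "ignored", "unacceptable", "furious"}

def triage(complaint_text: str) -> str:
    """Assign a toy priority band based on the vocabulary in a complaint."""
    words = {w.strip(".,!?").lower() for w in complaint_text.split()}
    if words & URGENT_TERMS:
        return "escalate"
    if len(words & NEGATIVE_TERMS) >= 2:
        return "high"
    return "standard"

print(triage("This is urgent, I believe there has been fraud"))   # escalate
print(triage("Appalling service, my emails were simply ignored"))  # high
print(triage("Please send me a copy of my file"))                  # standard
```

Even this crude version hints at the appeal: incoming complaints can be sorted before a human ever reads them, with the obvious caveat that the choice of keywords bakes in exactly the kind of bias discussed below.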
And it is easy to understand how these developments can be incredibly helpful to complaint handling bodies. They can deal quickly with many of the more mundane aspects of the job, saving staff time and reducing costs. For relatively simple issues, or where there are multiple similar complaints, AI can also ensure greater consistency in outcomes.
However, just because something is possible, does that make it right? While algorithms can in theory be designed to automate increasingly complex processes, such as detailed complaint handling, what happens when those algorithms fail? How does the robot identify and correct inevitable in-built bias? How does the robot identify misuse and/or malicious repurposing?
But perhaps more fundamentally, is it what consumers want?
We express dissatisfaction emotively and in an unstructured way which is less adaptable to binary interpretation. It also shouldn’t be underplayed that, when making a complaint, quite often one of the key motivations will simply be the desire to be listened to, and to be understood.
From a complaint handling perspective, in dealing with legal complaints the Scottish Legal Complaints Commission (SLCC) suggests to lawyers that best practice is to engage with the complainer, to recognise the underlying emotions, and above all to empathise with the complainer. The ‘computer says no’ scenario seems a far cry from that approach.
So do I think robots will one day take over the SLCC? I still think not, or at least not entirely. The issues we deal with require human understanding and empathy, and it’s difficult to imagine how AI can meet those needs.
As I’ve found, however, there are aspects of AI which, if properly utilised, can speed up the more mundane parts of complaint handling. Where complaint numbers are rising in many sectors, including the legal sector in Scotland, there is a good argument for exploring some of these options, and it may well be that the imminent report on the review of regulation in the legal sector in Scotland will point in that direction.
David Buchanan-Cook is head of oversight and communications at the Scottish Legal Complaints Commission.
Oversight is the area of the SLCC responsible for overseeing how the professional bodies deal with conduct complaints and for investigating complaints made about the SLCC’s service; monitoring and reporting trends in complaints; and producing guidance and best practice notes on complaint handling.