Ishbel MacPherson and Linzi Penman: Computer says no – does AI reject good candidates?
Let’s start with the basics: what is artificial intelligence or AI? People use the term AI to mean different things. At its most basic, AI can be the automation of a simple decision tree or a set of rules and instructions designed to deliver a pre-defined outcome.
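As a rough sketch, this most basic form is just hand-written rules applied mechanically. The candidate fields and thresholds below are invented for illustration only:

```python
# A minimal sketch of rules-based screening: a fixed decision tree
# that applies pre-defined criteria to reach a pre-defined outcome.
# The fields and thresholds are hypothetical examples.

def meets_criteria(candidate):
    """Apply a fixed set of rules; no learning is involved."""
    if candidate["years_experience"] < 3:
        return False
    if not candidate["has_required_certification"]:
        return False
    return True

print(meets_criteria({"years_experience": 5, "has_required_certification": True}))  # True
print(meets_criteria({"years_experience": 1, "has_required_certification": True}))  # False
```

Because every rule is written out explicitly, the person deploying the tool can see exactly why a candidate was accepted or rejected.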
More sophisticated AI with machine-learning capabilities can learn from data to identify patterns or key indicators, and then apply those lessons to achieve a particular goal.
How does this work in the context of recruitment? High volumes of historical data about behaviour, credentials and personality types are given as inputs – for example, CVs, job specs, test results, and which candidates were hired. Rules are set to define desired attributes.
The computer applies algorithms to this large-scale collection of data and learns the patterns or indicators of previously successful candidates. New candidates’ data is then fed to the AI, which uses the lessons it has learned to interpret an individual’s profile and predict the ideal candidates. The computer learns to pick the good eggs from the bad in seconds.
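The steps above can be sketched with a deliberately naive pattern-learner. Real tools use far richer models; the features, records and scoring rule here are invented purely to illustrate the train-then-predict loop:

```python
# Toy illustration of learning patterns from historical hiring outcomes.
# Each record is (set of CV features, was the candidate hired?).
from collections import defaultdict

def train(history):
    """Learn, per feature, the fraction of past candidates
    with that feature who were hired."""
    seen = defaultdict(int)
    hired = defaultdict(int)
    for features, was_hired in history:
        for f in features:
            seen[f] += 1
            if was_hired:
                hired[f] += 1
    return {f: hired[f] / seen[f] for f in seen}

def score(rates, features):
    """Score a new candidate as the average historical hire
    rate of their features."""
    known = [rates[f] for f in features if f in rates]
    return sum(known) / len(known) if known else 0.0

# Invented historical data for illustration.
history = [
    ({"python", "finance"}, True),
    ({"python", "retail"}, True),
    ({"java", "finance"}, False),
    ({"java", "retail"}, False),
]
rates = train(history)
print(score(rates, {"python", "finance"}))  # 0.75
```

Note that the model never asks *why* past candidates succeeded; it simply reproduces whatever correlations the historical decisions contain, which is exactly where the risks discussed next come from.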
But how can you guarantee that the computer is actually picking the good eggs? One risk is that the dataset used to train AI may include biases that reflect historical hiring decisions.
This can lead to unexpected results. For example, some AI recruitment tools have ended up favouring male applicants. Fed CVs that were predominantly from men due to historical hiring patterns, these tools effectively taught themselves that male candidates were preferable and then began to disregard applications that the tool could identify as being from women (e.g. a reference to a women’s sports team or a female name).
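A self-contained toy example makes the mechanism concrete. Gender is never a feature here, but a hypothetical CV phrase correlated with it inherits the historical pattern anyway (all data is invented for illustration):

```python
# Toy demonstration of proxy bias: gender itself is not an input,
# yet a feature correlated with it (a "womens_rowing_team" CV phrase)
# absorbs the historical hiring pattern. Data is invented.

history = [
    ({"stem_degree", "rowing_team"}, True),
    ({"stem_degree", "rowing_team"}, True),
    ({"stem_degree", "womens_rowing_team"}, False),
    ({"stem_degree", "womens_rowing_team"}, False),
]

def hire_rate(feature):
    """Fraction of past candidates with this feature who were hired."""
    outcomes = [hired for feats, hired in history if feature in feats]
    return sum(outcomes) / len(outcomes)

print(hire_rate("rowing_team"))         # 1.0
print(hire_rate("womens_rowing_team"))  # 0.0
```

Stripping the obvious attribute out of the data does not help if a proxy for it remains, which is why the studies mentioned below found the same effect with minority candidates.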
Other AI recruitment tools have prioritised applications that used language more commonly found in male CVs.
A similar effect has been reported in studies relating to the hiring of minority candidates. Even when name and gender attributes have been stripped out, AI tools have extrapolated from previous recruitment decisions to favour a similar candidate pool.
This particularly occurs where the AI recruitment tool is fed input data focusing on background and qualifications (such as private education and specific universities) as opposed to talent characteristics (such as skills, teamwork, leadership or work ethic).
The lesson is that an AI can be very effective at making connections and using them to identify characteristics which humans believe have been removed from the data. Paradoxically, far from being neutral, your AI recruitment tool can entrench and amplify the bias in recruitment.
So is AI in recruitment a bad idea? No. There are simple automated AI tools which can reduce the time spent sifting CVs without these drastic consequences. With simpler tools, you retain control over the rules and goals you set.
If you do want to make use of the capabilities of machine-learning AI recruitment tools, speak to the suppliers and ask them: How has their AI been trained? What datasets have been used? What steps have been taken to ensure that those datasets do not contain bias? What tests has the supplier carried out on the results generated by the AI to check whether they are inadvertently biased? What human checks and balances do they include, or recommend that you include, to monitor the results the AI generates?
AI offers great potential for reducing the time and effort required for recruitment, but proceed with care!
Ishbel MacPherson and Linzi Penman are senior members of DLA Piper’s intellectual property and technology practice, based in Scotland.