Artificial Intelligence is now prominent in many industries, but AI in recruitment remains a work in progress and encounters some of the same issues as traditional recruiting methods.
Is AI in recruitment the answer to removing bias?
Despite guidelines and practices designed to identify and neutralise discrimination, and the legislation that supports them, substantial bias remains in traditional recruitment methodology. The interaction between recruiter, candidate, and client is coloured by the attitudes and perceptions of each, and is inevitably associated with conscious or unconscious prejudice.
In one study, researchers from the University of Oxford submitted over 3,000 fictional applications for both manual and non-manual jobs advertised on a popular recruitment platform. Identical CVs and cover letters were used for white British and British-born ethnic minority applicants. The research revealed that “ethnic minorities had to submit 80 per cent more applications to get a positive response.”
On average, 24 per cent of white British applicants received a positive response from employers, compared with 15 per cent of minority ethnic applicants. In another study, researchers created realistic but fictitious CVs and submitted them for more than 13,000 relatively low-skilled positions in geographically diverse locations. The applications were grouped into three age categories. Callback rates were lower for older applicants and lower for women.
What is the future of AI in recruitment?
Artificial Intelligence holds the tantalising prospect of neutralising such bias. HireVue, one of the largest AI recruitment companies, made the following statement: “Which should we prefer? The world where hiring is influenced by humans with an unclear definition of job success asking inconsistent questions, evaluating on unknown criteria, or a data driven method that is fairer, consistent, improvable, and inclusive?”
Recent experience, however, does not support the idea that AI in recruitment is necessarily fair. Dr Lauren Rhue of Wake Forest University compared two AI facial recognition systems, one from Microsoft and one from the Chinese tech company Face++. Face++ rated black faces twice as “angry” as white faces, and Microsoft’s system rated black faces three times as “contemptuous” as white faces. Amazon dropped its AI assessment of CVs in 2018 when the software was found to discriminate against women.
How might these problems arise? The attributes perceived as desirable in an applicant may contain innate bias, which the software then reproduces. These attributes may reflect the culture of the hiring organisation; Microsoft, Facebook, Google, and Apple all have substantially more male than female employees, a difference that is even more marked in technical roles. The data on which an algorithm is trained is another likely source of error. For example, the Affectiva dataset of human emotions was based on Super Bowl viewers, and is unlikely to be widely representative.
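To make the mechanism concrete, here is a minimal sketch (entirely synthetic data, not any vendor’s actual system) of how a model trained on biased historical hiring decisions reproduces that bias, even when the protected attribute itself is withheld, because an innocuous-looking CV feature acts as a proxy for it:

```python
# Illustrative sketch only: synthetic data, not a real recruitment system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: both groups have identical skill distributions.
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)
proxy = group + rng.normal(0, 0.3, n)    # a CV feature correlated with group

# Historical hiring labels encode human bias: group B was penalised
# regardless of skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train only on skill and the proxy; 'group' is excluded, but leaks anyway.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The model recommends group B far less often despite identical skill:
# it has learned the historical bias through the proxy feature.
```

The point of the sketch is that simply deleting the sensitive attribute does not remove the bias; the model recovers it from correlated features.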
Machine learning cannot by itself detect bias, and much of that bias likely derives from the quality of the data it is given. The problem is compounded by the difficulty of explaining how an algorithm reached a particular outcome.
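Because the model will not flag its own bias, the detection has to happen outside it. One basic external check, assuming an organisation can pair each screening decision with the applicant’s demographic group, is a selection-rate comparison along the lines of the “four-fifths rule” used by US regulators. A minimal sketch:

```python
# Minimal disparate-impact check (four-fifths rule sketch; assumes each
# screening decision can be paired with the applicant's group).
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected_bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratio(decisions):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Ratio of each group's rate to the most-favoured group's rate;
    # values below 0.8 are a conventional red flag, not proof of bias.
    return {g: r / best for g, r in rates.items()}

# Toy example echoing the Oxford figures: 24% vs 15% positive responses.
toy = [("white British", i < 24) for i in range(100)] + \
      [("ethnic minority", i < 15) for i in range(100)]
print(impact_ratio(toy))   # ethnic minority ratio = 0.625, below 0.8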
Be aware of the ethics
AI in recruitment permits very rapid processing of job applications, so any flaw in it will affect large numbers of people. No legislation protects the individual against the adverse consequences of AI, and any such legislation risks becoming obsolete before it is implemented. Many of the organisations producing AI recruitment software are large and influential, with sufficient legal firepower to meet any challenge to the ethics of their product.
AI is an evolving tool, and future iterations of the software, together with improvements in the quality of the data on which it is trained, are likely to reduce the biases identified in current systems. At present, however, concerns remain that AI may be subject to flaws similar to those encountered in traditional recruiting methods, that these may prove difficult to identify and rectify, and that there is no legal protection for the individuals they affect.