Tuesday, September 23, 2025

NEWS

AI in hiring outperforms humans in fairness

A report from AI auditing platform Warden AI, ‘The State of AI Bias in Talent Acquisition 2025’, suggests that AI models may reduce, not increase, bias in hiring. This is despite concerns about accountability and fairness continuing to worry business leaders and employees alike: three-quarters (75 per cent) of HR leaders cite bias as a top concern when evaluating AI tools, second only to data privacy.

The report, which analysed ‘high-risk’ AI systems used in talent acquisition and intelligence platforms, reveals that AI may help organisations make fairer decisions than human decision-makers. Most of the AI models audited were found to deliver outcomes that were fairer and more consistent across the demographic groups tested.

The findings challenge the ongoing public assumption that AI is inherently less fair and run counter to academic research showing a rising incidence of bias in publicly available AI models. Warden AI’s audits evaluate how AI models perform across different demographic groups, such as male and female candidates. They assess both the overall outcomes and the model’s sensitivity to demographic attributes. In the first case, models are tested to see whether different groups receive similar outcomes, with 85 per cent meeting industry fairness thresholds. In the second, models are tested to see whether changing attributes linked to demographics (such as names) affects the result, with 95 per cent meeting high standards of consistency across groups.
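
The report does not publish the underlying audit code, but the two kinds of check it describes are straightforward to express. The Python sketch below is an illustration only: the function names, the toy data and the use of a selection-rate ratio are assumptions made for this article, not Warden AI’s published methodology. The first function compares selection rates across groups (the outcome check); the second re-scores the same candidate after swapping a demographic-linked attribute such as a name (the sensitivity check).

```python
# Illustrative sketch only. The report does not publish Warden AI's code;
# the function names, toy data and selection-rate ratio are assumptions.
from collections import defaultdict


def impact_ratio(decisions):
    """Outcome check: compare selection rates across demographic groups.

    `decisions` is a list of (group, selected) pairs, e.g. ("female", True).
    Returns the lowest group selection rate divided by the highest; values
    near 1.0 indicate similar outcomes across groups.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())


def counterfactual_consistency(model, candidates, swap_attribute):
    """Sensitivity check: re-score each candidate after changing an
    attribute linked to demographics (such as a name) and report the
    share of decisions that stay the same."""
    unchanged = sum(
        int(model(c) == model(swap_attribute(c))) for c in candidates
    )
    return unchanged / len(candidates)


# Toy usage of the outcome check:
decisions = [
    ("female", True), ("female", False),
    ("male", True), ("male", True),
]
print(impact_ratio(decisions))  # 0.5 here, well below a parity score of 1.0
```

In this hypothetical scoring, a result close to 1.0 would indicate similar outcomes across groups, while a low ratio or a low share of unchanged decisions would flag the kind of disparity the audits are designed to surface.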


Not all systems performed equally well: 15 per cent of AI tools failed to meet fairness thresholds for all demographic groups, with performance varying by as much as 40 per cent between vendors, underscoring the importance of carefully selecting responsible vendors to partner with. Nonetheless, the data suggests AI outperforms humans on fairness metrics, with an average fairness score of 0.94, compared to 0.67 for human-led hiring.

Notably, the data suggests female candidates experience up to 39 per cent fairer treatment when AI is involved compared to humans. For racial minority candidates, that figure is even higher at up to 45 per cent.

Still, HR and talent acquisition leaders remain cautious in their AI buying decisions. Only 11 per cent of HR buyers report ignoring AI risk when assessing vendors, while 46 per cent say a vendor’s clear commitment to Responsible AI is a critical factor in the procurement process.

“Business leaders and the public rightfully are concerned about AI bias and its impacts,” said Jeffrey Pole, CEO and co-founder of Warden AI. “But this fear is causing us to lose sight of how flawed human decision-making can be and its potential ramifications for equity and equality in the workplace. As our research shows, AI isn’t automatically a better or worse solution for talent acquisition. This is a wake-up call to HR and business leaders: when used responsibly, AI doesn’t just avoid introducing bias, it can actually help counter inequalities that have long existed in the workplace.”

Kyle Lagunas, Founder and Principal at Kyle & Co., added: “After a decade advising HR and Talent leaders on how to adopt technology responsibly, I’ve seen excitement around AI quickly give way to concern, especially around bias and fairness. But now is the time to lean in—and find real answers to the real risks we face.

“This report brings a number of interesting points together to crystallise this critical conversation. As the findings highlight, while AI bias is real, it is also measurable, manageable, and, thankfully, mitigatable.”

Read the full report here.

Newsdesk
The Global Recruiter Newsdesk bringing you balanced journalism, accuracy, news and features for all involved in the business of recruitment from around the world

