
Recruitment AIs May Not Be Sexist or Racist But ARE Biased Against Mums Taking Time Out


Recruitment AIs are in the news again! This time, it’s about applicants with kids. Parents may pay a “parent penalty” when AI is used to scan resumes. And because it is mostly women who take time off their careers to raise kids, this is being called a “mum penalty”.

This discovery is thanks to boffins at NYU Tandon School of Engineering, who say they have found bias in mainstream large language models (LLMs) when it comes to screening potential employees. The models tested were ChatGPT, Bard, and Claude.

The researchers, Siddharth Garg, Associate Professor of Computer Engineering, and PhD candidate Akshaj Kumar Veldanda, will present their findings today at an industry event.

Their conclusion is that “Maternity-related employment gaps may cause job candidates to be unfairly screened out of positions for which they are otherwise qualified.”

Recruitment AIs and bias

Joanne Martin is the owner of Home Harmony International, a bespoke luxury recruitment agency based on the Gold Coast that places household staff in private residences across Australia. 

Martin feels this kind of bias would rule out the very people who would be prime candidates for many of the roles she is engaged to fill.

“This is why I haven’t employed AI yet to screen resumes, although I have considered it. My reluctance is that working in the private industry is all about relationships and trust, and I get to know my clients, and that’s something that AI can’t do,” Martin said.

Martin is worried that, in light of this study, an AI might screen out the very experience she says is an advantage in her industry. For example, she says parenting experience is a plus for house manager and nanny roles.

“Certainly, we would look at somebody that had parenting skills, and being out of the workforce and having a family whether it be male or female, can, in fact, be a plus. In the private household industry, that would be valued. I would be worried that an AI would rule out some of those things which would be helpful to roles like house management, or even chef roles,” she said.

“This is why I haven’t gone down the AI track yet. It’s not only getting the people that can tick all the boxes. It’s actually getting that right fit for the client as well, and that’s a very personal thing.”

Recruitment AIs and incoming laws

Over in the U.S., the federal government has issued an AI executive order that aims to prevent bias when employers use AIs in their recruitment process.

New York City has even enacted a law addressing potential bias when AI is used in hiring.

Associate Professor Garg said, “Our study unearths some of the very biases that the New York City law intends to prohibit.”

In Australia, we don’t yet have laws covering the use of AI in recruitment.

Recruitment AIs: an interview by bot might not go as planned.

How the research was done

The researchers injected extra signals into resumes: the candidate’s race or gender, an employment gap taken to care for children, a political affiliation, or a pregnancy.

They found the LLMs ignored race and gender when matching resumes to job categories, which is good news.

But the LLMs did react to the other signals: a parenting-related gap in work history, a political affiliation, or a pregnancy all made the models more likely to flag a resume as unsuitable.

The researchers also found that some LLMs were fairer than others: Claude was deemed the most biased, and Bard the least.
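
To make the method concrete, here’s a minimal sketch in Python of the kind of paired-perturbation audit described above. Everything in it is a hypothetical stand-in: the resume text, the wording of the injected attributes, and the `llm_screen` callable are placeholders, not the researchers’ actual prompts or code.

```python
from typing import Callable

BASE_RESUME = """Jane Doe
Software engineer with eight years of backend experience.
2015-2020: Senior Engineer, Acme Corp.
2020-2023: Staff Engineer, Example Pty Ltd."""

# Sensitive attributes modelled on those the NYU team injected;
# the exact wording here is illustrative, not taken from the study.
PERTURBATIONS = {
    "maternity_gap": "2023-2025: Career break to care for young children.",
    "political_affiliation": "Volunteer for a major political party.",
    "pregnancy": "Currently expecting a child.",
}

def audit(llm_screen: Callable[[str], bool]) -> None:
    """Paired-perturbation audit: screen the base resume, then the same
    resume with one sensitive line added. If the decision flips on a
    job-irrelevant attribute, that's evidence of bias."""
    baseline = llm_screen(BASE_RESUME)
    for name, extra_line in PERTURBATIONS.items():
        flipped = llm_screen(BASE_RESUME + "\n" + extra_line) != baseline
        print(f"{name}: {'possible bias' if flipped else 'no change'}")

if __name__ == "__main__":
    # Toy screener standing in for a real LLM call: it (unfairly)
    # rejects any resume that mentions a career break.
    audit(lambda resume: "career break" not in resume.lower())
```

Run as-is, the toy screener deliberately rejects any resume mentioning a career break, so the audit flags exactly the kind of maternity-gap bias the study describes; in practice `llm_screen` would wrap a real LLM query.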

Mum penalties

The researchers said that a parenting-related gap in work history, which is more common for women, has not been studied much before, and that it can make it harder for women to get a job when employers use LLMs to screen resumes.

Associate Professor Garg commented, “Employment gaps for parental responsibility, frequently exercised by mothers of young children, are an understudied area of potential hiring bias. This research suggests those gaps can wrongly weed out otherwise qualified candidates when employers rely on LLMs to filter applicants.”

The good news, though, is that these AIs can be trained to resist such bias.

Associate Professor Garg said, “This study overall tells us that we must continue to interrogate the soundness of using LLMs in employment, ensuring we ask LLMs to prove to us they are unbiased, not the other way around. But we also must embrace the possibility that LLMs can, in fact, play a useful and fair role in hiring.”