From Texas Standard:
Impressing a potential employer is becoming harder than usual thanks to software that evaluates candidates' facial expressions, tone of voice and other physical characteristics during video interviews.
The technology uses artificial intelligence algorithms that are designed to detect how expressions, language and tone of voice match models of “existing, successful employees in the workforce,” says Alexandra Givens, executive director of Georgetown Law School’s Institute for Technology Law and Policy. She says the software then compares those models to what it measures in an interviewee when deciding whether that person should be hired.
Givens recently wrote about the technology for Slate.
She says the technology is a recipe for bias because it doesn’t accurately account for people living with disabilities. Their appearance, voice or ability to make eye contact may not match what the software expects. That’s a problem because people with disabilities already face disadvantages in the workforce and in hiring.
People with disabilities are particularly underrepresented, Givens says, and the algorithms can make that problem worse.
“The idea that you can train a model to accommodate all these different forms of disability is really hard, not least because we don’t have that training data, and often people don’t want to disclose their disabilities to help shape and train tools like that,” she says.
The companies that develop hiring algorithms are working to address the bias and train the artificial intelligence to recognize a wider variety of faces and expressions. But Givens says it’s unclear whether using the technology for hiring is a fair practice at all.
“We need to have a thoughtful conversation with disabled people at the table, actually testing what these tools are looking at and how they’re being used,” Givens says. “Right now, that’s getting glossed over by many of the vendors.”
Written by Samantha Carrizal.