AI reflects the implicit biases of the people who design it. Models trained on biased data may perpetuate historical bias against marginalized groups, such as people whose gender is non-binary, people of color, people with disabilities, or other minorities. Further, training data typically underrepresents these groups. And because disability encompasses a wide range of conditions that rarely fit a single data category, mitigating bias against people with disabilities is a more complex problem than controlling for factors such as gender or race. Harvard researchers have found that bias exists even in algorithms developed with fairness in mind.

How AI Ranks Candidates

AI tools can predict which applicants and candidates are likely to be ideal employees by comparing the data they collect to a model. A predictive modeling tool might compare real candidates’ responses to the theoretical responses of an “ideal” candidate, or compare answers to historical data from a company’s past successful hires. What’s the biggest problem with this approach? These tools usually collect secondary data as a stand-in for the measurement that’s actually needed: a candidate’s ability to perform the job.
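
To make the comparison concrete, here is a minimal, hypothetical sketch of this kind of ranking: candidates are scored by how closely their numeric assessment responses resemble an “ideal” profile averaged from past hires. The data, field names, and similarity measure are illustrative assumptions, not any vendor’s actual method.

```python
# Minimal sketch (illustrative assumptions, not any vendor's actual method):
# score candidates by how closely their numeric assessment responses match an
# "ideal" profile averaged from a company's past successful hires.
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two response vectors, from 0 (unrelated) to 1 (identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# "Ideal" profile: the average responses of past successful hires.
# Each row is one hire's scored answers to four hypothetical assessment items.
past_hires = [
    [4, 5, 3, 5],
    [5, 4, 4, 5],
    [4, 4, 4, 4],
]
ideal = [sum(col) / len(past_hires) for col in zip(*past_hires)]

# Rank new candidates by similarity to that profile.
candidates = {"Candidate A": [5, 4, 4, 5], "Candidate B": [2, 5, 5, 3]}
ranking = sorted(candidates.items(),
                 key=lambda item: cosine_similarity(item[1], ideal),
                 reverse=True)
for name, responses in ranking:
    print(name, round(cosine_similarity(responses, ideal), 3))
```

Note what this sketch optimizes for: resemblance to the people a company has already hired, not demonstrated ability to do the job.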

Personality Traits

Some AI tools claim to measure personality traits, including openness, conscientiousness, extroversion, emotional stability, adaptability, assertiveness, responsiveness, intensity, optimism, sociability, grit, and more. These measurements often lack scientific validity—and there is significant evidence that they work particularly poorly for underrepresented groups like people with disabilities.

“Cultural Fit”

An AI tool’s assessment of cultural fit is subjective, drawn from conclusions about a candidate’s motivations, ideal work environment, or life priorities. But trusting a machine to pick candidates with a similar “cultural fit” to an ideal employee may detract from diversity hiring goals. In one study, an AI model determined that the top predictor of a candidate’s success was whether they had played lacrosse as a student. That may well be true of the historical track record for the organization, but a methodology built around this assumption is unlikely to help recruit diverse candidates.
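
The lacrosse example shows how a proxy feature can dominate a model trained on historical outcomes. The sketch below uses a small fabricated dataset (not the study’s data) to show how a feature like “played lacrosse” can emerge as the strongest single predictor of past “success” simply because it tracks who the organization hired and promoted before.

```python
# Illustrative sketch with fabricated toy data: when historical "success" labels
# reflect who was hired and promoted in the past, a proxy feature such as
# "played lacrosse" can surface as the strongest predictor, even though it says
# nothing about ability to do the job.
from statistics import mean, pstdev

def correlation(xs, ys):
    """Pearson correlation between a feature column and the success label."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return cov / (sx * sy) if sx and sy else 0.0

# Toy historical records: [played_lacrosse, years_experience, certification], success label.
records = [
    ([1, 3, 1], 1), ([1, 2, 0], 1), ([1, 5, 1], 1), ([1, 1, 0], 1),
    ([0, 6, 1], 0), ([0, 4, 1], 0), ([0, 2, 0], 0), ([1, 4, 1], 1),
]
features = ["played_lacrosse", "years_experience", "certification"]
labels = [label for _, label in records]

for i, name in enumerate(features):
    column = [row[i] for row, _ in records]
    print(name, round(correlation(column, labels), 2))

# In this fabricated sample, played_lacrosse correlates most strongly with
# "success", so a model trained on this history would rank it as the top predictor.
```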

Outcome Data

Using data from successful hires to predict ideal candidates may seem like an easy solution, but relying on seemingly “objective” data points poses challenges. Data on the school a candidate attended, their referral sources, and the length of their employment at previous jobs is not necessarily tied to the candidate’s ability to do the actual job. Consider that a nonwhite, working-class student is more likely to have lacked the legacy connections and other means of family support that can boost their chances of attending a top-tier college. Some qualified candidates may have had to take an entry-level job rather than an unpaid internship to pay their bills, even if they would have been the top choice for the job. Many people with lower socioeconomic status, particularly women, may have had to take time away from school, internships, and jobs to assist family members at home. And a rising star with a disability may have left a job due to the frustrations of an inaccessible working environment, something that nearly happened to Microsoft’s Chief Accessibility Officer Jenny Lay-Flurrie.

Aptitude Test Inaccuracies

Some AI-enabled hiring tests use a series of virtual games to measure aptitudes and/or cognitive abilities, such as reaction time, attention span, ability to focus under pressure, problem-solving, and vocabulary. For people with disabilities, these gamified tests may not accurately measure future success, whether because of the test format itself or because the technology used to create the test isn’t fully accessible.

Excluded Candidates at Every Stage

AI tools can also exclude candidates at each stage of the talent acquisition process. Consider the following scenarios.

Application Submission

An organization uses an AI-enabled conversational chatbot to interact with potential candidates and answer common questions. Unfortunately, the chatbot is inaccessible to screen reader users. As a result, a talented candidate who is blind isn’t able to effectively interact with the chatbot or navigate past it to submit her resume.

Resume Screening

An organization uses AI to identify and favor candidates who completed a college internship, a trait associated with its successful past hires. A candidate with a disability is excluded because they were unable to find an internship with an accessible physical environment or the option to work virtually. Their cover letter detailing equivalent experience through remote volunteer work is never read by a human.

Interview

An organization uses AI-enabled video screening tools to evaluate preliminary interviews, which are conducted virtually without human oversight. The tools automatically analyze facial movement and word choice to evaluate candidates’ personalities and cultural fit. Because the tools were developed using able-bodied test subjects, they misinterpret the responses of the top candidate, who is Deaf. As a result, the AI screening tools remove her from the candidate pool before a human interviewer gets a chance to meet her.

Pre-employment Testing

An organization uses gamified pre-employment tests to measure candidates’ levels of “empathy” by matching candidates’ facial expressions against photographs of ideal employees’ expressions that are said to indicate empathy. A qualified candidate who is neurodivergent scores poorly on tests in this format, and the organization never contacts their glowing references, who could confirm the candidate’s skills as a team player.
