
Disabilities are highly diverse and virtually impossible to analyze at scale

Consider that half of disabilities are invisible, and only 39 percent of employees with disabilities disclose them to their managers, let alone to an interviewer. Disabilities are also highly diverse, ranging from physical disabilities such as mobility impairments or blindness to cognitive and psychosocial disabilities, and they become more diverse still through intersectional lenses such as race, gender identity, and class. As a result, employers cannot rely on statistical auditing to determine whether they are discriminating against people with disabilities.

The data available on employees is usually flawed

People with disabilities tend to fall at the margins of datasets, in numbers too small for a model to treat as statistically significant. Researchers often exclude data about people with disabilities from their final findings altogether. Because organizations have engaged in discriminatory hiring practices over time, there is also little employee data that captures the hiring of people with disabilities. What's more, datasets used in talent acquisition tend to be small compared with what data scientists require for statistically significant results. Researchers do not yet have a solution for building inclusive datasets and methodologies that understand and serve this diversity.
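A minimal sketch of why small subgroups defeat statistical auditing (all numbers hypothetical): with only 15 applicants who disclosed a disability, even a roughly 17-point gap in advancement rates fails a standard two-proportion z-test at the 5% level, so the audit reports no detectable discrimination.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical audit: 1,000 applicants with no disclosed disability,
# 50% advanced; 15 applicants who disclosed, only ~33% advanced.
z, p = two_prop_z(500, 1000, 5, 15)
print(f"z = {z:.2f}, p = {p:.3f}")  # |z| < 1.96, so p > 0.05: gap undetected
```

The disparity is large in practical terms, but the subgroup is so small that the test cannot distinguish it from chance, which is exactly the auditing failure described above.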

AI takes shortcuts whenever possible

AI looks for easy correlations in data and can quickly become more biased than its training data. Once it finds a correlation that seems to satisfy the parameters of the task, it keeps building assumptions on top of that "fact."

  • As an example, an AI resume-scanning tool may prioritize male candidates by favoring gendered names and extracurricular activities that match those of successful past hires.
  • Another group of researchers was crestfallen when they discovered their AI had learned to identify rulers, rather than diagnose tumors. When they analyzed the data they had used to train the AI, it turned out that most of the images of cancerous specimens had included rulers beside them for scale.
  • When not trained in an inclusive way, AI used to analyze facial expressions in video interview tools has also proven ineffective with certain faces or skin colors, and even when human subjects wear glasses, accessories, or head coverings.
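The ruler anecdote can be reproduced in miniature. The sketch below (all data synthetic, the model a deliberately simple logistic regression, not any vendor's actual system) trains on examples where a spurious "ruler present" flag perfectly tracks the label while the genuine signal is noisy. The model leans on the shortcut, scores well in training, and degrades sharply once the flag disappears at test time.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def make_data(n, ruler_follows_label):
    """Each example: a weak genuine signal plus a spurious 'ruler' flag."""
    rows = []
    for _ in range(n):
        y = random.randint(0, 1)
        signal = y + random.gauss(0, 2.0)        # noisy real evidence
        ruler = y if ruler_follows_label else 0  # shortcut feature
        rows.append(((signal, ruler), y))
    return rows

def train(rows, epochs=2000, lr=0.5):
    """Full-batch gradient descent on logistic loss."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        gw = [0.0, 0.0]
        gb = 0.0
        for (x, y) in rows:
            err = sigmoid(b + w[0] * x[0] + w[1] * x[1]) - y
            gw[0] += err * x[0]
            gw[1] += err * x[1]
            gb += err
        n = len(rows)
        w[0] -= lr * gw[0] / n
        w[1] -= lr * gw[1] / n
        b -= lr * gb / n
    return w, b

def accuracy(rows, w, b):
    hits = sum((sigmoid(b + w[0] * x[0] + w[1] * x[1]) > 0.5) == (y == 1)
               for (x, y) in rows)
    return hits / len(rows)

train_rows = make_data(200, ruler_follows_label=True)
test_rows = make_data(200, ruler_follows_label=False)  # "rulers" removed
w, b = train(train_rows)
print(f"shortcut weight = {w[1]:.2f}, signal weight = {w[0]:.2f}")
print(f"train acc = {accuracy(train_rows, w, b):.2f}, "
      f"test acc = {accuracy(test_rows, w, b):.2f}")
```

The learned weight on the shortcut feature dwarfs the weight on the real signal, so accuracy collapses toward chance when the shortcut is absent. This is the same failure mode, at toy scale, as a resume screener keying on gendered names rather than qualifications.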

Because many vendors consider the inner workings of their technology proprietary, it can be impossible to know how an AI is making decisions. Many of the shortcuts it takes may turn out to be incorrect.

Other workplace AI tools, including those that focus on performance monitoring and benefits distribution, require similar scrutiny to mitigate risks related to bias, discrimination, privacy, and ethics.
