The Fast Pace of AI Adoption
As organizations expand their use of AI tools in the workplace, they can become vulnerable to new risks and new modes of discrimination against people with disabilities and other protected classes. As with other innovative technologies, the pace of AI advancement has outpaced the community’s ability to establish clear guidelines, standards, and regulatory structures for equitable technology design and implementation. Until recently, computer scientists were rarely asked to consider the ethics of their research, in contrast to fields such as biology, psychology, and anthropology, where ethical consideration is central and researchers must follow extensive legal regulations and codes of conduct.
As a result, AI tools designed for the workplace are already on the market without having met independent testing criteria, despite the risks related to privacy, bias, and ethics that researchers have identified in recent years. While individual consumers may be comfortable accepting these risks as early adopters, a business or government entity takes them on at a much larger scale. If these tools are not designed according to the principles of Equitable AI, they can lead to discriminatory practices against people with disabilities and other protected classes.
Employers should proceed cautiously and reduce these risks by adopting the practices outlined in our Equitable AI Playbook, which can help ensure that AI implementations produce equitable outcomes.