Artificial Intelligence (AI)
The use of artificial intelligence (AI) and automated technologies is changing workplaces. Although data analytics and automation are not new, AI technology has advanced rapidly in recent years alongside innovations in algorithms, data volume, and computing power. AI-powered platforms are now used to screen job applicants, streamline the application process, and provide on-the-job training. AI is also powering exciting innovations in assistive technology for people with disabilities. While AI holds tremendous potential for both employers and employees to make workplaces more inclusive, it also carries risks for people with disabilities related to privacy, ethics, and bias.
What is Artificial Intelligence?
Artificial intelligence refers to the use of computer systems to perform tasks that traditionally require human intelligence and senses. AI “learns” through the use of statistical techniques that allow it to incrementally improve performance on a task. This process of “machine learning” allows the machine to generate rules and predictions on its own by analyzing large quantities of raw data, rather than being explicitly programmed.
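As a minimal sketch of what “learning a rule from data” means in practice, the toy program below derives a decision rule (a score cutoff) from labeled examples instead of having a programmer hard-code one. The data, function name, and scores are all hypothetical, chosen only to illustrate the idea.

```python
# Toy illustration of "machine learning": instead of hand-coding a rule,
# the program derives one (a score threshold) from labeled examples.
# All data here is hypothetical.

def learn_threshold(examples):
    """Pick the cutoff that best separates label 1 from label 0."""
    best_cutoff, best_correct = None, -1
    for cutoff in sorted({score for score, _ in examples}):
        # Count how many examples this cutoff classifies correctly.
        correct = sum((score >= cutoff) == bool(label)
                      for score, label in examples)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Labeled training data: (score, outcome) pairs.
training = [(35, 0), (42, 0), (55, 0), (61, 1), (70, 1), (88, 1)]
cutoff = learn_threshold(training)
print(cutoff)  # 61 -- the learned rule: scores >= 61 predict outcome 1
```

Real AI systems use far more sophisticated statistical techniques and far larger data sets, but the principle is the same: the rule comes from the data, not from an explicit program.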
AI can process vast amounts of information, enabling advanced data analysis and pattern-finding in large, complex data sets. It is also used to automate low-level, repetitive tasks and to make complex tasks more efficient.
How Voice & Conversational AI Are Helping Workers With Disabilities
PEAT’s Bill Curtis-Davidson, John Robinson of Our Ability, and Guy Tonye of Zammo discuss the potential of AI to support success for people with disabilities in the workplace, and how industry can leverage the creativity, skills, and experiences of people with disabilities to design more usable future workplace tech.
Original air date: October 30, 2020
Making AI Inclusive for Hiring and HR
This session from the 2020 “A Future Date” conference explores the risks of bias and discrimination posed by AI-enabled hiring technologies, along with potential strategies for creating technology that understands, recognizes, and serves diversity.
Speakers: Corinne Weible (PEAT); Jutta Treviranus (Inclusive Design Research Centre); and Nathan Cunningham (Office of Disability Employment Policy, ODEP)
The PEAT team presented at some exciting events during the month of March. If you attended and want a refresher or could not make it and would like to read what we shared, the slides are linked below. […]
Jeffrey Brown, Diversity and Inclusion Research Fellow at the Partnership on AI, discusses strategies for using AI-enabled recruiting technologies in ways that enhance diversity, equity, inclusion and accessibility (DEIA) priorities. […]
October is National Disability Employment Awareness Month (NDEAM). If you are a worker with a disability or an employer of someone who has a disability, you likely already know the unique considerations and often under-recognized benefits that stem from employing people with disabilities. According to Accenture, companies that hire people with disabilities earn 28% higher revenue, twice the net income, and 30% higher economic profit margins than their peers. […]
Start with a Model
The Equitable AI Playbook encourages organizations to consider a hub-and-spoke model for their Equitable AI initiative. In a hub-and-spoke model, a central group (the “Hub”), led by C-level leadership, establishes standards, processes, and policies. “Spokes” are business-unit or function teams that oversee execution of those policies and processes by implementation teams. This type of model has typically worked well for implementing other organization-wide priorities (e.g., privacy, security, and accessibility) and can enable collaboration among the following organizational components and roles. The existence and specific names of these components and roles [...]
John Robinson, President and CEO of Our Ability, discusses how harnessing the power of AI technology can improve employment opportunities for people with disabilities. […]
This resource library will help guide you through specific aspects of implementing and incorporating AI-enabled tools within your workplace. Each section contains an overview of the challenge or activity as well as links to guidance, tips, and templates to assist you as you move forward with your equitable AI initiative.
AI-enabled technologies are adding new ways to make a workplace more directly accessible to people with disabilities. The following tools are increasingly available as organization-wide subscriptions, offering a wonderful opportunity to reduce the need for individual accommodations by making the workplace more directly accessible. It’s a best practice to offer such features to all employees, who can turn the technology on and off as needed. And while these tools offer many benefits, as with any AI-enabled technology they must also be thoroughly evaluated for equitable use to avoid harms. Smart [...]
HR professionals using AI-enabled tools must ensure their assessment methodologies are ethical and accurately measure candidates’ potential. Even after a technology has been vetted and procured, the way it’s used can affect how inclusive it is in practice. To ensure a level playing field, everyone involved in the recruiting process should understand and follow these ongoing actions.
Ensure the Team Understands the Process
Algorithms from a third-party vendor use data collected from multiple sources and likely haven’t been built with your organization’s own talent pool data. If they do use your [...]
Many vendors provide “fairness audits” that claim to measure when bias is taking place and to recommend course corrections. In practice, many of these tools have considered only race and gender as dimensions of diversity, and when they do consider disability, the methodology behind it is likely flawed.
The Four-Fifths (80%) Rule
In 1978, the federal government adopted the Uniform Guidelines on Employee Selection Procedures to determine what constitutes a discriminatory employment test or personnel decision. They state that “a selection rate for any race, sex, or ethnic group which is less than four-fifths (or [...]
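The four-fifths test amounts to a simple selection-rate comparison: compute each group’s selection rate and flag any group whose rate falls below 80% of the best-performing group’s rate. The sketch below illustrates the arithmetic; the group names, counts, and function names are hypothetical.

```python
# Hypothetical selection data: (selected, applicants) per group.
# The four-fifths test flags adverse impact when a group's selection
# rate is below 80% of the highest group's rate.

def selection_rates(counts):
    """counts: {group: (selected, applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in counts.items()}

def adverse_impact(counts, threshold=0.8):
    """Return {group: True} for groups failing the four-fifths test."""
    rates = selection_rates(counts)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

counts = {"group_a": (50, 100),   # 50% selection rate
          "group_b": (30, 100)}   # 30% selection rate
print(adverse_impact(counts))
# group_b's rate (0.30) is only 60% of group_a's (0.50), below four-fifths
```

As the surrounding text notes, passing this statistical test says little about disability discrimination, because disability is too heterogeneous and too rarely disclosed for group-level rates to be meaningful.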
Staff training is an essential component of your Equitable AI professional development program. Like other elements of staff training on disability inclusion, putting these structures in place helps ensure all employees understand their organization’s vision, policies, initiative structure, and resources for implementing equitable AI. Developing a successful culture of inclusion requires that everyone in an organization gain a basic understanding of these issues, and people in more technical roles will need specialized training. Some example training modules could include:
Considering human diversity in AI: How to consider human diversity and equitable outcomes in AI not only [...]
The Equitable AI Playbook is a blueprint that can help your organization foster inclusion as you procure, develop, or implement artificial intelligence (AI) technologies in your workplace. Organizations are increasingly using AI to screen job candidates, streamline the application process, monitor employee actions, and provide employee training. However, AI technologies can be unintentionally biased, producing unfair outcomes and increasing the risk of discrimination against job candidates and employees in protected classes.
Below are examples of how AI tools can support job seekers and employees with disabilities, recruiters, and staff responsible for diversity, equity, and inclusion. Each example describes the value of applying Equitable AI Principles in the design and implementation of the AI tool.
Considerations that Apply to All Examples
Before procuring and implementing AI tools, organizations should examine how they were designed, how they function, and how they support Equitable AI Principles. Some common considerations that apply to all examples include: Were people with disabilities involved in the design and development of the AI tool? Are [...]
Adopting Equitable AI Principles can make your organization a better and more inclusive place to work. It’s also a strategic opportunity for an employer to be seen as a leader and innovator in its field. We created the following guidelines to help organizations consider what to include in a business case for Equitable AI. After reading these guidelines, refer to Play 1: Build an Equitable AI business case in our Equitable AI Playbook for valuable tips on building your own business case.
The development of AI technologies is outpacing the regulations and standards that would directly govern the use of AI hiring tools. However, employers should proactively consider legal and equity concerns related to AI hiring tools before implementing these technologies in their organizations. Acting in accordance with existing guidance can put your organization on the right path as new standards unfold over time. This page provides brief highlights from legislation governing nondiscrimination in employment, which could have implications for the use of AI hiring tools. However, it is not a substitute for official federal guidance. [...]
In 2020, the Leadership Conference on Civil & Human Rights and other advocacy groups released Civil Rights Principles for Hiring Assessment Technologies. The Center for Democracy & Technology summarized these principles, with emphasis on the elevated risks based on disability, in the report Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination? They recommend the following framework for employers to consider to prevent or reduce the inequitable impact of algorithm-driven hiring tools. The following resource has been republished with the permission of the author, the Center for Democracy & Technology.
Nondiscrimination
Employers and [...]
Many AI tools assessing “personality” and “cultural fit” claim to accurately identify personality traits such as openness, conscientiousness, extroversion, emotional stability, adaptability, assertiveness, responsiveness, intensity, optimism, sociability, and grit. However, these assessments aren’t actually based on sound scientific methods. It is often unclear what these tests measure, and there is significant evidence that they work particularly poorly for underrepresented groups such as people with disabilities.
Do AI-enabled personality tests work?
Personality tests, whether AI-enabled or not, use indirect measures to predict the success of an employee by attempting to identify personality traits in candidates and [...]
AI reflects the implicit biases of the people who design it. Models that learn from biased training data may perpetuate historical bias against marginalized groups, such as people whose gender is non-binary, people of color, people with disabilities, or other minorities. Further, training data typically underrepresents marginalized groups. Because these underrepresented groups include people with disabilities, mitigating bias against people with disabilities is a more complex problem than controlling for factors such as gender or race. Harvard researchers have found that bias exists even in algorithms developed with fairness in mind.
How AI ranks candidates
AI tools [...]
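As a toy illustration of how biased training data produces biased predictions, the sketch below trains a naive “model” (a simple hire-rate lookup) on a hypothetical hiring history in which candidates with resume gaps, a common proxy for disability or caregiving, were rarely hired. The data, feature, and function name are invented for illustration only.

```python
# Hypothetical history: (has_resume_gap, hired) records in which
# candidates with employment gaps were rarely hired.
history = [(1, 0)] * 9 + [(1, 1)] * 1 + [(0, 0)] * 4 + [(0, 1)] * 6

def learned_hire_rate(records, feature_value):
    """Naive 'model': predicted hire probability given the feature.

    A model trained on this history simply reproduces its bias:
    whatever pattern is in the data becomes the prediction.
    """
    matching = [hired for feat, hired in records if feat == feature_value]
    return sum(matching) / len(matching)

print(learned_hire_rate(history, 1))  # 0.1 -- candidates with gaps penalized
print(learned_hire_rate(history, 0))  # 0.6 -- candidates without gaps favored
```

Real ranking models are far more complex, but the failure mode is the same: a model optimized to reproduce past decisions will reproduce past discrimination, even when no protected attribute appears explicitly in the data.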
Disabilities are highly diverse and virtually impossible to analyze at scale
Consider that half of disabilities are invisible, and only 39 percent of employees with disabilities disclose their disability to their managers, let alone to an interviewer. Disabilities are also highly diverse, ranging from physical disabilities such as limited mobility or blindness to cognitive and psychosocial disabilities. They are further diversified when viewed through intersectional lenses such as race, gender identity, and class. As a result, employers can’t use statistical auditing effectively to know whether they are discriminating against people with disabilities. The data available on employees is usually [...]
As organizations increase their use of AI tools in the workplace, they can become vulnerable to new risks and modes of discrimination against people with disabilities and other protected classes. As with other innovative technologies, the speed of AI advancement has outpaced the community’s ability to put in place clear guidelines, standards, and regulatory structures for equitable technology design and implementation. Until recently, computer scientists were not asked to consider the ethics of their research, even though ethical consideration is crucial in many fields, including biology, psychology, and anthropology, where researchers must follow extensive legal regulations and codes of conduct.
Due to technological innovations, the landscape of employment looks very different than it did even a decade ago. An interview might be scheduled—or an application rejected—before a human ever reads the candidate’s resume, and a computer may decide when an employee is due for a pay raise. Advances in technology are also creating new possibilities to boost accessibility and accommodation options at work for people with disabilities. The future promises to continue expanding these exciting trends, but implementation requires deliberation. It’s essential that companies ensure the tools they use support their current goals to recruit [...]
What is AI?
AI refers to the use of computer systems to perform tasks that traditionally require human intelligence and senses. Instead of a programmer assigning AI a set of step-by-step instructions, AI “learns” by using statistical techniques that allow it to improve performance on a task. This process of machine learning equips AI to generate rules and predictions on its own by analyzing large quantities of data.
How Does AI Work?
Most AI technologies currently in use are considered “Artificial Narrow Intelligence” because they can only perform a specific task or a sequence of [...]
Artificial Intelligence (AI) is changing how organizations do everything from hiring to employee training and performance reviews. It has the power to streamline processes, analyze large volumes of information, and enhance human capabilities. However, there are serious concerns that workplace technologies enabled by AI could lead to unfair outcomes and increase employer discrimination against job seekers and existing employees, including those with disabilities.
Artificial Intelligence (AI) has advanced rapidly in recent years, moving from Research & Development (R&D) labs and startups into broader use. Organizations are using AI to screen job candidates, streamline the application process, monitor employee actions, and provide employee training. When not designed and implemented to consider diverse users, AI technologies can increase the risk of workplace discrimination, including for people with disabilities.
Merve Hickok, Founder of AIethicist.org, discusses the risks employers carry when they use AI-enabled technology in HR and the questions they should ask vendors to avoid bias and discrimination. […]
Charlotte Dales, CEO of Inclusively, discusses the potential of algorithm-based networking tools to transform how employers reach candidates with disabilities, mental health conditions, and chronic illness. […]
Learn how your organization can use artificial intelligence (AI) in equitable ways and how to reduce bias in the technologies you use.
Follow the 10 steps in this Playbook to foster inclusion as your organization procures, develops, or implements artificial intelligence (AI) technologies.