Staffing for Equitable AI: Roles & Responsibilities

Start with a model: the Equitable AI Playbook encourages organizations to consider a hub-and-spoke structure for their Equitable AI initiative. In a hub-and-spoke model, a central group (the “Hub”), led by C-level leadership, establishes standards, processes, and policies. “Spokes” are business unit or function teams that oversee execution [...]

Last updated: November 1, 2021

Implementing AI-Enabled Assistant Tools to Make Workplaces More Inclusive

AI-enabled technologies are adding new ways to make a workplace more directly accessible to people with disabilities. The following tools are increasingly available as organization-wide subscriptions. They offer a wonderful opportunity to reduce the need for individual accommodation by making the workplace more directly accessible. It’s [...]

Last updated: November 1, 2021

Play 5: Create a formal policy

To bring your Equitable AI vision to life and create an enforceable internal mandate for change, your organization should create and publish a formal Equitable AI Policy. The policy should be communicated internally, and organizations could consider publishing an appropriate version externally. The policy will provide a [...]

Last updated: November 1, 2021

The Equitable AI Playbook

The Equitable AI Playbook is a blueprint that can help your organization foster inclusion as you procure, develop, or implement artificial intelligence (AI) technologies in your workplace. Organizations are increasingly using AI to screen job candidates, streamline the application process, monitor employee actions, and provide employee training. However, AI technologies can be unintentionally biased and produce unfair outcomes for members of protected classes, increasing the risk of discrimination against job candidates and employees.

Last updated: November 1, 2021

Civil Rights Principles for Hiring Assessment Technologies

In 2020, the Leadership Conference on Civil & Human Rights and other advocacy groups released Civil Rights Principles for Hiring Assessment Technologies. The Center for Democracy and Technology summarized these principles, with emphasis on the elevated risks based on disability, in the report Algorithm-driven Hiring Tools: Innovative Recruitment [...]

Last updated: November 1, 2021

How Good Candidates Get Screened Out

AI reflects the implicit biases of the people who design it. Models learning from biased training data may perpetuate historical bias against marginalized groups, such as people whose gender is non-binary, people of color, people with disabilities, or other minorities. Further, training data typically underrepresents marginalized groups. Because these [...]
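
The mechanism described above can be illustrated with a small, hypothetical simulation. The sketch below is not from the Playbook or any vendor tool; the group sizes, the single "score" feature, and the idea that one group's qualifications show up differently in recorded data are all illustrative assumptions. It shows how a model trained mostly on a majority group can screen out a larger share of genuinely qualified candidates from an underrepresented group.

```python
# Hypothetical sketch: underrepresentation in training data leading to
# qualified candidates from a minority group being screened out more often.
# Group setup, feature construction, and sample sizes are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, signal_shift):
    """Simulate candidates: one recorded score plus a true 'qualified' label.
    signal_shift models a group whose qualifications register more weakly
    in the recorded feature (e.g., non-traditional career paths)."""
    qualified = rng.integers(0, 2, n)
    score = qualified * (1.0 + signal_shift) + rng.normal(0, 1, n)
    return score.reshape(-1, 1), qualified

# Majority group dominates the training data; the minority group is
# underrepresented and its qualification signal is expressed differently.
X_maj, y_maj = make_group(2000, signal_shift=0.0)
X_min, y_min = make_group(100, signal_shift=-0.8)

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

def qualified_rejection_rate(X, y):
    """Share of genuinely qualified candidates the model screens out."""
    pred = model.predict(X)
    return float(np.mean(pred[y == 1] == 0))

print("Qualified-but-rejected rate, majority:", qualified_rejection_rate(X_maj, y_maj))
print("Qualified-but-rejected rate, minority:", qualified_rejection_rate(X_min, y_min))
# The model learns the majority's signal, so the minority's weaker-looking
# (but equally valid) signal is more often classified as "unqualified".
```

In this toy setup, the disparity comes entirely from the data the model was given, not from any explicit rule about group membership, which is why audits that only inspect model inputs for protected attributes can miss it.
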

Last updated: November 1, 2021