Executive Summary
About the Think Tank
On April 17, 2023, the Partnership on Employment & Accessible Technology (PEAT) hosted a virtual Think Tank on the use of artificial intelligence (AI) tools in hiring. The goal of this Think Tank was to begin creating an Inclusive Hiring Profile (“Profile”) based on the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF). Once completed, this Profile will serve as a policy framework that organizations can use to ensure fairness in their use of AI hiring tools.
The event brought together leaders from the U.S. Department of Labor’s Office of Disability Employment Policy (ODEP), Equal Employment Opportunity Commission (EEOC), NIST, civil rights groups, including disability advocacy organizations, and technology industry associations and companies. Speakers shared their expertise during “lightning” presentations, and attendees participated in breakout groups to brainstorm inclusive hiring policies that fit within four core functions of the AI RMF: Govern, Map, Measure, and Manage.
Key Takeaways
- Involve people with disabilities in technology design and use. As with any technology, AI hiring tools must be built, deployed, and monitored with direct involvement from people with disabilities, including those with intersectional identities. We must ensure people with diverse lived experiences are in the loop at every stage to provide human oversight, while always considering who is not in the room and working to include them.
- Share risk management duties across organizations. We cannot expect one group of vendors, employers, or policymakers to handle all risk management responsibilities on their own. Shifting blame can hamper real change. Responsible parties must understand how AI hiring tools impact people with disabilities and commit to using them ethically.
- Notify end users about AI tools and hold organizations accountable. We need to make it easier for people interacting with AI tools to understand the risks. Offering plain language notices can help end users learn how AI tools work, speak up about potential harms, and choose whether to opt out. Transparent explanations can hold organizations accountable for minimizing risks in their use of AI tools.
- Support a culture of AI fairness among staff at all levels. Applying the AI RMF to the hiring context requires cultural shifts across organizations and a commitment to continuous improvement. Organizations cannot simply adopt AI hiring tools, check a box, and consider them inclusive. Instead, leadership and staff need to champion fairness by establishing processes to continually evaluate their use.
- Make progress by balancing risk management and civil rights goals. As we create this Profile, we need to consider relevant civil rights laws and regulations at federal, state, and local levels. An effective Profile will incorporate inclusion across all protected classes, such as disability, age, race/color, national origin, religion, sex, sexual orientation, and gender identity.
Next Steps for Creating the Profile
Throughout the event, the PEAT team informed participants that their contributions to the Think Tank were just the beginning of creating the Profile. More community input will be needed to develop a comprehensive Profile, including from organizations that were not present. To that end, PEAT suggests the following steps to continue building the Profile:
- Analyze stakeholders for the workgroup. Review feedback from the Think Tank and follow up with participants to determine which organizations and individuals want to contribute to an ongoing community-driven effort to build the Profile.
- Outline a process for creating the Profile. Generate a list of participants, select a core team to provide individual input on the Profile, define ownership, and develop a plan to build the Profile with guidance and support from NIST. The plan should include conducting research, drafting versions of the Profile, gathering community feedback, and coordinating stakeholders.
- Establish a collaboration mechanism. Create a way for stakeholders to share resources, communicate progress, and collaborate on the Profile’s development.
- Draft the Profile. Develop a Profile draft that outlines inclusive employment guidance, suggested actions, transparent practices, and supporting documentation that references legal, regulatory, and community sources, as well as findings from PEAT’s Think Tank.
- Draft supplementary materials. Develop a glossary based on the initial Profile so that stakeholders share a common language around key terms such as risk, harm, and bias, ensuring adopters work toward the shared goals of mitigating risks and upholding civil rights. Develop other materials, such as presentations and adoption guidance.
- Solicit community input and finalize the Profile. After the initial version of the Profile is developed, it should be evaluated and updated based on feedback from adopters and potential areas for improvement.
Think Tank Details
Speakers
ODEP leadership, EEOC leadership, and NIST experts joined 50+ attendees for this virtual event. Opening remarks included the following:
- Assistant Secretary of Labor for Disability Employment Policy Taryn Williams gave inspiring opening remarks and shared ODEP’s plan to sustain momentum by collaborating with partner agencies within the U.S. Department of Labor to support the creation of a robust Profile.
- NIST Information Technology Laboratory Associate Director Elham Tabassi discussed the development of the AI RMF, the four functions of Govern, Map, Measure, and Manage, as well as the Playbook, which NIST plans to update regularly.
- EEOC Chair Charlotte Burrows discussed how the EEOC has made combating discrimination that creates barriers to recruitment and hiring a top national priority. She also shared details on a technical assistance document, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, which the EEOC released in 2022.
The Think Tank also included Lightning Talks from 10 experts across four focus areas. See the Participating Organizations section at the end of this document for the full name of each group.
- Lightning Group One: Intersectional Advocacy
– Maria Town (AAPD)
– Melis Diken (George Washington University Alumnus)
– Noble Ackerson (Ventera)
– Ridhi Shetty (CDT)
- Lightning Group Two: Ethical AI & Independent Audit of AI Systems
– Cari Miller (Center for Inclusive Change)
– Shea Brown (BABL AI)
- Lightning Group Three: Disability Leadership & Innovation in Hiring Platforms
– Ross Barchacky (Inclusively)
– Jhillika Kumar (Mentra)
- Lightning Group Four: AI Business Practices for Developers & Adopters
– Kathy Baxter (NIST AI Fellow/Salesforce)
– Kush Varshney (Trustworthy ML & AI/IBM)
Breakout Session Key Findings
Think Tank attendees were also divided into four groups for breakout sessions, corresponding to the four functions of the NIST AI RMF. Each discussion was co-moderated by a NIST AI RMF expert and an AI expert and supported by a member of the PEAT team. Below are the highlights from each group.
Govern: A culture of risk management is cultivated and present.
Moderated by Kathy Baxter (NIST/Salesforce) and Cari Miller (Center for Inclusive Change).
Discussion highlights include:
- Need: We must always consider diverse needs and include input from a wide range of perspectives.
- Need: Organizations should be required to treat data the way we treat mineral procurement or labor: by proving they source and use it legally and ethically, can remove sensitive data, and can mitigate bias and harm.
- Challenge: In AI governance, it can be difficult for consumers to know if harm has been done and how to report it without repercussions.
Map: Context is recognized, and risks related to context are identified.
Moderated by Sina Fazelpour (Northeastern University) and Shea Brown (BABL AI).
Discussion highlights include:
- Need: We must consider which stakeholders are missing and question who is not in the room. If a point of view is overlooked in the Map function, it is difficult for the other functions to catch the omission.
- Need: Organizations need to understand their risk tolerance. It is important to know to what extent they are willing to tolerate certain types of risks, and they must continuously monitor, reevaluate, and revise this tolerance.
- Challenge: We must recognize that it is not enough to focus only on the technology itself; broader perspectives are necessary. However, it is difficult to determine how far those perspectives should reach and to define a limit or baseline.
Measure: Identified risks are assessed, analyzed, or tracked.
Moderated by Elham Tabassi (NIST) and Kush Varshney (IBM).
Discussion highlights include:
- Need: We need to consider how organizational or societal values determine what is measured and acknowledge those values can change over time.
- Need: We need to ensure that communication is open and includes a wide range of people to confirm that we are continually reevaluating what should be measured.
- Challenge: A measurement and risk-based approach might be inherently problematic. Especially regarding disability, what will or will not, and can or cannot, be measured is to some extent a choice. Measurement itself can cause harm, as in the case of employee surveillance.
Manage: Risks are prioritized and acted upon based on a projected impact.
Moderated by Reva Schwartz (NIST) and Patrick Hall (George Washington University).
Discussion highlights include:
- Need: We need legal and regulatory clarity on what vendors should disclose about their AI hiring tools.
- Need: Applicants should be able to understand where AI is involved, and marketing materials should accurately declare which accommodations are available. Organizations should be transparent about how they ensure an equitable process for each candidate.
- Challenge: There is a lack of diverse voices. We need to hear directly from people about disability and accessibility instead of relying solely on technology. Bias testing must always include disability and accessibility.
Participating Organizations
As noted, the Think Tank brought together experts and leaders from a wide range of organizations representing stakeholders in the U.S. Federal Government, civil society, disability advocacy, responsible AI, independent audit of AI systems, disability-led technology innovators, startups, employers, consultants, and mainstream hiring platforms, including:
American Association of People with Disabilities (AAPD); American Civil Liberties Union (ACLU); Aspen Institute; BABL AI; Cadmus Group; Center for Democracy and Technology (CDT); Center for AI & Digital Policy; Center for Inclusive Change; Data & Society; U.S. Equal Employment Opportunity Commission (EEOC); EY; ForHumanity; George Washington University; IBM; Inclusively; Indeed; Independent; INQ Law; Mentra; National Institute of Standards and Technology (NIST); Northeastern University; Ontario College of Art & Design (OCAD) University; Our Ability; Partnership on AI; Partnership on Inclusive Apprenticeship (PIA); Salesforce; Stanford University; University of Maryland; U.S. Department of Labor (DOL); U.S. Department of Labor’s Office of Disability Employment Policy (ODEP); Ventera; We Count; Workday.