In January 2023, the National Institute of Standards and Technology (NIST) published the highly anticipated Artificial Intelligence Risk Management Framework (AI RMF 1.0) and companion NIST AI RMF Playbook. Together, these resources establish voluntary national standards to address risks in the development and use of AI products and services.
Some of these risks relate to the use of AI in the employment context. If you follow our work here at PEAT, you know we are focused on ensuring that AI creates inclusive workplaces for people with disabilities and that bias is reduced or mitigated whenever possible, especially in automated hiring tools.
While developing the AI RMF, NIST called for public input, and PEAT contributed by sharing various resources, including content from our AI & Disability Inclusion Toolkit. Along with many organizations in the Diversity, Equity, Inclusion, and Accessibility (DEIA) space, PEAT advised on how to infuse accessibility and disability inclusion into the Framework. As always, our goal was to promote fairness and equity for all, given that AI can have far-reaching impacts.
Accessibility & Disability Inclusion in the NIST AI RMF
Upon its release, we were excited to see accessibility and disability inclusion strategically integrated into the Framework and Playbook. The core Framework is organized into four functions: Govern, Map, Measure, and Manage. The incorporation of accessibility into Govern is particularly notable because it is an overarching function that directly shapes the other three. This reinforces accessibility as foundational to achieving the Framework’s aims.
Additionally, the Framework’s Appendix includes accessibility under ‘AI Design.’ This places accessibility in the design phase, earlier than is typical, and increases the likelihood it will be addressed across all phases of AI development. It reflects PEAT’s long-standing core principle that accessibility should be considered in the development of all technologies, including AI, from the very start.
PEAT in the NIST AI RMF Playbook
The Playbook directly references several PEAT materials, including:
- GOVERN 1.1 – Legal and regulatory requirements involving AI are understood, managed, and documented. (References PEAT’s “AI Hiring Tools and the Law”)
- GOVERN 2.2 – The organization’s personnel and partners receive AI risk management training to help them perform their duties and responsibilities consistent with related policies, procedures, and agreements. (References PEAT’s “Developing Staff Trainings for Equitable AI”)
- GOVERN 3.1 – Decision making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (i.e., with diverse demographics, disciplines, experience, expertise, and backgrounds). (References PEAT’s “Staffing for Equitable AI: Roles & Responsibilities”)
- MAP 1.3 – The organization’s mission and relevant goals for the AI technology are understood and documented. (References PEAT’s “Business Case for Equitable AI”)
- MAP 5.2 – Practices and personnel that support regular engagement with relevant AI actors and integrate feedback about positive, negative, and unanticipated impacts are in place and documented. (References PEAT’s “Risks of Bias and Discrimination in AI Hiring Tools”)
Given NIST’s dedication to managing the risks of AI, we anticipate a surge of stakeholder support and innovation in this space. As we enter this new and exciting phase, PEAT will continue to pay particular attention to the impact this work has on Automated Employment Decision Tools (AEDTs), to ensure that America’s workers with disabilities and intersectional identities have equitable access to employment opportunities.