
Play 8: Govern and manage risk

Mature organizations have clearly defined corporate governance structures that control, direct, and regulate organizational performance. People follow these standards and norms to manage projects, portfolios, infrastructure, and processes effectively. When conflicts arise among stakeholders, the governance structures provide a framework for resolving them.

This Play outlines how to implement the Hub and Equitable AI governance processes you defined when you established principles, ownership, and a Hub to drive change. Your actions will cascade from your organization’s overall governance structures, which are ultimately overseen by your Board. As you implement your initiative, the Hub should marshal support from the Advisors you identified during your stakeholder analysis to review plans, assess outcomes, and make recommendations.

Tips to Get Started

  1. Keep senior leaders informed and engaged. Executives with organization-wide responsibilities related to your initiative should keep all other senior leaders—including your CEO and Board—informed of the initiative’s progress. Titles vary, but this is typically your “Chief Responsible AI Officer” or “AI Ethics Officer.” At a smaller organization, this role might be combined with other duties related to information technology, security, or risk. Refer to Play 4: Establish principles and ownership, tip “Define where the buck stops” for more on this role.
  2. Educate leaders on maintaining equity through emerging use cases. Leadership should be engaged in reviewing and making decisions on new efforts, implementations, or issues related to the initiative. Conduct a regular review of AI implementations related to sensitive use cases to ensure equity is not sacrificed in pursuit of other goals.
  3. Track progress with training and professional development. The Hub should monitor progress in delivering overall awareness training to all staff, as well as Implementer-specific training and professional development. By doing so, your organization will keep a pulse on overall organizational knowledge transfer and professional development outcomes.
  4. Analyze specific risks for people with disabilities. Organizations should routinely analyze disability-specific risks related to their AI implementations. These include identifying when AI may exclude or produce negative outcomes for people with disabilities, when AI is not accessible or usable, and when AI jeopardizes the quality of service, safety, security, or data privacy of people with disabilities. The risk assessment should determine how well various AI functionality (for example, computer vision, speech, text processing, and integrated functions such as chatbots) works for people with disabilities. Organizations can use functional risk classifications outlined by researchers as a starting point.
  5. Establish governance systems that consider your entire supply chain ecosystem. Ensure your governance systems can be accessed by employees as well as by external organizations in your supply chain. This is especially important where streams of data flow into and out of your organization via AI-enabled systems. In such cases, data integrity and safeguards are critical to avoid data privacy or security issues and to prevent amplifying outcomes that are biased or unfair toward different parties.
  6. Proactively manage data and privacy risk. Privacy regulations are evolving to give individuals the right to know and control how their data is stored and used, and to request that their data be deleted. Organizations should routinely assess their AI technology to understand how the privacy of individuals with disabilities can be protected. Privacy policies should be communicated and readily available for review by end users. Implement process checks related to data and privacy, including an approval step in which your Hub has the authority to delay or halt AI implementations that use data irresponsibly. A recommended practice is to create and maintain a transparent inventory of what data is collected, whether or not that data is initially intended for use in AI implementations.
  7. Utilize checklists and guidelines to support consistent practices related to Equitable AI principles. As you procure and implement AI technologies, help teams consistently apply practices aligned to your Equitable AI Principles (Fairness, Reliability & Safety, Privacy & Security, Transparency, Accountability, and Inclusiveness) by creating Process Checklists: lists of features an AI should include and/or requirements it should meet to accord with the principles. Process Checklists can help you formalize ad hoc processes, prompt discussions that might otherwise not take place, and empower individual implementation teams. They are most effective when aligned with teams’ existing workflows and supported by organizational culture. Some examples include:
    • An AI Fairness Checklist was developed by researchers from Microsoft and Carnegie Mellon University, who worked with 48 practitioners to co-design it.¹
    • An Inclusivity checklist or guidelines might include practices for inclusive implementation design leveraging your ERGs, co-design practices, and usability testing with diverse users (considering race, gender, class, age, and disability).
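To make the idea concrete, a Process Checklist can be modeled as a simple data structure that ties each requirement to a principle and flags the gaps blocking sign-off. The sketch below is purely illustrative: the item wordings, field names, and review flow are assumptions, not an official checklist from any framework.

```python
from dataclasses import dataclass

# The six Equitable AI Principles named in this Play.
PRINCIPLES = [
    "Fairness", "Reliability & Safety", "Privacy & Security",
    "Transparency", "Accountability", "Inclusiveness",
]

@dataclass
class ChecklistItem:
    principle: str     # which Equitable AI Principle this item supports
    requirement: str   # what the implementation should include or satisfy
    met: bool = False  # updated by the implementation team during review

def unmet_items(checklist):
    """Return the requirements still blocking sign-off, grouped by principle."""
    gaps = {}
    for item in checklist:
        if not item.met:
            gaps.setdefault(item.principle, []).append(item.requirement)
    return gaps

# Hypothetical items for one AI implementation under review.
checklist = [
    ChecklistItem("Fairness",
                  "Outcomes evaluated across disability, race, gender, and age groups",
                  met=True),
    ChecklistItem("Inclusiveness",
                  "Usability testing conducted with diverse users, including ERG participants"),
    ChecklistItem("Privacy & Security",
                  "Data inventory documents what is collected and how it is used"),
]

assert all(item.principle in PRINCIPLES for item in checklist)
print(unmet_items(checklist))
```

A structure like this keeps the checklist auditable by the Hub: each open gap is attributable to a specific principle, which supports the approval step described in the tips above.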

Tips for Ongoing Operations

  • Routinely perform Equitable AI risk audits and seek approval from the Hub. Work with AI technology vendors to perform an Equitable AI risk audit during the procurement process.  Risk audits should outline intended uses for the technology, anticipated benefits, and an analysis of specific risks—including data and privacy risks, as well as risks of discrimination and bias—for people with disabilities and other protected classes. Review the risk audit results with the Hub before proceeding with purchases and implementations. The Hub and Equitable AI senior leader should have the authority to delay or halt AI implementations that have an outsized risk.
  • Define metrics and key performance indicators (KPIs) to measure outcomes relative to the Equitable AI Principles. Establish metrics to measure outcomes across all of your AI implementations. Metrics and KPIs should align to the Equitable AI Principles you adopted. Organizations like the AI Ethics Impact Group, led by VDE / Bertelsmann Stiftung, offer a practical, principle-based measurement framework.² Measuring outcomes for adherence to the Equitable AI Principles across numerous AI implementations will help you determine where challenges exist, what is working well, and where to focus resources to improve overall outcomes. The graphic below is a screenshot of the AI Ethics Impact Group’s interdisciplinary framework to operationalize AI ethics, which employs a VCIO (Values, Criteria, Indicators, Observables) approach to measure against principles. In the VCIO approach, values (or principles) like Transparency have measurable criteria (e.g., disclosure of the origin of data sets, disclosure of properties of the algorithm/model used). These criteria are measured using indicators. For example, disclosure of the origin of data sets can be assessed by asking: Is the data’s origin documented? Is it clear which data are used for each purpose? Are the training data set’s characteristics documented and disclosed, and are the corresponding data sheets comprehensive?

Screenshot of measures for “Transparency.” AI Ethics Impact Group, led by VDE / Bertelsmann Stiftung, From Principles to Practice: An interdisciplinary framework to operationalise AI ethics, 2020. Licensed under a CC Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License.
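The VCIO hierarchy described above (values broken into criteria, criteria assessed via indicators and observables) can be sketched as a small nested structure. This is a deliberately simplified illustration: the real AI Ethics Impact Group framework uses graded ratings, while this example reduces each observable to a yes/no answer, and the recorded answers are invented.

```python
# Simplified VCIO (Values, Criteria, Indicators, Observables) sketch.
# Each value maps to criteria; each criterion lists (indicator question,
# observed yes/no answer) pairs. Answers here are hypothetical.
vcio = {
    "Transparency": {
        "Disclosure of origin of data sets": [
            ("Is the data's origin documented?", True),
            ("Is it clear which data are used for each purpose?", False),
            ("Are the training data set's characteristics documented?", True),
        ],
        "Disclosure of properties of algorithm/model used": [
            ("Is the model type disclosed?", True),
        ],
    },
}

def fulfillment(value_name):
    """Share of observables answered 'yes' for a given value (principle)."""
    answers = [ok for criterion in vcio[value_name].values()
               for _question, ok in criterion]
    return sum(answers) / len(answers)

print(f"Transparency: {fulfillment('Transparency'):.0%}")  # prints "Transparency: 75%"
```

Rolling observables up into a single score per value gives the Hub a comparable number across implementations, which supports the cross-implementation measurement goal described above.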

Guiding Questions

  1. How does your organization currently assess and measure risk? What processes and tools are used, and which teams typically perform the risk assessment? Are you considering both potential harms to the company and potential harms to underrepresented groups, such as people with disabilities?
  2. Does your organization have an inclusive design or accessibility program for technologies it designs, develops, procures, or implements?
  3. How does your organization track learning and professional development activities for its employees?
  4. Which departments manage data privacy and security? What is their relationship to the departments responsible for AI implementations?



  1. AI Ethics Impact Group, led by VDE / Bertelsmann Stiftung, From Principles to Practice: An interdisciplinary framework to operationalise AI ethics, 2020. Licensed under a CC Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License.
  2. Shari Trewin, “AI Fairness for People with Disabilities: Point of View,” IBM, 26 Nov 2018, arXiv:1811.10670 [cs.AI].
  3. Reid Blackman, “A Practical Guide to Building Ethical AI,” Harvard Business Review, 2020.
  4. IEEE, A Call to Action for Businesses Using AI (PDF); Ethically Aligned Design (EAD) for Businesses (PDF). Licensed under a CC Attribution-Non-Commercial 4.0 US License.