The global adoption of AI technologies is outrunning the ability, or in some cases the will, to use AI responsibly and ethically. Worldwide spending on AI is expected to hit $110 billion in 2024. As AI has grown in use, it has come under fire across sectors and industries for inherent algorithmic bias. The issue of bias in AI is so pervasive that the Partnership on AI, a group dedicated to advancing the understanding of AI, launched an incident tracker in 2020 to chronicle and learn from incidents of AI bias.
Bias, exclusion, privacy violations, and lapses in accountability and transparency are serious ethical problems that have been recognized since at least 2009, yet they persist. Root causes include the following:
- Lack of diversity on AI development teams
- Lack of processes to evaluate technology before it’s launched into society
- Lack of awareness, regulations, standards, and incentives to make ethical decisions
The private sector, academia, government, and professional associations have proposed hundreds of AI ethics frameworks, but many can be challenging to implement in practice. The tips below can help your organization adopt Equitable AI principles, establish a governance structure, and use practices that have proven effective in other organizations.
Tips to Get Started
- Adopt AI principles that emphasize inclusion and equitable outcomes. Consider Diversity, Equity, Inclusion, and Accessibility (DEIA) when adopting AI principles in your organization. The following “Equitable AI Principles” are fundamental principles, aligned with various AI ethics frameworks, that can help your organization mitigate discrimination and bias and improve equitable outcomes when developing or implementing AI technologies:
- Inclusive Empowerment. AI-enabled workplace technologies should engage and empower everyone, including people with disabilities.
- Equitable Design. AI-enabled workplace technologies should treat all people fairly, be designed by diverse teams, offer accessible user interfaces, and have equitable outcomes.
- Data Privacy. AI-enabled workplace technologies should respect privacy and enable individuals to access, control, and securely share their personal data.
- Reliability. AI-enabled workplace technologies should support human oversight, have explicit, well-defined uses, and be measurable in terms of effectiveness for those uses.
- Transparency. Organizations should be transparent about when and how they are using AI, and provide documentation of data sources, design, privacy, security, and reliability.
- Accountability. Organizations should be accountable for how they build and use AI-enabled workplace technologies, seeking feedback, addressing unintended uses, and following organizational governance processes.
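The Transparency and Accountability principles above call for documenting each AI system’s data sources, design, privacy, security, and reliability, and for naming an accountable owner. One lightweight way to operationalize this is to keep a “model card”-style record for every AI-enabled technology in use. The sketch below is purely illustrative; the class and field names are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    """Illustrative transparency record for one AI-enabled workplace technology.
    Field names are hypothetical, chosen to mirror the Equitable AI Principles."""
    name: str
    intended_uses: list          # explicit, well-defined uses (Reliability)
    data_sources: list           # provenance of training/input data (Transparency)
    privacy_notes: str           # how individuals access/control their data (Data Privacy)
    human_oversight: str         # who reviews outputs and how (Reliability)
    known_limitations: list = field(default_factory=list)
    owner: str = ""              # accountable person or team (Accountability)

    def to_json(self) -> str:
        # Serialize for publication alongside other system documentation.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example entry
record = AISystemRecord(
    name="Resume screening assistant",
    intended_uses=["rank applications for recruiter review"],
    data_sources=["historical hiring data, 2018-2023"],
    privacy_notes="Applicants may request and correct their data.",
    human_oversight="A recruiter reviews every ranked shortlist.",
    known_limitations=["not validated for non-English resumes"],
    owner="HR Technology team",
)
print(record.to_json())
```

Publishing records like this, internally and externally, is one concrete way to demonstrate the transparency commitment the principles describe.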
- Define where the buck stops. Once you’ve chosen a governance model, decide who will have overall responsibility for your initiative. Ultimately, your CEO and Board of Directors should set an organization-level commitment to equity or a Code of Ethics that will guide the implementation and use of AI technologies. However, consider putting in place a C-level executive (e.g., Chief Responsible AI Officer, AI Ethics Officer) to provide organization-wide leadership of your initiative. This leader should have voting power on your Board of Directors.
Microsoft Responsible AI
Microsoft has a Chief Responsible AI Officer, uses a hub-and-spoke model, and has three key building blocks at the hub of its Responsible AI operation. The first, the Office of Responsible AI (ORA), sets policies and governance processes. The second, the Aether Committee, is a group of in-house cross-functional experts that makes recommendations to senior leadership on responsible AI issues, technologies, processes, and best practices. The third, RAISE (Responsible AI Strategy in Engineering), is an initiative and engineering team built to implement Microsoft’s Responsible AI rules and processes across its engineering groups.
- Establish a Hub to lead your initiative. Regardless of the size of your organization, you should establish a staffed “Hub” that can provide leadership, guidance, and coordination for your initiative. Budget a generous amount of time for this process, and include cross-functional staff from your IT, Legal, Compliance, HR, and Product Development departments, and/or other areas defined in your business case. Your Hub should be led by your organization-wide leader, be visible across the organization, and perform the following activities:
- Assess common industry challenges, technologies, processes and best practices.
- Draw on both internal expertise as well as the broader community of AI researchers to translate research into actionable components of your initiative.
- Regularly review and update the initiative’s business case to ensure continued executive support.
- Develop both internal and external policies that will communicate the organization’s commitment to Equitable AI.
- Advise and make recommendations to senior leadership on Equitable AI issues, technologies, processes, and best practices, calling special attention to sensitive use cases.
- Serve as a centralized governance mechanism to define, implement, and maintain Equitable AI policies and practices across the organization.
- Monitor all uses of AI technology across the organization, balancing benefits and risks, and identifying sensitive use cases.
- Engage employee resource groups to help co-design the AI implementation process, review challenges, and ensure equitable use of AI to mitigate bias related to factors such as race, gender, class, age, and disability.
- Involve your Diversity, Equity, Inclusion and Accessibility (DEIA) teams. There is a growing recognition of the need for more diversity, equity, and inclusion in the development and implementation of AI technologies. One way you can engage diverse teams in your AI implementation is to involve your DEIA staff and members of your Hub. They can help with building more diverse teams, defining requirements, performing audits, and assessing the fairness of outcomes of AI.
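One concrete audit a DEIA team can run is comparing outcome rates across demographic groups for an AI-assisted decision (e.g., resume screening). The sketch below, a minimal illustration using made-up data, computes per-group selection rates and their ratio; the 0.8 threshold mentioned is the common “four-fifths rule” heuristic for flagging possible adverse impact, not a legal standard or a complete fairness audit.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group_label, selected_bool) pairs.
    Returns {group: fraction of that group selected}."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, chosen in outcomes:
        totals[group] += 1
        if chosen:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.
    Ratios below 0.8 are often flagged for review (four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, was_selected)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print(rates)             # selection rate per group
print(f"{ratio:.2f}")    # flag for review if well below 0.8
```

A real audit would use far larger samples, test statistical significance, and examine intersectional groups, but even a simple check like this can surface outcomes that warrant a closer look by your DEIA and Hub teams.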
Fostering a Culture of Inclusion
Hiring and retaining employees with disabilities, and engaging them in efforts like Equitable AI, can help build an organization-wide culture of disability inclusion. While more than 60% of leaders believe their companies are supportive of people with disabilities, Accenture found that only 20% of employees with a disability feel their organization is fully committed to this support.
- Get help from Employee Resource Groups (ERGs). Your organization may foster affinity groups that each focus on different aspects of human diversity. You can enlist members of your ERGs to examine the impact of AI workplace technologies on people of varying ages, gender identities, racial and ethnic backgrounds, and sexual orientations, and on people with sensory, physical, mental health, and cognitive disabilities. Don’t have an ERG for employees with disabilities and allies? Start a new one. Disability ERGs can help define benefits of AI technologies, assess risks related to bias and discrimination, help determine whether user interfaces are accessible, and provide input to training, communication, and accommodation practices for people with disabilities.
Questions to Consider
- Have you researched or adopted any AI Ethics Principles inside your organization?
- Do you have any existing IT or business initiatives that employ a hub-and-spoke model, such as your IT security, privacy, or accessibility programs?
- Do you already have an AI leader in your organization? What level is that role, and what relationship does that role have to senior leadership?
- What DEIA efforts does your organization already have in place?
- Does your organization have an ERG for employees with disabilities and their allies, or could your organization support the creation of this ERG?