Basics of AI
Artificial Intelligence (AI). An umbrella term that refers to efforts to teach computers to perform complex tasks and behave in ways that give the appearance of human agency. (Salesforce Trailhead)
Algorithm. A process or set of rules that a computer can execute. AI algorithms can learn from data, recognizing patterns in the data they are given to generate rules or guidelines to follow. (Salesforce Trailhead)
Chatbot. An AI system that uses natural language processing techniques to conduct a conversation with a human via audio or text. Examples of chatbots are Apple’s Siri, Microsoft’s Cortana, and Amazon’s Alexa. Also known as an interactive agent. (PEAT)
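To make the definition concrete, here is a minimal rule-based chatbot loop sketched in Python. The keywords and canned responses are invented for illustration; production assistants such as Siri or Alexa rely on far more sophisticated natural language processing.

```python
# A minimal, illustrative rule-based chatbot loop. The hard-coded rules
# below are hypothetical stand-ins for real NLP machinery.

RULES = {
    "hello": "Hi there! How can I help you?",
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "bye": "Goodbye!",
}

def reply(user_text: str) -> str:
    """Return the first canned response whose keyword appears in the input."""
    text = user_text.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that."

if __name__ == "__main__":
    print(reply("Hello!"))                 # Hi there! How can I help you?
    print(reply("What are your hours?"))   # We are open 9am-5pm, ...
```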
Computer Vision. An interdisciplinary field that uses computer science techniques to analyze and understand digital images and videos. Computer vision tasks include object recognition, event detection, motion detection, and object tracking, among others. (Wikipedia)
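As a small illustration, the Python sketch below performs one of the listed tasks, motion detection, by differencing two frames. The synthetic NumPy “frames” and the thresholds are assumptions made for the example.

```python
# A minimal motion-detection sketch using frame differencing. The two
# "frames" are synthetic NumPy arrays standing in for consecutive
# grayscale video frames.
import numpy as np

def motion_detected(frame_a: np.ndarray, frame_b: np.ndarray,
                    threshold: float = 25.0, min_pixels: int = 50) -> bool:
    """Flag motion when enough pixels change by more than `threshold`."""
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    return int((diff > threshold).sum()) >= min_pixels

# Two synthetic 100x100 frames; the second contains a bright 20x20 "object".
frame1 = np.zeros((100, 100), dtype=np.uint8)
frame2 = frame1.copy()
frame2[40:60, 40:60] = 200

print(motion_detected(frame1, frame2))  # True  -- an object appeared
print(motion_detected(frame1, frame1))  # False -- nothing changed
```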
Data and Datasets. Data are a collection of qualitative and quantitative variables; they contain information, represented numerically, that needs to be analyzed. There are three types of datasets, which are collections of data used in AI systems and in machine learning: the Training Dataset, the Testing Dataset, and the Validation Dataset. (PEAT)
Human-in-the-Loop (HITL). Refers to the selective inclusion of human participation in AI systems to harness the efficiency of intelligent automation while remaining receptive to human feedback and retaining a greater sense of meaning. HITL processes begin with model development and extend across the life cycle of the AI system, from proof of concept through design, development, deployment planning, and production. HITL processes can include humans inspecting, validating, and making changes to algorithms to improve outcomes, as well as collecting, labeling, and conducting quality control on data. (Ge Wang, Stanford Human-Centered AI [HAI])
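One common HITL pattern is routing low-confidence predictions to a human reviewer rather than applying them automatically. The Python sketch below illustrates that idea; the labels, confidence scores, and threshold are hypothetical.

```python
# A minimal human-in-the-loop sketch: predictions below a confidence
# threshold are escalated to a human reviewer instead of being applied
# automatically. The classifier outputs and threshold are illustrative.

def hitl_decision(label: str, confidence: float,
                  threshold: float = 0.90) -> str:
    """Auto-apply confident predictions; escalate uncertain ones."""
    if confidence >= threshold:
        return f"AUTO: applied label '{label}'"
    return f"REVIEW: label '{label}' ({confidence:.0%}) sent to a human"

predictions = [("approved", 0.97), ("rejected", 0.62), ("approved", 0.88)]
for label, conf in predictions:
    print(hitl_decision(label, conf))
```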
Machine Learning. Specific techniques that allow a computer to “learn” from existing datasets by using algorithms and statistical models to analyze and draw inferences. Machine learning models can adapt to perform on new datasets without having been explicitly programmed with step-by-step instructions. Machine learning can be divided into three categories: Supervised Machine Learning, Unsupervised Machine Learning, and Reinforcement Learning. (PEAT)
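The following Python sketch (using scikit-learn, with invented toy data) shows the core idea: a model infers a pattern from existing examples and then applies it to inputs it was never explicitly programmed to handle.

```python
# A minimal sketch of "learning from data": the model infers the rule
# y = 2x from examples and applies it to an input it never saw.
from sklearn.linear_model import LinearRegression

X_train = [[1], [2], [3], [4]]   # existing dataset (inputs)
y_train = [2, 4, 6, 8]           # existing dataset (outputs)

model = LinearRegression().fit(X_train, y_train)  # "learn" the pattern
print(model.predict([[10]]))     # ~[20.] -- applied to unseen data
```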
Model (AI). A program that has been trained on a set of data (called the Training Dataset) to recognize certain types of patterns. AI models use various types of algorithms to reason over and learn from this data, with the overarching goal of solving business problems. (Chooch.ai)
Natural Language Processing (NLP). A subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. The goal is a computer capable of “understanding” the contents of documents (or conversations), including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. (Wikipedia)
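As a small illustration, the Python sketch below performs two foundational NLP steps, tokenization and term counting, on a toy document; real systems layer parsing, entity recognition, and language models on top of steps like these.

```python
# A minimal NLP sketch: tokenizing text and counting terms, a first step
# toward extracting information from documents.
import re
from collections import Counter

document = "Natural language processing helps computers process natural language."

tokens = re.findall(r"[a-z']+", document.lower())  # tokenization
print(Counter(tokens).most_common(3))
# [('natural', 2), ('language', 2), ('processing', 1)]
```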
Supervised Machine Learning. A machine learning approach that is defined by its use of labeled datasets. These datasets are designed to train or “supervise” algorithms into classifying data or predicting outcomes accurately. Using labeled inputs and outputs, the model can measure its accuracy and learn over time. (IBM)
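A minimal supervised-learning sketch in Python (scikit-learn, with invented “spam-like” features and labels) shows how labeled inputs and outputs let a model measure its own accuracy:

```python
# A minimal supervised-learning sketch: a classifier trained on labeled
# examples, then scored against held-out labels. Toy data is invented.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Labeled inputs: [num_links, num_exclamation_marks] -> 1 = spam, 0 = not
X = [[0, 0], [1, 0], [5, 3], [7, 4], [0, 1], [6, 5]]
y = [0, 0, 1, 1, 0, 1]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
X_new, y_new = [[8, 2], [0, 0]], [1, 0]   # new labeled examples
print(accuracy_score(y_new, clf.predict(X_new)))  # e.g. 1.0
```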
Unsupervised Machine Learning. A machine learning approach that uses algorithms to analyze and cluster unlabeled data sets. These algorithms discover hidden patterns in data without the need for human intervention (hence, they are “unsupervised”). Unsupervised ML models are used for three main tasks: clustering (grouping unlabeled data based on similarities or differences), association (finding relationships between variables in data), and dimensionality reduction (reduces the number of data inputs to a manageable size while preserving data integrity). (IBM)
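The Python sketch below (scikit-learn, with invented points) illustrates the first of these tasks: k-means clustering groups unlabeled data purely by similarity.

```python
# A minimal unsupervised-learning sketch: k-means groups unlabeled points
# by similarity, with no human-provided labels.
from sklearn.cluster import KMeans

points = [[1, 1], [1.5, 2], [1, 1.5],      # one natural group
          [8, 8], [8.5, 9], [9, 8]]        # another natural group

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels)  # e.g. [0 0 0 1 1 1] -- discovered without labeled data
```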
Reinforcement Machine Learning. A type of dynamic programming that trains algorithms using a system of reward and punishment. The algorithm is exposed to an entirely new dataset and automatically finds patterns and relationships within it. The system is rewarded when it finds a desired relationship in the data and punished when it finds an undesired one. The algorithm learns from these rewards and punishments and updates itself continuously. (AI: A Glossary of Terms – Springer Nature)
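A toy reward-and-punishment sketch in pure Python: an epsilon-greedy agent learns which of two actions pays off more often by updating running value estimates. The reward probabilities and values (+1 desired, -1 undesired) are invented for illustration.

```python
# A minimal reinforcement-style sketch: learn action values from rewards
# (+1) and punishments (-1), updating estimates continuously.
import random

random.seed(0)
values, counts = [0.0, 0.0], [0, 0]   # estimated value of each action

for step in range(1000):
    # Explore 10% of the time, otherwise exploit the best-known action.
    action = random.randrange(2) if random.random() < 0.1 \
             else max(range(2), key=lambda a: values[a])
    # Action 1 is rewarded 80% of the time; action 0 only 20%.
    reward = 1 if random.random() < (0.8 if action == 1 else 0.2) else -1
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print([round(v, 2) for v in values])  # action 1's estimate ends higher
```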
Training Dataset. A typically very large dataset that is used to teach an AI model. For supervised machine learning models, the training data is labeled; the data used to train unsupervised machine learning models is not. The training data is the initial set of data used to help a program apply technologies like neural networks to learn and produce sophisticated results. It may be complemented by subsequent sets of data called the Validation and Testing Datasets. (Techopedia)
Testing Dataset. A dataset used to provide an unbiased evaluation of a final model fit on the Training Dataset; it provides the gold standard used to evaluate the model. It is used only once a model is completely trained (using the Training and Validation Datasets). The Testing Dataset is generally well curated and contains carefully sampled data that spans the various classes the model will face when used in the real world. (Tarang Shah, Towards Data Science)
Validation Dataset. A dataset used to provide an unbiased quality evaluation of a model fit on the Training Dataset while fine-tuning the model’s hyperparameters. The Validation Dataset is also known as the Dev or Development Dataset (since this dataset helps during the “development” stage of the model). (Tarang Shah, Towards Data Science)
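The three dataset roles fit together as in the Python sketch below (scikit-learn, with synthetic data): train on one split, tune a hyperparameter on the validation split, and evaluate exactly once on the test split.

```python
# A minimal sketch of the three dataset roles: train / validate / test.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)

# 60% train, 20% validation, 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Use the validation set to pick a hyperparameter (k)...
best_k = max([1, 3, 5, 7],
             key=lambda k: KNeighborsClassifier(k).fit(X_train, y_train)
                                                  .score(X_val, y_val))
# ...then evaluate the final model exactly once on the test set.
final = KNeighborsClassifier(best_k).fit(X_train, y_train)
print(f"k={best_k}, test accuracy={final.score(X_test, y_test):.2f}")
```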
Diversity, Equity, Inclusion & Accessibility (DEIA)
Accessibility and Accessible Technology. Accessibility means that everyone can use the exact same technology as anyone else—regardless of whether they can manipulate hardware, how much vision they have, how many colors they can see, how much they can hear, or how they process information. Accessible technology adds layers into computer hardware, operating systems, and applications to enable people with disabilities to have access to the same information as people without disabilities. (PEAT)
Belonging. The feeling of security and support when there is a sense of acceptance, inclusion, and identity for a member of a certain group or place. For people to feel like they belong, the environment (in this case, the workplace) needs to be set up to be a diverse and inclusive place. (SHRM)
Bias. Prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair. (Oxford English Dictionary)
Disability. A physical or mental impairment that substantially limits one or more major life activities of an individual; a record of such an impairment; or being regarded as having such an impairment. (Americans with Disabilities Act of 1990)
Discrimination. The unequal treatment of members of various groups, based on conscious or unconscious prejudice, which favors one group over others based on differences of race, gender, economic class, sexual orientation, physical ability, religion, language, age, national identity, and other categories. (University of Washington)
Diversity. Diversity includes all the ways in which people differ, and it encompasses all the different characteristics that make one individual or group different from another. It is all-inclusive and recognizes everyone and every group as part of the diversity that should be valued. A broad definition includes not only race, ethnicity, and gender—the groups that most often come to mind when the term “diversity” is used—but also age, national origin, religion, disability, sexual orientation, socioeconomic status, education, marital status, language, and physical appearance. It also involves different ideas, perspectives, and values. (Racial Equity Tools)
Equality. Evenly distributed access to resources and opportunity necessary for a safe and healthy life; uniform distribution of access that may or may not result in equitable outcomes. (University of Houston)
Equity. The fair treatment, access, opportunity, and advancement for all people, while at the same time striving to identify and eliminate barriers that prevent the full participation of some groups. The principle of equity acknowledges that there are historically underserved and underrepresented populations and that fairness regarding these unbalanced conditions is necessary to provide equal opportunities to all groups. (University of Washington)
Explicit Bias (or Conscious Bias). Refers to the attitudes and beliefs people hold about a person or group on a conscious level. Much of the time, these biases and their expression arise as the direct result of a perceived threat. When people feel threatened, they are more likely to draw group boundaries to distinguish themselves from others. Conscious bias in its extreme is characterized by overt negative behavior that can be expressed through physical and verbal harassment or through more subtle means such as exclusion. (Perception Institute and National Center for Cultural Competence @ Georgetown University)
Fairness. Just and reasonable treatment in accordance with accepted rules or principles. Treating all people equally and applying reasonable punishments only when rules are broken is an example of fairness. (Your Dictionary)
Implicit Bias. The process of associating stereotypes or attitudes towards categories of people without conscious awareness – which can result in actions and decisions that are at odds with one’s conscious beliefs about fairness and equality. Also referred to as unconscious bias. (National Equity Project)
Inclusion. The act of creating an environment in which any individual or group will be welcomed, respected, supported and valued as a fully participating member. An inclusive and welcoming climate embraces and respects differences. (University of Washington)
Intersectionality. The complex, cumulative way in which the effects of multiple forms of discrimination (such as racism, sexism, and classism) combine, overlap, or intersect, especially in the experiences of marginalized individuals or groups. Kimberlé Crenshaw introduced the theory of intersectionality: the idea that when it comes to thinking about how inequalities persist, categories like gender, race, and class are best understood as overlapping and mutually constitutive rather than isolated and distinct. (Merriam-Webster Dictionary)
Invisible Disability (Hidden Disability). An “invisible,” “non-visible,” “hidden,” “non-apparent,” or “unseen” disability is any physical, mental, or emotional impairment that goes largely unnoticed. An invisible disability can include but is not limited to: cognitive impairment and brain injury; the autism spectrum; chronic illnesses like multiple sclerosis, chronic fatigue, chronic pain, and fibromyalgia; d/Deaf and/or hard of hearing; blindness and/or low vision; anxiety, depression, PTSD, and many more. As the body is always changing, disability and chronic illness may be unstable or periodic throughout one’s life. (Invisible Disability Project)
Marginalization. The process of making a group or class of people less important or relegating them to a secondary position (e.g., when one class of people is grouped together as second-class citizens). Marginalization at the individual level results in an individual’s exclusion from meaningful participation in society. (Saybrook University)
Reasonable Accommodation. A reasonable accommodation is any change in the work environment (or in the way things are usually done) to help a person with a disability apply for a job, perform the duties of a job, or enjoy the benefits and privileges of employment. Reasonable accommodation might include, for example, making the workplace accessible for wheelchair users or providing a reader or interpreter for someone who is blind or hearing impaired. (U.S. Equal Employment Opportunity Commission)
Underrepresented Groups (URG). A group that is less represented in one subset (e.g., employees in a particular sector, such as IT) than in the general population. This can refer to gender, race/ethnicity, physical or mental ability, LGBTQ+ status, and many more. (IGI Global)
AI Ethics, Fairness & Design
Accountability (Equitable AI Principle). Organizations should be accountable for how they build and use AI-enabled workplace technologies, seeking feedback, addressing unintended uses, offering a way for people to contest and correct harmful automated decisions, and following organizational governance processes. (PEAT)
Affirmative Consent. A requirement in Ethical AI, and a part of Agency, that organizations secure approval (consent) from people on how or whether they interact with an AI-enabled system. The idea is that people understand exactly how their data will be used and that, if consent is given, their data is limited to that permitted use. The default for affirmative consent is “opt-in,” and if people elect not to opt in, affirmative consent requires that they not suffer any penalty or denial of access to platforms or services as a result. Unlike the terms of service that tech companies require people to click through to use their platforms, affirmative consent for AI cannot be coerced. (Adapted from the Algorithmic Justice League)
Agency. In AI, refers to a requirement in Ethical AI that people have agency and control over how they interact with AI-enabled systems. This means that people must first be aware of how the AI-enabled systems are being used, who is involved in creating and deploying them, and the risk and potential harms of using them. (Adapted from the Algorithmic Justice League)
AI Ethics. A set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies. These values, principles, and techniques are intended both to motivate morally acceptable practices and to prescribe the basic duties and obligations necessary to produce ethical, fair, and safe AI applications. (Turing Institute)
AI Ethics Board (or Committee). Provides thought leadership and guidance on how the organization researches and exploits AI technology and associated data. A key property of an AI ethics board is that it should be accountable to another body so that it can be challenged if there are any doubts regarding its behavior. (CMS)
Automated Decision System. A system that uses automated reasoning to aid or replace a decision-making process that would otherwise be performed by humans. All automated decision systems are designed by humans and involve some degree of human involvement in their operation. Humans are ultimately responsible for how a system receives its inputs (e.g., who collects the data that feeds into a system), how the system is used, and how a system’s outputs are interpreted and acted on. (AI Now Institute)
Bias (in AI). AI systems learn to make decisions based on training data, which can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed. (Harvard Business Review)
Co-Design. An approach to design that attempts to actively involve all stakeholders (e.g., employees, partners, customers, citizens, end users) in the design process to help ensure the result meets their needs and is usable. Also known as co-operative design or participatory design, co-design focuses on the processes and procedures of design and is not a design style. (Wikipedia)
Continuous Oversight. In the context of AI, refers to a requirement, related to Accountability, that AI receive continuous oversight by independent third parties, either on a voluntary basis or where laws require organizations to submit to it. Minimum supports for continuous oversight may include maintaining ongoing documentation, submitting to audit requirements, and allowing civil society organizations access for assessment and review. (Adapted from the Algorithmic Justice League)
Corporate Social Responsibility. A business model that helps a company be socially accountable to itself, its stakeholders, and the public. CSR initiatives seek to make a positive impact on local communities and the environment. It is the way through which a company achieves a balance of economic, environmental, and social imperatives. (UNIDO)
Data Privacy (Equitable AI Principle). AI-enabled workplace technologies should respect privacy and enable individuals to access, control, and securely share their personal data. (PEAT)
Employee Resource Groups (ERGs). An employer-sponsored or recognized affinity group of employees organized around common dimensions (e.g., similar backgrounds, experiences, or interests) to build community, network, share views, learn from others, further professional growth and development, improve product and service design, and drive business. (PEAT)
Equitable AI in the Workplace. AI technologies intentionally designed, developed, and implemented to result in equitable outcomes for everyone. In the workplace, Equitable AI aims to produce fairer outcomes, increase opportunities, and improve workplace success for all workers regardless of race, ethnicity, disability, age, gender identity or expression, religion, sexual orientation, or economic status. (PEAT)
Equitable AI Principles. A set of fundamental concepts that are aligned to various AI ethics frameworks. These principles can help organizations mitigate discrimination and bias to improve equitable outcomes when developing or implementing AI technologies. These principles include: Inclusive Empowerment, Equitable Design, Data Privacy, Reliability, Transparency, and Accountability. (PEAT)
Equitable Design (Equitable AI Principle). AI-enabled workplace technologies should treat all people fairly, be designed by diverse teams, offer accessible user interfaces, and have equitable outcomes. (PEAT)
Ethical AI. A branch of the ethics of technology specific to AI, concerned with both the moral behavior of humans that design, make, use and treat artificially intelligent systems, and the behavior of artificially intelligent systems and machines that perform automated tasks and decisions. (Wikipedia)
Explainable Artificial Intelligence (XAI). Efforts to ensure that artificial intelligence programs are transparent in their purposes and in how they work. Explainable AI is a common goal and objective for engineers and others trying to advance artificial intelligence. (Techopedia)
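One concrete explainability technique is permutation importance, sketched below in Python with scikit-learn and synthetic data: shuffling a feature and measuring the drop in accuracy indicates how much the model relies on it. The dataset and model choice are assumptions made for illustration.

```python
# A minimal XAI sketch: permutation importance asks how much accuracy
# drops when each input feature is shuffled -- one simple way to make
# "how it works" visible.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=200, n_features=4, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")  # higher = more influential
```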
Fairness in AI. The degree to which the results of using artificially intelligent systems are similar for different groups or individuals. Group fairness requires that different groups—defined by specific attributes often protected by laws (e.g., gender identity or expression, race or ethnicity, disability status)—have similar outcomes when facing the same situation. Individual fairness requires that similar individuals have equivalent outcomes considering the context. (Kenn So, “A primer on AI fairness: What it is and the tradeoffs to be made,” Medium, July 13, 2019)
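Group fairness can be checked with a simple disparity calculation, as in the Python sketch below; the decisions and group labels are invented for illustration.

```python
# A minimal group-fairness sketch: compare a model's favorable-outcome
# rate across two groups (a demographic-parity check).
decisions = [1, 0, 1, 1, 0, 1, 0, 0]          # 1 = favorable outcome
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

def selection_rate(group: str) -> float:
    """Share of favorable outcomes among members of `group`."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}")   # 75% vs 25%
print(f"disparity: {abs(rate_a - rate_b):.0%}")          # large gap -> unfair
```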
Impermissible Use. Refers to a requirement in Ethical AI that organizations prevent AI from being used by those with power to increase their absolute level of control, particularly where it would automate long-standing patterns of injustice (e.g., racial profiling, race/gender/disability bias in hiring, etc.), where lethal force is an option, or in surveillance. (Adapted from the Algorithmic Justice League)
Inclusive Empowerment (Equitable AI Principle). AI-enabled workplace technologies should engage and empower everyone, including people with disabilities. (PEAT)
Redress Harms. In AI, refers to a requirement that is part of Accountability, that organizations provide people who have been harmed with access to remedy, meaning that there is a working pathway for people to contest and correct a harmful decision made by AI. For example, if an AI-enabled system was suspected of disqualifying a job applicant based on gender, race, or disability, remedy would allow the applicant to discover how the decision was made and provide a basis for challenging the decision in court. (Adapted from the Algorithmic Justice League)
Reliability (Equitable AI Principle). AI-enabled workplace technologies should support human oversight, have explicit, well-defined uses, and be measurable in terms of effectiveness for those uses. (PEAT)
Responsible AI. A governance framework that documents how a specific organization is addressing the challenges around artificial intelligence (AI) from both an ethical and legal point of view. Resolving ambiguity for where responsibility lies if something goes wrong is an important driver for responsible AI initiatives. (TechTarget)
Risk Management. A process for ensuring systems are trustworthy by design by establishing a methodology for identifying risks and mitigating their potential impact. Rather than evaluating a product or service against a static set of prescriptive requirements that quickly become outdated, risk management seeks to integrate compliance responsibilities into the development pipeline to help mitigate risks throughout a product or service’s lifecycle. Effective risk management is anchored around a governance framework that promotes collaboration between an organization’s development team and its compliance personnel at key points during the design, development, and deployment of a product. (BSA Framework to Build Trust in AI)
Surveillance (in AI). Using AI to monitor behavior, activities, or information for the purpose of information gathering, influencing, managing, or directing. Example methods of surveillance include computers, telephones, cameras, biometric and aerial surveillance, corporate systems, data mining, profiling, wireless tracking, the internet of things, and social network analysis. (Wikipedia)
Transparency (Equitable AI Principle). Organizations should be transparent about when and how they are using AI, and provide documentation of data sources, design, privacy, security, and reliability. Critically, meaningful transparency allows people to clearly understand the intended capabilities and known limitations of the AI. To demonstrate meaningful transparency, organizations must share information about how AI is being used in their own decision-making processes and sold to others. The goal is for people to understand the societal risks every time they encounter an AI system and how their data is being used by people in power to make decisions that affect them. Sharing this information may be supported by reporting requirements that are mandated through law or agreed to through codes of conduct. (Adapted from the Algorithmic Justice League)