Glossary Flashcards
Accountability
The obligation and responsibility of the creators, operators and regulators of an AI system to ensure the system operates in a manner that is ethical, fair, transparent and compliant with applicable rules and regulations (see fairness and transparency). Accountability ensures the actions, decisions and outcomes of an AI system can be traced back to the entity responsible for it.
Active learning
A subfield of AI and machine learning in which an algorithm can select some of the data it learns from. Instead of learning from all the data it is given, an active learning model requests additional data points that will help it learn most effectively.
→ Also called query learning.
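A minimal uncertainty-sampling sketch, assuming scikit-learn and NumPy are available; the synthetic dataset, model choice and query budget are illustrative placeholders, not a prescribed method:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, random_state=0)
    # Start with a few labeled examples from each class; the rest are "unlabeled".
    labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
    unlabeled = [i for i in range(len(X)) if i not in labeled]

    model = LogisticRegression(max_iter=1000)
    for _ in range(5):                                   # five query rounds
        model.fit(X[labeled], y[labeled])
        proba = model.predict_proba(X[unlabeled])
        uncertainty = 1 - proba.max(axis=1)              # least-confident sampling
        query = unlabeled[int(np.argmax(uncertainty))]   # most informative point
        labeled.append(query)                            # "ask the oracle" to label it
        unlabeled.remove(query)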
Adversarial machine learning
A machine learning technique that poses a safety and security risk to the model and can be seen as an attack. These attacks can be instigated by manipulating the model, such as by introducing malicious or deceptive input data. Such attacks can cause the model to malfunction and generate incorrect or unsafe outputs, which can have significant impacts. For example, manipulating the inputs of a self-driving car may fool the model into perceiving a red light as a green one, adversely impacting road safety.
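An illustrative evasion-style sketch, assuming scikit-learn and NumPy; the linear model and perturbation size are placeholders chosen only to show the idea of nudging an input in the direction that increases the model's error:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    model = LogisticRegression().fit(X, y)

    x, label = X[0], y[0]
    w = model.coef_[0]
    # Gradient of the logistic loss with respect to the input: (p - y) * w.
    p = model.predict_proba([x])[0, 1]
    grad = (p - label) * w
    x_adv = x + 0.5 * np.sign(grad)     # small crafted perturbation of the input

    print(model.predict([x]), model.predict([x_adv]))   # the prediction may flip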
AI governance
A system of laws, policies, frameworks, practices and processes at international, national and organizational levels. AI governance helps various stakeholders implement, manage and oversee the use of AI technology. It also helps manage associated risks to ensure AI aligns with stakeholders’ objectives, is developed and used responsibly and ethically, and complies with applicable requirements.
Algorithm
A procedure or set of instructions and rules that a computer follows to perform a specific task or solve a particular problem.
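A simple illustrative sketch in Python (the values are arbitrary): binary search is an algorithm, a fixed set of rules for locating a value in a sorted list.

    def binary_search(items, target):
        # Repeatedly halve the search range until the target is found.
        low, high = 0, len(items) - 1
        while low <= high:
            mid = (low + high) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1   # not found

    print(binary_search([1, 3, 5, 7, 9], 7))   # -> 3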
Artificial general intelligence
AI that is considered to have human-level intelligence and strong generalization capability to achieve goals and carry out a variety of tasks in different contexts and environments. AGI remains a theoretical field of research and is contrasted with narrow AI, which is used for specific tasks or problems.
→ Acronym: AGI
Artificial intelligence
Artificial intelligence is a broad term used to describe an engineered system that uses various computational techniques to perform or automate tasks. This may include techniques such as machine learning, where machines learn from experience, adjusting to new input data and potentially performing tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior in computers. It may include automated decision-making.
→ Acronym: AI
Automated decision-making
The process of making a decision by technological means without human involvement, either in whole or in part.
Bias
There are several types of bias within the AI field. Computational bias is a systematic error or deviation from the true value of a prediction that originates from a model’s assumptions or the input data itself. Cognitive bias refers to inaccurate individual judgment or distorted thinking, while societal bias leads to systemic prejudice, favoritism and/or discrimination in favor of or against an individual or group. Bias can impact outcomes and pose a risk to individual rights and liberties.
Bootstrap aggregating
A machine learning method that aggregates multiple versions of a model (see machine learning model) trained on random subsets of a dataset. This method aims to make a model more stable and accurate.
→ Sometimes referred to as bagging.
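A minimal bagging sketch, assuming scikit-learn; the dataset and number of estimators are placeholders, and scikit-learn's default base model here is a decision tree:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier

    X, y = make_classification(n_samples=300, random_state=0)
    # Each of the 25 models is trained on a random bootstrap sample of the data;
    # their predictions are aggregated (majority vote) into one, more stable model.
    bagged = BaggingClassifier(n_estimators=25, random_state=0).fit(X, y)
    print(bagged.score(X, y))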
Chatbot
A form of AI designed to simulate human-like conversations and interactions that uses natural language processing and deep learning to understand and respond to text or other media. Because chatbots are often used for customer service and other personal help applications, they often ingest users’ personal information.
Classification model
A type of model (see machine learning model) used in machine learning that is designed to take input data and sort it into different categories or classes.
→ Sometimes referred to as classifiers.
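A minimal classifier sketch, assuming scikit-learn; the iris dataset and logistic regression model are illustrative choices only:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)                  # three classes of iris flowers
    clf = LogisticRegression(max_iter=1000).fit(X, y)  # learns to sort inputs into classes
    print(clf.predict(X[:3]))                          # predicted class labels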
Clustering
An unsupervised machine learning method where patterns in the data are identified and evaluated, and data points are grouped accordingly into clusters based on their similarity.
→ Sometimes referred to as clustering algorithms.
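A minimal clustering sketch, assuming scikit-learn; k-means and the synthetic blob data are illustrative placeholders:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=200, centers=3, random_state=0)  # labels are not used
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_[:10])   # cluster assigned to each data point by similarity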
Compute
Refers to the processing resources available to a computer system, including hardware components such as the central processing unit or graphics processing unit. Compute is essential for storing and processing data, running applications, rendering graphics for visual media and powering cloud computing, among other tasks.
Computer vision
A field of AI that enables computers to process and analyze images, videos and other visual inputs.
Conformity assessment
An analysis, often performed by a third-party body, of an AI system to determine whether requirements, such as establishing a risk-management system, data governance, record keeping, transparency and cybersecurity practices, have been met.
→ Often referred to as an audit.
Contestability
The principle of ensuring that AI systems and their decision-making processes can be questioned or challenged. This ability to contest or challenge the outcomes, outputs and/or actions of AI systems can help promote transparency and accountability within AI governance.
→ Also called redress.
Corpus
A large collection of texts or data that a computer uses to find patterns, make predictions or generate specific outcomes. The corpus may include structured or unstructured data and cover a specific topic or a variety of topics.
Decision tree
A type of supervised learning model used in machine learning (see machine learning model) that represents decisions and their potential consequences in a branching structure.
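A minimal decision tree sketch, assuming scikit-learn; the iris dataset and depth limit are placeholders:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree))   # the learned branching structure of decisions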
Deep learning
A subfield of AI and machine learning that uses artificial neural networks. Deep learning is especially useful in fields where raw data needs to be processed, like image recognition, natural language processing and speech recognition.
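A small neural-network sketch, assuming scikit-learn; a multi-layer perceptron on raw pixel values stands in here for the much larger networks used in practice, and the layer sizes are arbitrary:

    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)   # raw 8x8 pixel values of handwritten digits
    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                        random_state=0).fit(X, y)
    print(net.score(X, y))                # accuracy on the training data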
Deepfakes
Audiovisual content that has been altered or manipulated using AI techniques. Deepfakes can be used to spread misinformation and disinformation.
Discriminative model
A type of model (see machine learning model) used in machine learning that directly maps input features to class labels and analyzes for patterns that can help distinguish between different classes. It is often used for text classification tasks, like identifying the language of a piece of text. Examples are traditional neural networks, decision trees and random forests.
Disinformation
Audiovisual content, information and synthetic data that is intentionally manipulated or created to cause harm. Disinformation can be spread through deepfakes by those with malicious intentions.
Entropy
The measure of unpredictability or randomness in a set of data used in machine learning. A higher entropy signifies greater uncertainty in predicting outcomes.
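A worked sketch of Shannon entropy in Python (the label lists are illustrative): H = -sum(p * log2(p)) over the proportions p of each outcome.

    import math
    from collections import Counter

    def entropy(labels):
        # Shannon entropy over the observed class proportions.
        counts = Counter(labels)
        total = len(labels)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    print(entropy(["a", "a", "a", "a"]))   # 0.0 -> outcome is perfectly predictable
    print(entropy(["a", "b", "a", "b"]))   # 1.0 -> maximum uncertainty for two classes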
Expert system
A form of AI that draws inferences from a knowledge base to replicate the decision-making abilities of a human expert within a specific field, such as medical diagnosis.
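A toy rule-based sketch in Python; the rules, symptoms and conclusions are made up for illustration only and are not medical advice:

    # Knowledge base of if-then rules.
    rules = [
        ({"fever", "cough"}, "possible flu"),
        ({"sneezing", "itchy eyes"}, "possible allergy"),
    ]

    def infer(symptoms):
        # Fire every rule whose conditions are all present in the input.
        return [conclusion for conditions, conclusion in rules if conditions <= symptoms]

    print(infer({"fever", "cough", "headache"}))   # ['possible flu']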
Explainability
The ability to describe or provide sufficient information about how an AI system generates a specific output or arrives at a decision in a specific context to a predetermined addressee. XAI is important in maintaining transparency and trust in AI.
→ Acronym: XAI
Exploratory data analysis
Data discovery process techniques that take place before training a machine learning model in order to gain preliminary insights into a dataset, such as identifying patterns, outliers and anomalies, and finding relationships among variables.
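A minimal exploratory data analysis sketch, assuming pandas; the tiny made-up dataset is a placeholder for real training data:

    import pandas as pd

    df = pd.DataFrame({"age": [25, 32, 47, 51, 38],
                       "income": [40, 55, 80, 90, 60]})
    print(df.describe())    # central tendency, spread and hints of outliers
    print(df.isna().sum())  # missing values per column
    print(df.corr())        # relationships among variables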