Key Terms Flashcards
Accountability
The obligations and responsibilities of an AI system’s developers and deployers to ensure the system operates in a manner that is ethical, fair, transparent and compliant with applicable rules and regulations (see also fairness and transparency). Accountability ensures the actions, decisions and outcomes of an AI system can be traced back to the entity responsible for it.
Accuracy
The degree to which an AI system correctly performs its intended task. It is the measure of the system’s performance and effectiveness in producing correct outputs based on its input data. Accuracy is a critical metric in evaluating the reliability of an AI model, especially in applications requiring high precision, such as medical diagnoses.
Active Learning
A subfield of AI and machine learning in which an algorithm selects some of the data it learns from. Instead of learning from all the data it is given, an active learning model requests the additional data points from which it expects to learn the most.
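As an illustrative sketch only (the function and scorer below are hypothetical, not part of any standard library), a common active learning strategy is uncertainty sampling: the model requests a label for the point it is least sure about.

```python
import math

def most_uncertain(unlabeled, predict_proba):
    """Uncertainty sampling: request a human label for the point whose
    predicted probability is closest to 0.5 (where the model is least sure)."""
    return min(unlabeled, key=lambda x: abs(predict_proba(x) - 0.5))

# Hypothetical current model: a logistic curve centered at 5.0,
# confident far from the center and unsure near it.
proba = lambda x: 1 / (1 + math.exp(5.0 - x))
pool = [0.0, 2.0, 4.9, 8.0]
print(most_uncertain(pool, proba))  # 4.9, the point nearest the boundary
```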
Adaptive Learning
A method that adjusts and tailors educational content to the specific needs, abilities and learning pace of individual students. The purpose of adaptive learning is to provide a personalized and optimized learning experience, catering to the diverse learning styles of students.
Adversarial Attack
A safety and security risk to the AI model that can be instigated by manipulating the model, such as by introducing malicious or deceptive input data. Such attacks can cause the model to malfunction and generate incorrect or unsafe outputs, which can have significant impacts. For example, manipulating the inputs of a self-driving car may fool the model into perceiving a red light as a green one, adversely impacting road safety.
AI Assurance
A combination of frameworks, policies, processes and controls that measure, evaluate and promote safe, reliable and trustworthy AI. AI assurance schemes may include conformity, impact and risk assessments, AI audits, certifications, testing and evaluation, and compliance with relevant standards.
AI Audit
A review and assessment of an AI system to ensure it operates as intended and complies with relevant laws, regulations and standards. An AI audit can help identify and map risks and offer mitigation strategies.
AI Governance
A system of laws, policies, frameworks, practices and processes at international, national and organizational levels. AI governance helps various stakeholders implement, manage, oversee and regulate the development, deployment and use of AI technology. It also helps manage associated risks to ensure AI aligns with stakeholders’ objectives, is developed and used responsibly and ethically, and complies with applicable legal and regulatory requirements.
Algorithm
A procedure or set of instructions and rules designed to perform a specific task or solve a particular problem using a computer.
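A classic example of an algorithm is binary search, a finite, well-defined procedure for locating an item in a sorted list; a minimal Python sketch:

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range until the target is found
    (return its index) or the range is empty (return -1)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 16], 12))  # 3
```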
Artificial General Intelligence
AI that is considered to have human-level intelligence and strong generalization capability to achieve goals and carry out a broad range of tasks in different contexts and environments. AGI remains a theoretical field of research. It is contrasted with “narrow” AI, which is used for specific tasks or problems.
Artificial Intelligence
Artificial intelligence is a broad term used to describe an engineered system that uses various computational techniques to perform or automate tasks. This may include techniques, such as machine learning, in which machines learn from experience, adjusting to new input data and potentially performing tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior in computers. It may include automated decision-making.
Automated Decision Making
The process of making a decision by technological means without human involvement, either in whole or in part.
Bias
There are several types of bias within the AI field.
- Computational bias or machine bias is a systematic error or deviation from the true value of a prediction that originates from a model’s assumptions or the data itself (see also input data).
- Cognitive bias refers to inaccurate individual judgment or distorted thinking, while societal bias leads to systemic prejudice, favoritism and/or discrimination in favor of or against an individual or group.
Any of these biases may permeate the model or the system in numerous ways, such as through selection bias, i.e. bias in selecting data for model training. Bias can impact outcomes and pose a risk to individual rights and liberties.
Bootstrap Aggregating
A machine learning method that aggregates multiple versions of a model (see also machine learning model) trained on random subsets of a dataset. This method aims to make a model more stable and accurate.
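A minimal sketch of the idea, assuming a toy base learner (the decision "stump" below is our own simplification, not a standard API): each model trains on a bootstrap sample drawn with replacement, and the ensemble predicts by majority vote.

```python
import random
from collections import Counter

def train_stump(xs, ys):
    """Fit a 1-D decision stump: pick the threshold (from the data
    itself) that best separates the two labels."""
    best = None
    for t in xs:
        acc = sum((1 if x > t else 0) == y for x, y in zip(xs, ys))
        if best is None or acc > best[0]:
            best = (acc, t)
    threshold = best[1]
    return lambda x: 1 if x > threshold else 0

def bagged_ensemble(xs, ys, n_models=25, seed=0):
    """Bagging: train each stump on a bootstrap sample (drawn with
    replacement), then predict by majority vote across all stumps."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        models.append(train_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    def predict(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]
    return predict

xs = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]
ys = [0, 0, 0, 1, 1, 1]
model = bagged_ensemble(xs, ys)
print(model(0.05), model(0.95))  # 0 1
```

Averaging over many bootstrap-trained models smooths out the quirks of any single training sample, which is what makes the combined model more stable.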
Chatbot
A form of AI designed to simulate human-like conversations and interactions that uses natural language processing and deep learning to understand and respond to text or speech.
Classification Model
A type of model (see also machine learning model) used in machine learning that is designed to take input data and sort it into different categories or classes.
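As a hedged illustration of sorting inputs into classes, here is a tiny nearest-centroid classifier written from scratch (the function names are ours, not a library API): each class is summarized by the mean of its training points, and new inputs get the label of the closest centroid.

```python
import math

def nearest_centroid_fit(points, labels):
    """Compute one centroid (mean point) per class, then classify new
    points by whichever centroid is closest."""
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(c) / len(members) for c in zip(*members))
    def predict(p):
        return min(centroids, key=lambda l: math.dist(p, centroids[l]))
    return predict

# Two toy classes in a 2-D feature space.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
labels = ["small", "small", "small", "large", "large", "large"]
predict = nearest_centroid_fit(points, labels)
print(predict((2, 2)), predict((7, 9)))  # small large
```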
Clustering
An unsupervised machine learning method in which patterns in the data are identified and evaluated, and data points are grouped accordingly into clusters based on their similarity.
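The best-known clustering algorithm is k-means; a minimal 1-D sketch (our own simplified implementation, not a library call) alternates between assigning points to their nearest centroid and moving each centroid to the mean of its cluster:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Minimal 1-D k-means: repeatedly assign each point to its
    nearest centroid, then move each centroid to its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(kmeans_1d(data, 2))  # two centroids, near 1.0 and 10.0
```

Note that no labels are supplied anywhere: the groups emerge purely from the similarity (here, distance) between points, which is what makes the method unsupervised.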
Compute
The processing resources available to a computer system, including hardware components such as the central processing unit and graphics processing unit. Compute is essential for processing data, running applications, rendering graphics for visual media and powering cloud computing, among other functions, and works in concert with memory and storage.
Computer Vision
A field of AI that uses computers to process and analyze images, videos and other visual inputs. Common applications of computer vision include facial recognition, object recognition and medical imaging.
Conformity Assessment
An analysis of an AI system, often performed by an entity independent of the model developer, to determine whether requirements, such as establishing a risk management system, data governance, record-keeping, transparency and cybersecurity practices, have been met.
Contestability
The principle of ensuring AI systems and their decision-making processes can be questioned or challenged by humans. This ability to contest or challenge the outcomes, outputs and actions of AI systems depends on transparency and helps promote accountability within AI governance.
Corpus
A large collection of texts or data that a computer uses to find patterns, make predictions or generate specific outcomes. The corpus may include structured or unstructured data and cover a specific topic or a variety of topics.
Data Leak
An accidental exposure of sensitive, personal, confidential or proprietary data. This can be a result of poor security defenses, human error, storage misconfigurations or a lack of robust policies around internal and external data sharing practices. Unlike a data breach, a data leak is unintentional and not done in bad faith.
Data Poisoning
An adversarial attack in which a malicious user injects false data into a model to manipulate the training process, thereby corrupting the learning algorithm. The goal is to introduce intentional errors into the training dataset, leading to compromised performance and resulting in undesired, misleading or harmful outputs.
Data Provenance
A process that tracks and logs the history and origin of records in a dataset, encompassing the entire life cycle from creation and collection through transformation to the current state. It includes information about sources, processes, actors and methods used to ensure data integrity and quality. Data provenance is essential for data transparency and governance, and it promotes better understanding of the data and eventually the entire AI system.
Data Quality
The measure of how well a dataset meets the specific requirements and expectations for its intended use. Data quality directly impacts the quality of AI outputs and the performance of an AI system. High-quality data is accurate, complete, valid, consistent, timely and fit for purpose.
Decision Tree
A type of supervised learning model used in machine learning (see also machine learning model) that represents decisions and their potential consequences in a branching structure.
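To make the branching structure concrete, here is a hand-written toy tree (nodes and feature names are invented for illustration): each internal node tests one feature, and following the branches leads to a leaf holding the prediction.

```python
# Each internal node asks a threshold question about one feature;
# leaves hold the predicted class.
tree = {
    "question": ("temperature_f", 70),     # is temperature_f > 70?
    "yes": {"leaf": "shorts"},
    "no": {
        "question": ("raining", 0),        # is raining > 0 (i.e. true)?
        "yes": {"leaf": "raincoat"},
        "no": {"leaf": "jacket"},
    },
}

def predict(node, features):
    """Walk from the root to a leaf by answering each node's question."""
    while "leaf" not in node:
        feat, threshold = node["question"]
        node = node["yes"] if features[feat] > threshold else node["no"]
    return node["leaf"]

print(predict(tree, {"temperature_f": 80, "raining": 0}))  # shorts
print(predict(tree, {"temperature_f": 50, "raining": 1}))  # raincoat
```

In practice the questions and thresholds are learned from labeled training data rather than written by hand, which is what makes it a supervised learning model.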
Deep Learning
A subfield of AI and machine learning that uses artificial neural networks. Deep learning is especially useful in fields where raw data needs to be processed, like image recognition, natural language processing and speech recognition.
Deepfakes
Audio or visual content that has been altered or manipulated using artificial intelligence techniques. Deepfakes can be used to spread misinformation and disinformation.
Diffusion Model
A generative model used in image generation that works by iteratively refining a noise signal to transform it into a realistic image when prompted.
Discriminative Model
A type of model (see also machine learning model) used in machine learning that directly maps input features to class labels and learns patterns that help distinguish between different classes. It is often used for text classification tasks, like identifying the language of a piece of text or detecting spam. Examples are traditional neural networks, decision trees and random forests.
Disinformation
Content, including audio or visual media, that is intentionally manipulated or created to cause harm. Disinformation can spread through deepfakes created by actors with malicious intentions.
Entropy
The measure of unpredictability or randomness in a set of data used in machine learning. A higher entropy signifies greater uncertainty in predicting outcomes.
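The standard formulation here is Shannon entropy; a small, self-contained sketch (the function name is ours) computes it over a set of class labels:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy, in bits, of a sequence of class labels:
    H = -sum(p * log2(p)) over the proportion p of each label."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Evenly split labels are maximally unpredictable; identical labels
# are perfectly predictable.
print(entropy(["a", "b", "a", "b"]))  # 1.0
```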
Expert System
A form of rules-based AI that draws inferences from a knowledge base provided by human experts to replicate their decision-making abilities within a specific field, like medical diagnoses.
Explainability
The ability to describe or provide sufficient information about how an AI system generates a specific output or arrives at a decision in a specific context. Explainability is important in maintaining transparency and trust in AI.
Exploratory Data Analysis
Data discovery techniques applied before training a machine learning model to gain preliminary insights into a dataset, such as identifying patterns, outliers and anomalies and finding relationships among variables.
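A minimal sketch of this kind of pre-training check, using only Python's standard library (the dataset and the outlier rule are invented for illustration): summary statistics can surface a suspicious value before any model sees the data.

```python
import statistics

data = [12, 15, 14, 13, 102, 14, 16, 15]  # one suspicious value

mean = statistics.mean(data)      # 25.125, pulled up by the outlier
median = statistics.median(data)  # 14.5, robust to the outlier
stdev = statistics.stdev(data)

# A mean far above the median hints at outliers worth inspecting;
# here we flag points more than two standard deviations from the median.
outliers = [x for x in data if abs(x - median) > 2 * stdev]
print(f"mean={mean:.1f} median={median} outliers={outliers}")
```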