IAPP AIGP Flashcards
General terms, including definitions from ISO
What is AI?
Machines performing tasks that normally require human intelligence.
What is the Turing Test?
A test to determine if a machine is intelligent by assessing if its responses can fool an interviewer into thinking it is human.
What are common elements of AI definitions?
Technology, Autonomy, Human involvement, Output.
How is AI described as a socio-technical system?
AI influences society and society also influences AI.
What are the risks involved in using AI?
Complexity and the fact that data changes over time.
What is the OECD Framework for classifying AI systems?
A framework that helps organizations classify AI systems and examine risk.
What are the five dimensions of the OECD Framework?
People and planet dimension, Economic context dimension, Data and input dimension, AI model dimension, Tasks and Output dimension.
What are the high-level categories of AI?
Artificial narrow intelligence (ANI), Artificial general intelligence (AGI), Artificial super intelligence (ASI).
What is Artificial Narrow Intelligence (ANI)?
Systems that perform specific, narrowly-defined tasks.
What is Artificial General Intelligence (AGI)?
Human level intelligence; reasoning, learning, and solving problems. AGI does not exist yet.
What is Artificial Super Intelligence (ASI)?
Systems that would surpass human intelligence; experts believe ASI is feasible if AGI is achieved.
What is Broad Artificial Intelligence?
A category of AI more advanced than ANI, capable of performing a broader set of tasks but not sophisticated enough to be considered AGI.
What are some opportunities provided by AI?
Faster and more accurate results, improved medical assessments, legal predictions, processing large volumes of data, and reducing human error and bias.
What are some AI use cases?
Recognition (image, speech, face), Detection (cyber events), Forecasting (sales, revenue, transportation), Personalization (personal profiles, unique experience), Goal-driven optimization (supply chain, driving routes).
What is AI governance?
AI governance is an organization’s approach to using laws, policies, frameworks, practices, and processes at international, national, and organizational levels.
What is the purpose of AI governance?
To help stakeholders in implementing, managing, overseeing, and regulating the development, deployment, and use of AI technology.
How does AI governance manage associated risks?
By ensuring AI aligns with stakeholders’ objectives, is developed and used responsibly and ethically, and complies with applicable requirements.
What issues can AI governance help address?
Bias, privacy impacts, misuse, while also helping to increase innovation and trust.
What are AI governance principles?
A set of values.
What is an AI governance framework?
A means to operationalize AI governance principles.
Why is it important for governance professionals to understand common AI models?
To apply sound governance practices and ensure ethical and responsible use of AI technologies.
What do linear and statistical models do?
Model the relationship between two variables.
What do decision tree models predict?
An outcome based on a flowchart of questions and answers.
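A minimal sketch of this idea (not from the IAPP materials), assuming scikit-learn is available; the study-hours features, labels, and threshold values are invented for illustration. The printed tree is the flowchart of questions the model learned.

```python
# Toy decision tree: learn a small flowchart of questions, then print it
# and predict the outcome for a new example.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[1, 4], [2, 8], [8, 7], [9, 5], [3, 6], [7, 8]]   # [hours_studied, hours_slept]
y = [0, 0, 1, 1, 0, 1]                                  # 0 = fail, 1 = pass

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["hours_studied", "hours_slept"]))
print(tree.predict([[6, 7]]))   # predicted outcome for a new student
```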
What characterizes machine learning models?
They often act as black boxes, lacking transparency and explainability.
What is a neural network type used for computer vision?
A computer vision model, which processes images and videos.
What is a neural network type used for speech recognition?
Speech recognition model – Alexa or transcription software.
What do reinforcement learning models rely on?
Trial and error interaction.
What is the role of robotics in AI?
Allow AI systems and software to interact with the physical world without human intervention.
What is deep learning?
A subset of machine learning inspired by the human brain that uses neural networks.
What tasks is deep learning well-suited for?
Image recognition, natural language processing, and speech recognition.
What is required for deep learning models to perform effectively?
Large datasets.
What can generative AI models do?
Generate new data based on learning from training data.
Give an example of generative AI.
OpenAI’s DALL-E, Stable Diffusion.
What do multimodal models process?
Information from multiple modalities (e.g., text, images, audio, video).
What is an example of a multimodal model?
Claude, an AI assistant by Anthropic.
What are transformer models known for?
Revolutionizing natural language processing and generating contextually sensitive output.
What is deep learning?
A subset of machine learning using multi-layered neural networks to simulate the complexities of the human brain.
What are the benefits of deep learning over traditional machine learning?
Efficient processing of unstructured data, hidden relationships and pattern discovery, unsupervised learning capabilities.
Define generative AI.
Deep learning models that can generate text, images, video, and other output based on their training data.
What is the process of teaching AI systems called?
Machine learning.
How do machine learning systems improve over time?
By learning and making decisions repeatedly without being explicitly instructed.
List the types of machine learning.
- Supervised learning
- Unsupervised learning
- Reinforcement learning
What is the goal of supervised learning?
Accurate predictions of the output of new data.
What is a challenge of supervised learning?
Requires a large amount of labeled data; labeling may introduce bias.
What are the two subcategories of supervised learning models?
- Classification models
- Prediction models
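A rough sketch of the two subcategories, assuming scikit-learn; the toy inputs, labels, and targets are invented. A classification model predicts a category, while a prediction (regression) model predicts a numeric value.

```python
# Classification vs. prediction (regression) on tiny labeled datasets.
from sklearn.linear_model import LinearRegression, LogisticRegression

X = [[1.0], [2.0], [3.0], [4.0], [5.0]]   # labeled inputs
y_class = [0, 0, 0, 1, 1]                 # categorical labels
y_value = [1.9, 4.1, 6.2, 7.8, 10.1]      # numeric targets

clf = LogisticRegression().fit(X, y_class)   # classification model
reg = LinearRegression().fit(X, y_value)     # prediction (regression) model

print(clf.predict([[3.5]]))   # predicted class for new data
print(reg.predict([[3.5]]))   # predicted numeric output for new data
```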
What type of data does unsupervised learning use?
Unlabeled data.
What is the goal of unsupervised learning?
Find patterns, structures, or relationships without predefined targets.
What is a strength of unsupervised learning?
Discovering hidden patterns and insights in data.
What are the two categories of unsupervised learning models?
- Clustering
- Association rule learning
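A minimal clustering sketch, assuming scikit-learn; the points are invented. K-means groups the unlabeled points by similarity without any predefined target.

```python
# Clustering unlabeled points into two groups with K-means.
from sklearn.cluster import KMeans

X = [[1.0, 1.0], [1.5, 2.0], [1.0, 1.5],   # one tight group of points
     [8.0, 8.0], [8.5, 9.0], [9.0, 8.0]]   # a second, distant group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)            # discovered group for each point
print(kmeans.cluster_centers_)   # centre of each discovered group
```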
Define semi-supervised learning.
Uses a small amount of labeled data and a large amount of unlabeled data in training.
What is a challenge of semi-supervised learning?
Ensuring the quality of the labeled data and designing an algorithm that can leverage both labeled and unlabeled data.
What are large language models (LLMs)?
AI using deep learning algorithms trained on massive text datasets to analyze and learn patterns among characters, words, and phrases.
Provide examples of semi-supervised learning models.
- Image analysis
- Speech analysis
- Categorization of web page search results
- ChatGPT
- DALL-E
What is reinforcement learning?
Learning to make decisions by interacting with the environment and receiving feedback through rewards and penalties.
What is a strength of reinforcement learning?
Can learn complex behaviors without explicit supervision.
What is a challenge of reinforcement learning?
Designing an appropriate reward mechanism.
Fill in the blank: The learning method that uses trial and error is called _______.
Reinforcement learning.
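A toy trial-and-error sketch in plain Python (not from the course material): an epsilon-greedy agent repeatedly chooses among three invented "arms", receives a reward or nothing, and nudges its value estimates toward what it observes.

```python
# Epsilon-greedy bandit: learn which arm pays best purely by trial and error.
import random

true_win_prob = [0.2, 0.5, 0.8]    # hidden reward rates (invented)
value_estimate = [0.0, 0.0, 0.0]   # the agent's learned estimates
pulls = [0, 0, 0]
epsilon = 0.1                      # fraction of the time spent exploring

random.seed(0)
for _ in range(2000):
    if random.random() < epsilon:                        # explore a random arm
        arm = random.randrange(3)
    else:                                                # exploit the best guess
        arm = value_estimate.index(max(value_estimate))
    reward = 1 if random.random() < true_win_prob[arm] else 0
    pulls[arm] += 1
    # incremental average: move the estimate toward the observed reward
    value_estimate[arm] += (reward - value_estimate[arm]) / pulls[arm]

print(value_estimate)   # roughly recovers 0.2, 0.5, 0.8
```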
What is natural language processing (NLP)?
Machine learning technology enabling computers and digital devices to recognize, understand and generate text and speech
NLP combines computational linguistics and machine learning models to process human language.
What are common text processing and analyzing tasks in NLP?
- Tagging parts of speech
- Word sense disambiguation
- Speech recognition
- Sentiment analysis
These tasks help in interpreting and analyzing human language.
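A deliberately simplified sketch of two of these tasks in plain Python; the word lists are invented, and real NLP systems use trained models rather than fixed lexicons.

```python
# Toy tokenization and lexicon-based sentiment analysis.
POSITIVE = {"great", "good", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def tokenize(text):
    """Split text into lowercase word tokens, stripping basic punctuation."""
    return [w.strip(".,!?").lower() for w in text.split()]

def sentiment(text):
    """Score text by counting positive vs. negative words."""
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(tokenize("I love this product!"))    # ['i', 'love', 'this', 'product']
print(sentiment("I love this product!"))   # positive
```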
What are common use cases of NLP?
- Spam detection
- Language translation
- Chatbots
- Speech to text
NLP applications are widespread across various industries.
Which major companies are advancing NLP?
- Microsoft
- IBM
- Apple
- Amazon
These companies are heavily investing in NLP technologies.
What characterizes classic models in NLP?
Typically focused on specific tasks with deterministic outputs, relying on structured algorithms and fixed rules, such as decision trees or linear regression
Classic models analyze data and make predictions.
What are generative models in NLP?
Models like GPTs that can create new data instances resembling training data
They learn the underlying distribution of the input data.
What is the primary function of proprietary models?
Developed by specific organizations, usually restricted in access and use for commercial applications
This can limit transparency and independent auditing.
What are open-source models?
Publicly available models for anyone to use, modify, and distribute
They promote collaboration and innovation but may carry risks regarding quality control and security.
Fill in the blank: NLP combines computational linguistics and _______ to process human language.
[machine learning models]
True or False: Generative models can only analyze existing data without creating new instances.
False
Term
Definition
Accountability
The obligation of individuals or organizations to accept responsibility for the ethical development, deployment, and use of AI systems, ensuring compliance with laws and societal values.
state of being accountable
accountable
answerable for actions, decisions and performance
Accountable AI System
An AI system designed with clear lines of responsibility, ensuring those involved can be held accountable for its performance and adherence to ethical standards
Accuracy
The degree to which an AI system's outputs or predictions are correct, reflecting how closely results match the true or expected values.
activation function
function applied to the weighted combination of all inputs to a neuron
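A minimal sketch of this definition in plain Python: the inputs are combined with weights and a bias, and a sigmoid activation function is applied to the result; the weights and inputs are invented.

```python
# A single neuron: weighted combination of inputs passed through an activation.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias   # weighted combination
    return sigmoid(z)                                        # activation function

print(neuron([0.5, 0.2], weights=[0.8, -0.4], bias=0.1))     # output between 0 and 1
```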
Adaptability
The ability of AI systems to learn from new data and adjust to changing environments, maintaining effectiveness and relevance over time.
Adversarial attacks
Attempts to deceive or manipulate AI models by introducing misleading inputs, highlighting vulnerabilities that need to be understood and mitigated.
Adversarial testing
Simulating malicious attacks on AI systems to uncover potential security weaknesses, strengthening defenses and ensuring robustness against real threats.
AI Accountability Policies
Policies and procedures established to identify, address, and remedy issues related to AI systems, ensuring responsible practices and ethical compliance
AI agent
Automated entity that senses and responds to its environment and takes actions to achieve its goals
AI Applications
Software programs utilizing artificial intelligence to perform specific tasks, playing a crucial role in implementing AI governance across various domains.
AI Auditing
The systematic examination of AI systems to verify adherence to ethical principles and legal requirements, safeguarding societal well-being and fairness.
AI component
functional element that constructs an AI system
AI Development Life Cycle
A structured, multi-stage process for developing and deploying AI systems, including planning, data collection, model building, deployment, and monitoring.
AI Innovation and Regulation Balance
Balancing the need for AI innovation with regulatory measures to protect public safety and interests, ensuring technological advancement aligns with ethics
AI Maturity Level Assessment
Regular evaluations of an organization’s AI governance processes to identify improvement areas and ensure alignment with best practices and regulations
AI Platforms
Cloud-based environments providing essential tools and services for creating, training and deploying AI models, crucial for AI development.
AI Regulation Challenges
Difficulties in regulating AI due to its complexity, rapid advancement, and global nature, making it challenging to develop effective, up-to-date laws.
AI Regulatory Opportunities
Opportunities to promote responsible AI development and boost innovation through effective regulation, fostering trust and ethical industry practices.
AI Regulatory Requirements
Legal and ethical obligations organizations must meet when developing and deploying AI systems, ensuring compliance with applicable laws and standards.
AI Risk Impact Assessment
Evaluating potential effects of AI-specific risks on an organization’s operations, reputation, and finances to inform mitigation strategies.
AI Risk Integration
Embedding AI risk management, including identification, assessment, and mitigation, into an organization’s overall operational risk management framework.
AI Risk Monitoring and Review
Continuous monitoring and reassessment of AI risks to ensure alignment with the organization’s evolving risk profile and external factors.
AI Risks
Potential harms or unintended consequences arising from AI systems, including ethical, security, privacy, and societal impacts that need to be identified and managed.
AI Safety and Security
Researching and developing methods to ensure AI systems are safe, reliable, and secure from adversarial attacks and security threats.
AI Specific Risks
Unique risks associated with AI systems, such as algorithmic bias, security vulnerabilities, and privacy concerns, requiring specialized risk management strategies.
AI System Architecture
The overall structural design of an AI system, including its components and interactions, crucial for functionality, scalability, and compliance.
AI Technology Stack
The comprehensive set of software and hardware components used to develop, train, deploy, and manage AI systems, including frameworks and infrastructure
Algorithm
A set of rules or step-by-step instructions designed to perform a specific task or solve a problem, fundamental in computer science and AI development.
Algorithmic Accountability
The responsibility of organizations to ensure their algorithms are transparent, fair, and answerable, covering the seven features of an accountability architecture: context, range, agent, forum, standards, process, and implications.
Algorithmic Fairness
The practice of designing algorithms to make impartial decisions, avoiding biases, incorporating awareness-based fairness and rationality-based fairness to prevent discrimination against any individual or group.
Algorithmic Impact Assessments
Evaluations conducted to understand the effects of algorithms on society, aiming to identify and mitigate potential biases, discrimination, or other harms
Anonymization in AI
The process of removing personally identifiable information from data sets so that individuals cannot be readily identified, enhancing privacy protections in AI systems.
API
application programming interface
application specific integrated circuit - ASIC
integrated circuit customized for a particular use
Appropriate Deployment Strategies
Selecting suitable methods for integrating AI systems into operational environments, considering factors like purpose, risk, and regulatory compliance to ensure effectiveness and safety.
Artificial Intelligence (AI)
The simulation of human intelligence processes by machines, especially computer systems, enabling them to perform tasks that typically require human intellect.
artificial intelligence - AI
discipline of research and development of mechanisms and applications of AI systems
artificial intelligence system
AI system
engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives
Augmented Reality (AR)
Technology that superimposes digital information onto the physical world, enhancing real-world experiences with computer-generated perceptual information, reflecting current trends and applications in augmented reality.
Assessing Remediability of Adverse Impacts
Evaluating how negative effects caused by AI systems can be corrected or mitigated, ensuring harms can be addressed effectively and promptly.
Auditing Standards and Frameworks
Established guidelines for systematically evaluating AI systems to ensure compliance with ethical criteria like safety, security, fairness, and accountability.
automatic
automation
automated
pertaining to a process or system that, under specified conditions, functions without human intervention
automatic summarization
task of shortening a portion of natural language content or text while retaining important semantic information
Automation Bias
The propensity for humans to favor suggestions from automated decision-making systems, potentially overlooking errors or biases inherent in those systems.
autonomy
autonomous
characteristic of a system that is capable of modifying its intended domain of use or goal without external intervention, control or oversight
Autonomy in AI
The capability of AI systems to operate independently without human intervention, which necessitates careful design to prevent unintended consequences.
availability
property of being accessible and usable on demand by an authorized entity
Bayesian network
probabilistic model that uses Bayesian inference for probability computations using a directed acyclic graph
Bias
Systematic deviation in AI outcomes that unfairly favors or disadvantages certain individuals or groups, often stemming from prejudiced data or biases inherent in the algorithm’s design.
Bias and Fairness
Addressing and correcting biases in AI models to ensure equitable treatment and decision-making across diverse user groups, promoting justice and fairness
Bug Bashing and Teaming
Testing methods where teams attempt to find and exploit vulnerabilities in AI systems to identify weaknesses and enhance security and reliability.
Business Case
A documented justification for undertaking an AI project, outlining benefits, costs, and risks to support informed decision-making and resource allocation
Business Needs
The specific problems or opportunities an AI system aims to address within an organization, aligning technological solutions with strategic objectives.
Canada’s Digital Charter Implementation Act 2023 (Bill C-27)
Proposed Canadian legislation introducing the Artificial Intelligence and Data Act (AIDA), aiming to regulate AI systems and protect personal information.
Central Inventory of AI Systems
A comprehensive record within an organization listing all AI systems in use, including details on purpose, data sources, outputs, and associated risks.
Challenger Models
Alternative AI models developed to test and challenge incumbent models, promoting innovation and preventing complacency by encouraging continuous improvement.
China’s AI Regulations
Updated regulations focusing on generative AI in China, requiring developers to register systems, implement misuse prevention measures, and ensure compliance with China’s stringent regulatory framework
Cloud Computing
The on-demand availability of computing resources over the internet, allowing for scalable and flexible access to storage, processing power, and applications.
cognitive computing
category of AI systems that enables people and machines to interact more naturally
Collaborative Legal Framework Development
Cooperative efforts among governments, industry, academia, and civil society to create legal frameworks addressing the complexities of AI governance
Communication and Transparency
The practice of openly sharing information about AI systems’ functionalities and limitations to build trust and enable informed decisions by users and stakeholders
Compute Infrastructure
The hardware resources, including servers and specialized processors like GPUs or TPUs, essential for training and deploying AI systems efficiently
computer vision
capability of a functional unit to acquire, process and interpret data representing images or video
connectionism
connectionist paradigm
connectionist model
connectionist approach
form of cognitive modelling that uses a network of interconnected units that generally are simple computational units
Connectionist AI
AI approaches inspired by neural networks in the human brain, where learning occurs through interconnected nodes processing data and recognizing patterns.
Consensus-Driven Planning and Design
Involving diverse stakeholders to collaboratively reach agreements in AI planning and design, ensuring decisions reflect shared goals and organizational values
Consent in Data Processing for AI
Obtaining explicit permission from individuals before processing their personal data in AI systems, especially sensitive data, to comply with privacy laws like GDPR.
Continuous Feedback
Regularly collecting input from stakeholders throughout AI development to refine the system, address concerns, and enhance performance and user satisfaction over time.
continuous learning
continual learning
lifelong learning
incremental training of an AI system that takes place on an ongoing basis during the operation phase of the AI system life cycle
Continuous Monitoring and Maintenance
Regularly checking and updating AI systems to adapt to new scenarios, fix issues, and ensure continued effectiveness, security, and compliance throughout their operational life
control
purposeful action on or in a process to meet specified objectives
controllability
controllable
property of an AI system that allows a human or another external agent to intervene in the system’s functioning
convolution
mathematical operation involving a sliding dot product or cross-correlation of the input data
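A small NumPy sketch of the sliding dot product described above; the signal and smoothing kernel are invented for illustration.

```python
# 1-D convolution as a sliding dot product of a kernel over the input.
import numpy as np

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.25, 0.5, 0.25])   # simple smoothing kernel

out = np.array([
    np.dot(signal[i:i + len(kernel)], kernel)   # dot product at each offset
    for i in range(len(signal) - len(kernel) + 1)
])
print(out)   # [2. 3. 4.]
```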
convolutional neural network - CNN
deep convolutional neural network - DCNN
feed forward neural network using convolution in at least one of its layers
Counterfactual Explanations (CFEs)
Methods that explain AI decisions by showing how minimal changes to input data could alter outcomes, helping users understand factors influencing the decision.
CPS
cyber-physical systems
CPU
central processing unit
CRISP-DM
cross-industry process model for data mining
Cross-Disciplinary Collaboration
Collaborative efforts involving experts from various fields to address AI governance challenges, integrating diverse knowledge for comprehensive solutions.
Cultural Norms
Aligning AI governance practices with local customs, values, and regulations to ensure AI systems are accepted and effective in different societal contexts.
data annotation
process of attaching a set of descriptive information to data without any change to that data
data augmentation
process of creating synthetic samples by modifying or utilizing the existing data
Data Curation
Selecting, cleaning, and managing data to ensure its quality, accuracy, and fairness for AI use, reducing biases and errors in AI systems
Data Governance
Policies and procedures for managing data’s availability, usability, integrity, and security in AI systems, ensuring compliance and ethical data handling
Data Lineage and Provenance
Tracking the origin, movement, and transformations of data used in AI to ensure transparency, reproducibility, and trust in data integrity.
data mining
computational process that extracts patterns by analysing quantitative data from different perspectives and dimensions, categorizing them, and summarizing potential relationships and impacts
Data Ownership and Intellectual Property
Legal rights concerning who owns data used in AI training and who owns intellectual property generated by AI systems.
Data Protection Impact Assessments (DPIAs)
Evaluations required under GDPR to assess high-risk data processing activities, especially when AI systems handle large amounts of personal data.
data quality checking
process in which data is examined for completeness, bias and other factors which affect its usefulness for an AI system
data sampling
process to select a subset of data samples intended to present patterns and trends similar to that of the larger dataset being analysed
Data Science
An interdisciplinary field combining statistics, computer science, and domain expertise to extract insights and knowledge from data.
Data Strategy
A comprehensive plan outlining how an organization collects, manages, and uses data, including privacy-enhancing technologies, to support AI initiatives.
dataset
collection of data with a shared format
Deactivation and Localization
Methods for disabling or limiting AI systems’ operations or restricting them to certain regions to mitigate risks and comply with regulations.
decision tree
model for which inference is encoded as paths from the root to a leaf node in a tree structure
declarative knowledge
knowledge represented by facts, rules and theorems�
deep learning
deep neural network learning
approach to creating rich hierarchical representations through the training of neural networks with many layers
Deployment Method
The approach used to integrate an AI system into production environments, considering scalability, security, and regulatory compliance
dialogue management
task of choosing the appropriate next move in a dialogue based on user input, the dialogue history and other contextual knowledge to meet a desired goal
DNN
deep neural network
DSP
digital signal processor
Ecosystem Risks
Potential negative impacts of AI on ecological systems, including resource depletion, environmental damage, and supply chain disruptions.
Edge Cases Testing
Evaluating AI models using extreme or atypical data inputs to ensure reliability and robustness in unusual scenarios
emotion recognition
task of computationally identifying and categorizing emotions expressed in a piece of text, speech, video or image or combination thereof
Empowerment
Enabling individuals and groups to actively participate in AI development and governance, fostering inclusivity and equity in the AI ecosystem
Encouraging Responsible Innovation
Providing feedback and support to AI developers to design systems that are safe, secure, fair, and accountable, promoting ethical advancement.
Enforcement Framework and Penalties
Legal structures for investigating violations and imposing penalties for non-compliance with AI regulations like the EU AI Act.
Ethical Considerations
Reflecting on moral principles in AI development to ensure fairness, transparency, accountability, and respect for human rights.
Ethical Guidance Frameworks
Guiding values in AI governance, including fairness, non-discrimination, transparency, and accountability, informing ethical AI development and deployment.
Ethical Principles
Principles and guidelines established to promote responsible and ethical AI development and use, such as fairness and accountability.
Ethics
The philosophical study of moral values and rules, applied in AI to ensure responsible practices that align with societal norms and human rights.
EU AI Act
European legislation adopting a risk-based framework to classify AI systems into risk levels, imposing regulatory requirements to ensure safety, transparency, and protection of fundamental rights.
EU Digital Services Act (DSA)
European law imposing obligations on online platforms, including those using AI, to manage content responsibly and protect user privacy and fundamental rights.
EU Product Liability Directive (PLD)
Legislation holding producers liable for defects in their products, including AI systems, updated to address AI-specific challenges in liability and consumer protection.
European Court of Human Rights (ECtHR)
International court ensuring compliance with the European Convention on Human Rights, interpreting rights in AI contexts like privacy and non-discrimination.
Executive Order 14091 (EO 14091)
U.S. directive requiring federal agencies to ensure AI systems are unbiased and consider impacts on equity, promoting racial equity and supporting underserved communities.
expert system
AI system that accumulates, combines and encapsulates knowledge provided by a human expert or experts in a specific domain to infer solutions to problems
Explainability
The ability of AI systems to provide understandable explanations of their operations and decisions, enhancing transparency and user trust.
explainability
property of an AI system to express important factors influencing the AI system results in a way that humans can understand
Explainability in AI
Making AI models interpretable to humans, ensuring clarity in decision-making processes to build trust and facilitate accountability.
Explainable AI System
An AI system designed to provide understandable explanations for its decisions and actions, enhancing transparency, trust, ethical use, and accountability among users and stakeholders.
exploding gradient
phenomenon of backpropagation training in a neural network where large error gradients accumulate and result in very large updates to the weights, making the model unstable
exploratory data analysis - EDA
initial examination of data to determine its salient characteristics and assess its quality
face recognition
automatic pattern recognition comparing stored images of human faces with the image of an actual face, indicating any matching, if it exists, and any data, if they exist, identifying the person to whom the face belongs
Fair Information Practices (FIPs)
Guidelines for collecting, using, and disclosing personal information, emphasizing transparency, individual choice, and access to protect privacy rights.
Fairness
Ensuring AI systems make impartial decisions, treating all individuals equally and avoiding biases or discrimination in their outcomes.
Feature Engineering
The process of selecting, transforming, and creating variables (features) from raw data to improve AI model performance and predictive accuracy
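An illustrative sketch in plain Python; the record fields and derived features are invented. Raw records are transformed into engineered features such as a duration, a ratio, and a binary flag.

```python
# Deriving new model-ready features from raw records.
from datetime import date

raw = [
    {"signup": date(2023, 1, 10), "last_seen": date(2023, 3, 1), "purchases": 4, "spend": 200.0},
    {"signup": date(2023, 2, 5), "last_seen": date(2023, 2, 20), "purchases": 1, "spend": 15.0},
]

features = [
    {
        "tenure_days": (r["last_seen"] - r["signup"]).days,       # derived duration
        "avg_order_value": r["spend"] / max(r["purchases"], 1),   # ratio feature
        "is_repeat_buyer": int(r["purchases"] > 1),               # binary flag
    }
    for r in raw
]
print(features)
```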
feed forward neural network
FFNN
neural network where information is fed from the input layer to the output layer in one direction only
FPGA
field-programmable gate array
GDPR and Automated Decision-Making
GDPR restrictions on fully automated decisions affecting individuals, requiring transparency, human oversight, and fairness in AI systems
GDPR Compliance for AI Systems
Ensuring AI systems adhere to GDPR requirements, particularly data protection, privacy rights, and individual control over personal data.
general AI - AGI
artificial general intelligence
type of AI system that addresses a broad range of tasks with a satisfactory level of performance
Generative AI
AI models capable of creating new content like text, images, or sound, based on patterns learned from existing data; for example, GPT-3 or DALL-E.
genetic algorithm - GA
algorithm which simulates natural selection by creating and evolving a population of individuals (solutions) for optimization problems
Global AI Regulation
Worldwide initiatives to establish regulatory frameworks for AI, ensuring ethical standards, safety, and preventing harm across different countries
Goals
Defined targets or outcomes that an AI project seeks to accomplish, ensuring clarity, purpose, and alignment with organizational strategies
Governance
The framework of policies, practices, and procedures for responsible oversight and management of AI systems throughout their lifecycle.
Governance Framework
Structured policies and procedures that define how AI systems are developed, deployed, and monitored to ensure ethical and legal compliance.
GPU
graphics processing unit
ground truth
value of the target variable for a particular item of labelled input data
Group Risks
Potential harms or adverse impacts of AI systems on specific groups, such as discrimination or reinforcing existing inequalities affecting subpopulations.
heteronomy
heteronomous
characteristic of a system operating under the constraint of external intervention, control or oversight
High-risk AI Systems
AI systems with significant potential to affect safety or fundamental rights, requiring strict regulatory compliance and oversight under laws like the EU AI Act.
HMM
hidden Markov model
Holistic Risk Management
A comprehensive approach that considers all potential risks, including those associated with AI, integrating them into the overall risk management strategy.
HUDERIA (Human Rights, Democracy, and the Rule of Law Impact Assessment)
A Council of Europe framework evaluating AI systems’ impact on human rights, democracy, and the rule of law, ensuring alignment with European values and legal standards.
Human-Centeredness in AI Frameworks
Designing AI systems with a focus on human values and needs, ensuring technology serves humanity and respects human rights and societal norms
Human Oversight
Ensuring AI systems are subject to human supervision, allowing for intervention or override to maintain ethical standards and prevent unintended consequences.
Human Oversight Requirements
Obligations for high-risk AI systems to include mechanisms allowing human intervention, ensuring control over AI decisions and actions
Human-Centric AI Governance
An approach to AI governance that prioritizes human welfare, emphasizing the importance of considering diverse viewpoints and societal dimensions, while ensuring transparency, fairness, and user control
Human-Centric AI System
An AI system designed to prioritize human needs and values, ensuring it benefits users and respects human rights, autonomy, and dignity
human-machine teaming
integration of human interaction with machine intelligence capabilities
hyperparameter
characteristic of a machine learning algorithm that affects its learning process; examples include the kernel of a support vector machine, the number of leaves or depth of a tree, the K for K-means clustering, and the maximum number of training iterations
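A quick sketch of the distinction using K-means (named in the definition), assuming scikit-learn; the points are invented. K is a hyperparameter chosen before training, while the cluster centres are parameters learned from the data.

```python
# Hyperparameter (K) vs. learned parameters (cluster centres) in K-means.
from sklearn.cluster import KMeans

X = [[1, 1], [1, 2], [8, 8], [9, 8]]
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)   # K = 2 is the hyperparameter
print(model.cluster_centers_)                                    # parameters learned from data
```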
IEEE 7000-2021
An Institute of Electrical and Electronics Engineers (IEEE) standard providing guidelines for addressing ethical concerns during system design, ensuring AI aligns with human values and ethical principles
image
graphical content intended to be presented visually
image recognition
image classification process that classifies object(s), pattern(s) or concept(s) in an image
Implications of AI Regulation
Potential impacts of AI laws on system development and deployment, including compliance costs, innovation effects, and market opportunities for compliant solutions.
imputation
procedure where missing data are replaced by estimated or modelled data
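A minimal mean-imputation sketch in plain Python; the values are invented. Each missing entry is replaced by the mean of the observed entries.

```python
# Replace missing values (None) with the mean of the observed values.
values = [4.0, None, 6.0, None, 5.0]

observed = [v for v in values if v is not None]
mean = sum(observed) / len(observed)                   # 5.0
imputed = [v if v is not None else mean for v in values]
print(imputed)                                         # [4.0, 5.0, 6.0, 5.0, 5.0]
```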
Inclusive AI
AI designed to be accessible and beneficial to all, regardless of background or circumstances, ensuring diversity and accessibility in development.
Inclusive and Equitable Development
Creating AI systems that benefit all society members, avoiding amplification of existing inequalities and ensuring fair resource distribution.
Inclusivity
Designing AI systems to respect and accommodate all individuals regardless of characteristics like race, gender, age, or ability, promoting equitable access and participation.
Individual Risks
Potential harms AI systems may pose to individuals, such as discrimination, privacy violations, or safety hazards affecting personal well-being and rights
Industry-Agnostic Approach
Developing AI governance frameworks applicable across different industries, ensuring consistent ethical standards regardless of the sector.
inference
reasoning by which conclusions are derived from known premises
information retrieval - IR
task of retrieving relevant documents or parts of documents from a dataset typically based on keyword or natural language queries
input data
data for which an AI system calculates a predicted output or inference
Intellectual Property Law
Legal regulations protecting creations of the mind, relevant to AI in terms of ownership of AI-generated content and data used in training models.
Intelligent Self-Management
Empowering teams to proactively assess and address AI risks, fostering a culture of responsibility and continuous improvement in AI practices.
internet of things - IoT
infrastructure of interconnected entities, people, systems and information resources together with services that process and react to information from the physical world and virtual world
Internet of Things (IoT)
A network of interconnected devices that collect and exchange data over the internet, enhancing efficiency and enabling new applications through AI integration, with a focus on security and data privacy.
Interoperability
The ability of AI systems to work seamlessly within existing frameworks, integrating AI risk management with overall organizational processes.
IoT device
entity of an IoT system that interacts and communicates with the physical world through sensing or actuating
IoT system
system providing functionalities of IoT
ISO 31000:2018 Risk Management
An international standard providing guidelines for risk management, applicable to AI, outlining processes for identifying, evaluating, and mitigating risks.
ISO/IEC Guide 51
A standard offering guidelines for incorporating safety aspects into standards, aiding in developing AI safety standards or integrating safety considerations.
IT
information technology
KDD
knowledge discovery in data
knowledge
abstracted information about objects, events, concepts or rules, their relationships and properties, organized for goal-oriented systematic use
Knowledge Resources and Training
Providing educational materials and training on responsible AI practices to employees, enhancing understanding and ethical use of AI systems.
label
target variable assigned to a sample
Lack of Control
Users’ perception of powerlessness over their personal data in AI systems, leading to feelings of alienation and mistrust towards technology.
Lack of Transparency
The challenge where AI decision-making processes are not fully understandable to users, causing distrust due to opaque algorithms.
Law-Agnostic Approach
Designing AI governance frameworks adaptable to various legal systems and requirements, ensuring compliance across different jurisdictions.
Liability and Responsibility
Addressing who is legally accountable for harm caused by AI, considering roles of developers, deployers, and users in the AI lifecycle.
life cycle
evolution of a system, product, service, project or other human-made entity, from conception through retirement
Limited-risk AI Systems
AI systems that pose minimal risk to users or society, often used in applications like chatbots or email filtering with low potential for harm
Logical-Mathematical Principles
Foundational rules and concepts in AI and ML, including algorithms, logic, and statistical methods underpinning model development.
long short-term memory - LSTM
type of recurrent neural network that processes sequential data with a satisfactory performance for both long and short span dependencies
machine learning - ML
process of optimizing model parameters through computational techniques, such that the model’s behaviour reflects the data or experience
machine learning algorithm
algorithm to determine parameters of a machine learning model from data according to given criteria
machine learning model
mathematical construct that generates an inference or prediction based on input data or information
Machine Learning (ML)
A subset of AI where systems learn from data to improve performance on a task over time without being explicitly programmed for that task.
machine translation - MT
task of automated translation of text or speech from one natural language to another using a computer system
Maintenance
The ongoing process of updating and optimizing AI systems to ensure they function effectively, securely, and remain compliant with current standards.
Minimal-risk AI Systems
AI systems presenting little to no risk, often used in applications like gaming or spam filtering, where potential harm to individuals or society is minimal.
Model
A mathematical representation or algorithm used in AI to predict outcomes or recognize patterns based on input data, forming the core of AI functionality.
model
physical, mathematical or otherwise logical representation of a system, entity, phenomenon, process or data
Model Building
The process of developing an AI model by selecting algorithms and training it on data to perform specific tasks, crucial for effective AI solutions.
Model Selection
Choosing the most suitable AI model or algorithm for a given task based on performance metrics and problem requirements, ensuring optimal results.
Model Testing and Validation
Evaluating an AI model’s performance using test data to ensure reliability, accuracy, and generalizability before deployment in real-world scenarios.
Model Training
The phase where an AI model learns from training data by adjusting parameters to recognize patterns and improve performance on specific tasks.
Model Versioning and Data Lineage
Tracking changes in AI models and their training data over time to ensure transparency, reproducibility, and accountability in AI development.
Monitoring
Ongoing supervision of AI systems to detect issues like bias, performance degradation, or security vulnerabilities, ensuring they operate as intended over time
Multi-modal Models
AI models capable of processing and integrating multiple data types, such as text, images, and audio, to perform complex tasks and improve performance.
Multiple Layers of Mitigation
Implementing various defense mechanisms in AI systems to prevent errors or failures, enhancing security and reliability through redundancy and safeguards.
named entity recognition - NER
task of recognizing and labelling the denotational names of entities and their categories for sequences of words in a stream of text or speech
narrow AI
type of AI system that is focused on defined tasks to address a specific problem
natural language
language that is or was in active use in a community of people and whose rules are deduced from usage
natural language generation - NLG
task of converting data carrying semantics into natural language
natural language processing - NLP
information processing based upon natural language understanding or natural language generation
natural language processing - NLP
discipline concerned with the way systems acquire, process and interpret natural language
Natural Language Processing (NLP)
A field of AI enabling computers to understand, interpret, and generate human language, facilitating interaction between humans and machines.
natural language understanding - NLU
natural language comprehension extraction of information, by a functional unit, from text or speech communicated to it in a natural language and the production of a description for both the given text or speech and what it represents
neural network - NN
neural net
artificial neural network
network of one or more layers of neurons connected by weighted links with adjustable weights, which takes input data and produces an output
neuron
primitive processing element which takes one or more input values and produces an output value by combining the input values and applying an activation function on the result
NIST AI Risk Management Framework (AI RMF)
A voluntary framework by the National Institute of Standards and Technology (NIST) providing adaptable guidelines across sectors for managing AI risks, covering governance, risk identification, assessment, mitigation, and monitoring.
Non-Discrimination Laws
Legislation prohibiting unfair treatment based on protected characteristics, applicable to AI systems to prevent biased outcomes in areas like employment or lending.
Notification Requirements
Legal obligations requiring organizations to inform authorities or users about certain AI system deployments, especially those considered high-risk.
NPU
neural network processing unit
OECD
Organisation for Economic Co-operation and Development
OECD AI Principles
International guidelines by the OECD promoting innovative and trustworthy AI that respects human rights, including transparency, accountability, and complementing existing standards in areas like privacy and digital security.
OECD Catalogue for Trustworthy AI
A resource offering tools and metrics to assess AI systems for trustworthiness, covering aspects like robustness, fairness, and transparency.
OECD Framework for AI Classification
A framework developed by the OECD to categorize AI systems based on characteristics and risks, part of broader efforts to promote trustworthy AI and inform policy development and regulation.
Ongoing Risk Tracking
Continuously documenting and monitoring potential risks associated with AI systems to address them proactively as they evolve over time.
optical character recognition - OCR
conversion of images of typed, printed or handwritten text into machine-encoded text
part-of-speech tagging
task of assigning a category (e.g. verb, noun, adjective) to a word based on its grammatical properties
Optionality and Redress
Providing users with choices regarding AI system interactions and mechanisms to challenge decisions or seek remedies for harms caused by AI.
Organizational Risks
Potential threats to a company from AI systems, including reputational damage, financial losses, and operational disruptions requiring proactive risk management.
Outcome-Focused Approach
Prioritizing desired results in AI governance by setting clear objectives and measuring performance against specific metrics to ensure goals are met.
parameter
model parameter
internal variable of a model that affects how it computes its outputs
performance
measurable result
Performance Monitoring
Regularly assessing an AI system’s outputs and efficiency to ensure it meets expected standards, identifying issues needing attention or adjustment for optimal performance.
Perpetual Learning
Enabling AI systems to continuously learn from new data and experiences, allowing adaptation and improved performance over time without retraining from scratch.
personally identifiable information - PII
personal data
any information that (a) can be used to establish a link between the information and the natural person to whom such information relates, or (b) is or can be directly or indirectly linked to a natural person
planning
computational processes that compose a workflow out of a set of actions, aiming at reaching a specified goal
Planning Phase
The initial stage in the AI development life cycle where goals, scope, and governance structures are defined to guide the project’s direction.
Policies and Regulations
Laws and guidelines governing AI development and use, established to mitigate risks and ensure ethical application of AI technologies across industries.
POS
part of speech
Post-hoc Testing
Evaluating an AI system’s performance after deployment to identify biases, errors, or unintended consequences, ensuring ongoing reliability and fairness
predictability
property of an AI system that enables reliable assumptions by stakeholders about the output
prediction
primary output of an AI system when provided with input data or information
Privacy Laws
Legal regulations protecting personal data privacy, such as GDPR, requiring AI systems to handle data responsibly and respect individual rights.
Privacy and Data Governance Integration
Incorporating AI systems into existing data protection and privacy frameworks to ensure consistent policies and compliance with regulations.
Privacy and Ethical Considerations
Addressing data protection and ethical issues in AI development to safeguard user privacy and prevent discriminatory or harmful applications.
Privacy-Enhanced AI Systems
AI systems that integrate privacy-preserving techniques and privacy-enhancing technologies to ensure personal data is protected from unauthorized access or misuse throughout their operation.
Privacy-Enhancing Technologies (PETs)
Tools and methods enabling data analysis while protecting individual privacy, such as encryption, anonymization, differential privacy, and homomorphic encryption, increasingly critical in AI governance.
Privacy-Preserving Techniques
Strategies applied in AI to protect sensitive data during processing, like federated learning and data minimization, ensuring compliance with privacy laws and regulations.
procedural knowledge
knowledge which explicitly indicates the steps to be taken in order to solve a problem or to reach a goal
Product Safety Laws
Regulations ensuring that products, including those incorporating AI, are safe for consumer use, holding manufacturers accountable for defects or harms caused.
production data
data acquired during the operation phase of an AI system for which a deployed AI system calculates a predicted output or inference
sample
atomic data element processed in quantities by a machine learning algorithm or an AI system
Prohibited AI Systems
AI applications banned under certain regulations, like the EU AI Act, due to unacceptable risks, such as social scoring or real-time biometric surveillance.
Pro-Innovation Mindset
Encouraging creative exploration in AI development while ensuring ethical considerations, balancing advancement with responsibility.
Promoting Transparency and Trust
Enhancing public confidence in AI systems by openly sharing information about their functioning, decisions, and safeguards against risks.
Public Education
Informing the public about AI technologies, their benefits, and potential risks to enable informed decision-making and foster societal understanding.
question answering
task of determining the most appropriate answer to a question provided in natural language
Racial Equity
Ensuring fair treatment and equal opportunities for all races within AI systems, addressing biases that could lead to discrimination.
Readiness Assessment
Evaluating an organization’s preparedness to implement AI solutions, identifying gaps in resources, skills, or processes before deployment.
recurrent neural network - RNN
neural network in which outputs from both the previous layer and the previous processing step are fed into the current layer
Reinforcement Learning
A type of machine learning where an agent learns to make decisions by performing actions and receiving feedback through rewards or penalties.
reinforcement learning - RL
learning of an optimal sequence of actions to maximize a reward through interaction with an environment
relationship extraction
relation extraction
task of identifying relationships among entities mentioned in a text
reliability
property of consistent intended behaviour and results
Repeatability Assessments
Testing AI systems multiple times with the same inputs to ensure consistent outputs, verifying reliability and stability over time.
Research Ethics
Ethical principles guiding AI research, ensuring studies are conducted responsibly, with integrity, and without causing harm to participants or society.
resilience
ability of a system to recover operational condition quickly following an incident
Responsible AI
Developing and deploying AI systems in a manner that is ethical, transparent, and accountable, minimizing harm and promoting fairness.
Responsible AI Development and Use
Creating and utilizing AI technologies while adhering to ethical standards, legal requirements, and societal values to ensure beneficial outcomes, aligned with the OECD’s AI principles.
Responsible and Ethical Development
Designing AI systems with consideration for potential risks and impacts, aligning with human rights and promoting societal good.
retraining
updating a trained model by training with different training data
Right to Meaningful Information about AI Logic
Legal entitlement, often under laws like GDPR, for individuals to receive understandable explanations about automated decisions affecting them.
risk
effect of uncertainty on objectives
Risk Assessment
Systematically evaluating potential risks associated with an AI system, considering likelihood and impact to inform mitigation strategies.
Risk Identification
The process of detecting and documenting potential hazards or vulnerabilities in AI systems that could lead to undesirable outcomes, providing a foundation for subsequent risk mitigation efforts.
Risk Management Framework
A structured approach for identifying, assessing, and mitigating risks associated with AI systems, integrating examples like the NIST AI Risk Management Framework into organizational processes.
Risk Management Report
Documentation detailing identified AI risks, mitigation strategies, and procedures, providing transparency and accountability in risk management practices.
Risk Mitigation
Implementing strategies and measures to reduce or eliminate identified risks in AI systems, ensuring they operate safely, comply with regulations, and align with ethical standards.
Risk-Centric Governance
An approach focusing on embedding risk management throughout the AI development lifecycle, prioritizing the identification and control of potential risks.
robot
automation system with actuators that performs intended tasks in the physical world, by means of sensing its environment and a software control system
Robotic Process Automation (RPA)
Technology using software robots to automate repetitive, rule-based tasks across various business processes, enhancing operational efficiency, reducing errors, and freeing up human workers for more complex activities.
Robotics
The design, construction, and operation of robots capable of performing tasks autonomously or semi-autonomously, often integrating AI for advanced functionality.
robotics
science and practice of designing, manufacturing and applying robots
Robustness
The ability of AI systems to perform reliably under diverse conditions, resisting errors or adversarial inputs to ensure consistent and safe operation.
robustness
ability of a system to maintain its level of performance under any circumstances
Robustness Requirements
Standards ensuring AI systems maintain performance under diverse conditions, including unexpected inputs or attacks, enhancing reliability and trustworthiness
Roles and Responsibilities
Clearly defined duties assigned to individuals or teams overseeing AI development, deployment, and monitoring, ensuring accountability and effective management.
Safety
Ensuring AI systems operate without causing harm to users or society, incorporating measures to prevent accidents or unintended consequences.
Safety Testing
Rigorous evaluation of AI systems to identify and mitigate potential safety risks before deployment, ensuring they meet safety standards and regulations.
Scope
The extent of data, resources, and expertise required for developing and deploying an AI system, defining project boundaries and objectives.
Secondary and Unintended Uses
Potential applications of AI systems beyond their original purpose, possibly leading to unforeseen risks or ethical concerns requiring mitigation.
Security
Protecting AI systems from unauthorized access, attacks, or manipulation, ensuring data integrity, and system reliability.
semantic computing
field of computing that aims to identify the meanings of computational content and user intentions and to express them in a machine-processable form
Semi-supervised Learning
A machine learning approach combining labeled and unlabeled data during training, enhancing learning efficiency when labeled data is limited.
semi-supervised machine learning
machine learning that makes use of both labelled and unlabelled data during training
Senior Leadership and Tech Team Buy-In
Securing commitment from top management and technical teams for AI initiatives, crucial for resource allocation and successful implementation.
sentiment analysis
task of computationally identifying and categorizing opinions expressed in a piece of text, speech or image, to determine a range of feeling such as from positive to negative
Singapore Model AI Governance Framework
Guidelines established by Singapore to promote responsible AI deployment, emphasizing accountability, transparency, and fairness, with the 2024 update focusing specifically on generative AI.
Societal Risks
Potential negative impacts of AI on society, such as undermining democratic processes, eroding public trust, or exacerbating social inequalities, necessitating careful governance.
Socio-Technical System
An approach that considers both social and technical aspects in AI development, acknowledging the interplay between technology and human factors.
soft computing
field of computing that is tolerant of and exploits imprecision, uncertainty and partial truth to make problem-solving more tractable and robust
speech recognition
speech-to-text
STT
conversion, by a functional unit, of a speech signal to a representation of the content of the speech
speech synthesis
text-to-speech
TTS
generation of artificial speech
stakeholder
any individual, group, or organization that can affect, be affected by or perceive itself to be affected by a decision or activity
Stakeholder Engagement
Involving all relevant parties including employees, customers, and regulators in the AI project lifecycle to gather input and address concerns.
Stakeholders
Individuals or groups affected by or involved in an AI system, such as users, developers, regulators, and impacted communities.
Standardizing AI Terminology
Developing a common set of AI terms and concepts to ensure clear communication and understanding within and across organizations.
Strong/Broad AI
Artificial intelligence with human-level cognitive abilities, capable of understanding, learning and performing any intellectual task that a human can do.
subsymbolic AI
AI based on techniques and models that use an implicit encoding of information, that can be derived from experience or raw data.
Supervised Learning
A machine learning method where models are trained on labeled data, learning to predict outputs from inputs by finding patterns in the data.
supervised machine learning
machine learning that makes use only of labelled data during training
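A minimal supervised learning sketch, assuming scikit-learn; the labelled training data, variable names, and expected outputs are illustrative only:

from sklearn.linear_model import LogisticRegression

X_train = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]  # inputs
y_train = [0, 0, 0, 1, 1, 1]                             # known labels

clf = LogisticRegression().fit(X_train, y_train)         # learn the input-to-label pattern
print(clf.predict([[2.5], [11.5]]))                      # expected: [0 1]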
support vector machine - SVM
machine learning algorithm that finds decision boundaries with maximal margins
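A hedged sketch of an SVM finding a maximal-margin boundary, assuming scikit-learn's SVC; the toy points are illustrative:

from sklearn.svm import SVC

X = [[0, 0], [1, 1], [4, 4], [5, 5]]
y = [0, 0, 1, 1]

svm = SVC(kernel="linear").fit(X, y)
print(svm.support_vectors_)              # the points that define the widest-margin boundary
print(svm.predict([[0.5, 0.5], [4.5, 4.5]]))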
Sustainability
Developing AI systems that operate without depleting resources or harming the environment, ensuring long-term viability and minimal ecological impact.
Symbolic AI
An AI approach based on explicit rules and symbols, using logical reasoning and knowledge representation to mimic human intelligence and problem-solving.
symbolic AI
AI based on techniques and models that manipulate symbols and structures according to explicitly defined rules to obtain inferences
task
action required to achieve a specific goal
Tech Megatrends
Major technological developments like cloud computing, mobile computing, social media, IoT, and AR/VR that significantly drive the evolution and adoption of AI.
Technology-Agnostic Approach
Developing AI governance practices adaptable to various technologies, ensuring policies remain relevant despite rapid changes in AI advancements, akin to a law-agnostic approach.
test data
evaluation data - data used to assess the performance of a final model
Testing Innovative AI in Sandbox Environments
Evaluating AI systems in controlled, isolated settings, known as regulatory sandboxes, to assess performance and identify risks before full-scale deployment, particularly for AI and robotics innovation.
Testing with Unseen Data
Assessing an AI model’s ability to generalize by evaluating its performance on new, previously unencountered data, ensuring robustness and reliability.
TEVV Process
A comprehensive process ensuring AI systems meet requirements and function correctly through systematic Testing, Evaluation, Verification, and Validation before deployment.
Third-Party AI Risk Management
Establishing policies to manage risks associated with AI systems or services provided by external vendors, ensuring compliance and security.
Third-Party Risks
Potential threats arising from external entities involved in the AI ecosystem, such as suppliers or partners, impacting security or compliance.
Threat Modeling
Identifying and analyzing potential threats to an AI system, assessing vulnerabilities to inform risk mitigation strategies and enhance security.
trained model
result of model training
training
model training, process to determine or to improve the parameters of a machine learning model based on a machine learning algorithm by using training data
training data
data used to train a machine learning model
Transformer Models
Deep learning models using attention mechanisms to process sequential data, revolutionizing natural language processing; examples include BERT and GPT-3.
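An illustrative sketch of the attention mechanism at the heart of transformer models: scaled dot-product attention, softmax(QK^T / sqrt(d)) V, written in NumPy with random toy tensors:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # weighted mix of the values

Q, K, V = (np.random.rand(3, 4) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)     # (3, 4)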
Transparency - System
The quality of AI systems being open and understandable, allowing stakeholders to see how decisions are made, including transparency requirements for generative AI and disclosure of AI-generated content.
Transparency - Organization
property of an organization that appropriate activities and decisions are communicated to relevant stakeholders in a comprehensive, accessible and understandable manner
Transparency Requirements for High-Risk Systems
Legal obligations for high-risk AI systems to disclose information about their operations, ensuring users can make informed decisions and facilitating accountability.
Transparent Governance
Implementing governance practices that openly communicate AI policies, procedures, and decision-making processes to stakeholders, fostering trust and ethical oversight.
trustworthiness
ability to meet stakeholder expectations in a verifiable way
Trustworthy AI System
An AI system that is reliable, fair, transparent, secure, and aligned with human values, earning users’ trust through ethical and responsible operation.
Underserved Communities
Groups historically lacking access to resources or opportunities, requiring consideration in AI development to avoid exacerbating inequalities.
Understanding AI System Failures
Analyzing reasons behind AI malfunctions or errors to improve system robustness, prevent recurrence, and enhance overall reliability.
Unfair and Deceptive Practices Laws
Regulations prohibiting businesses from engaging in misleading or unethical practices, applicable to AI systems to prevent consumer harm.
United States Executive Order on AI (EO 13859)
A U.S. directive promoting trustworthy AI development, emphasizing research, collaboration, and regulatory approaches, later complemented by EO 14110, which focuses on the safe and secure development of AI.
Unsupervised Learning
A machine learning method where models learn patterns from unlabeled data, identifying structures or relationships without predefined outputs, useful for clustering and anomaly detection.
Unsupervised Learning Models
AI models trained on unlabeled data to discover hidden patterns or intrinsic structures without predefined outputs, useful for tasks like clustering and anomaly detection.
unsupervised machine learning
machine learning that makes use only of unlabelled data during training
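A minimal unsupervised learning sketch, assuming scikit-learn's KMeans; the points and cluster count are illustrative:

from sklearn.cluster import KMeans

X = [[1, 1], [1.2, 0.9], [0.8, 1.1], [8, 8], [8.2, 7.9], [7.8, 8.1]]  # no labels

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster assignment discovered for each point
print(km.cluster_centers_)  # centers of the two discovered groups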
Validation
Assessing an AI model’s performance against real-world data to ensure it meets required specifications and can generalize beyond training data.
validation
confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled
validation data
development data - data used to compare the performance of different candidate models
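An illustrative sketch of separating training, validation (development), and test (evaluation) data, assuming scikit-learn's train_test_split; the split proportions are arbitrary:

from sklearn.model_selection import train_test_split

X, y = list(range(100)), [i % 2 for i in range(100)]

# first carve off 40% of the data, then split that half-and-half into validation and test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 train / 20 validation / 20 test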
verification
confirmation, through the provision of objective evidence, that specified requirements have been fulfilled
Virtual Reality (VR)
Technology creating immersive computer-generated environments, allowing users to experience and interact with simulated worlds through sensory stimuli.
Weak/Narrow AI
AI designed to perform specific tasks without consciousness or self-awareness, lacking general intelligence; examples include voice assistants and recommendation systems.
Corpus
A corpus is a large collection of texts or other data. An AI system learns from the corpus to discern patterns and, based on those patterns, the algorithm makes predictions.
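An illustrative sketch of turning a tiny corpus into word-count features that a downstream model could learn patterns from, assuming scikit-learn's CountVectorizer; the corpus itself is made up:

from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "order pizza friday evening",
    "order sushi friday evening",
    "pay electricity bill monday",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)         # document-term count matrix
print(vectorizer.get_feature_names_out())    # vocabulary learned from the corpus
print(X.toarray())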
Inference
Inference is a machine learning (ML) model’s output, such as a decision or prediction.
For example, if you open a food delivery app at roughly the same time every week, perhaps Friday in the late afternoon or early evening, the model learns over time that you are interested in having food delivered on Friday evenings.
Based on that inference, you may be targeted with specific advertising or deals from your favorite restaurants, or from the delivery app itself.
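A hedged sketch of inference in the spirit of the delivery-app example above: a small classifier trained on (day, hour) features produces a prediction for a new input; the data, features, and model choice are illustrative only:

from sklearn.tree import DecisionTreeClassifier

# features: (day_of_week 0-6, hour of day); label: 1 = placed a delivery order
X_train = [[4, 18], [4, 19], [4, 17], [1, 12], [2, 9], [6, 20]]
y_train = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[4, 18]]))  # the inference: predicted behaviour for Friday at 6 pm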
Labeled data
Labeled data is data annotated with labels; these labels can also be called tags or classes.
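A tiny illustration of labeled data as input/label pairs in Python; the texts and labels are made up:

labeled_data = [
    ("the delivery was fast and the food was great", "positive"),
    ("cold food and a rude driver", "negative"),
]
texts = [text for text, label in labeled_data]    # the inputs
labels = [label for text, label in labeled_data]  # the labels (tags/classes)
print(labels)  # ['positive', 'negative']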
The four major machine learning training models
The four major machine learning training models are supervised, unsupervised, semi-supervised and reinforcement.