Module 2 Flashcards
Describe harms in the context of AI
Harms caused by the use of an AI system to a person’s civil liberties, rights, physical or psychological safety, or economic opportunity
What should we look at when trying to identify bias in a system?
Examine how the model was trained, along with the inputs it receives and the outputs it produces
List 5 different ways bias can happen
- Implicit bias
- Sampling bias
- Temporal bias
- Overfitting to training data
- Edge cases and outliers
Describe the result of implicit bias in AI
Discrimination or prejudice toward a particular group or individual (often unconscious)
Describe the result of sampling bias in AI
Data gets skewed toward a subset of the group, so it may favour a subset
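As a small illustration, here is a hypothetical sketch (invented population and selection rates, not from the source) of how a biased selection process over-represents one subgroup in a training sample:

```python
import random

random.seed(0)

# Hypothetical population: 50% group A, 50% group B.
population = ["A"] * 500 + ["B"] * 500

# Biased sampling: group A members are far more likely to be selected,
# so the sample no longer reflects the population it was drawn from.
sample = [p for p in population if random.random() < (0.9 if p == "A" else 0.3)]

share_a = sample.count("A") / len(sample)
print(f"Share of group A in sample: {share_a:.2f}")  # well above the true 0.50
```

A model trained on this sample would see group A far more often than it occurs in reality, so it may favour that subgroup.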
Describe the result of temporal bias in AI
A time-based bias: a model trained on current data may work well now, but its performance can degrade as conditions change over time
Describe the result of overfitting to training data in AI
The model works for the training data but doesn’t work for any new data that comes in
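A minimal, exaggerated sketch of this idea (a hypothetical "model" that simply memorizes its training pairs, not an example from the source): it scores perfectly on data it has seen but fails on anything new.

```python
# Training data follows the underlying rule y = 2x.
train = {0: 0, 1: 2, 2: 4, 3: 6}

def memorizing_model(x):
    # Returns the memorized answer, or a wrong default for unseen inputs.
    return train.get(x, 0)

train_correct = all(memorizing_model(x) == y for x, y in train.items())
new_correct = memorizing_model(10) == 20  # unseen input: true answer is 20
print(train_correct, new_correct)  # True False
```

Real overfitting is subtler than outright memorization, but the failure mode is the same: excellent performance on training data, poor performance on new data.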
Describe the result of edge cases and outliers in AI
The model is unpredictable because of data that are outside the boundaries of the training dataset
Provide 2 examples of edge cases
- Errors: data that is incorrect, duplicated, or not needed
- Noise: data that negatively impacts the model’s learning
What are outliers?
Data points outside the normal distribution of the data, which can affect how the model operates and how effective it is
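One common way to flag such points (a sketch with made-up numbers, using a simple standard-deviation rule rather than any method named in the source):

```python
import statistics

# Hypothetical dataset with one extreme value.
data = [10, 11, 9, 10, 12, 11, 10, 95]

mean = statistics.mean(data)
stdev = statistics.stdev(data)

# Flag points more than 2 standard deviations from the mean.
outliers = [x for x in data if abs(x - mean) > 2 * stdev]
print(outliers)  # [95]
```

Whether to drop, cap, or keep such points is a judgment call, but leaving them unexamined can skew what the model learns.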
Describe the issue Amazon discovered in one of their AI systems
- They wanted to implement an AI system to help them in recruiting and hiring
- In training, they used data from resumes that came predominantly from men, not women
- Resumes from women were systematically judged as less qualified
- They decided they were not able to get the model to work the way they wanted, and they abandoned the project
Describe how bias can happen in facial recognition systems
- The ability to match 2 images of the same person will vary from one demographic group to another
- As a result, facial recognition is not a reliable system to use across demographic groups
List 6 potential areas where discrimination can impact an individual
- Employment and hiring
- Insurance and social benefits
- Housing (tenant selection or qualifying for a mortgage)
- Education (selection)
- Credit (lending)
- Differential pricing of goods and services
Provide an example of acceptable bias
Lending less money to someone based on their income
What have studies found in relation to discrimination of women in facial recognition systems?
Women are much harder for these systems to recognize accurately than men
Provide an example of a facial recognition that caused issues in law enforcement
The London police face recognition system once had an 81% inaccuracy rate
Describe deidentification
Removing identifiers from data (name, address, SIN…)
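A hypothetical sketch of this step (the field names `name`, `address`, and `sin` are assumptions for illustration): direct identifiers are stripped from a record before the data is used.

```python
# Assumed direct-identifier field names for this illustration.
IDENTIFIERS = {"name", "address", "sin"}

def deidentify(record):
    # Drop any field whose key is a known direct identifier.
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

record = {"name": "Jane Doe", "address": "123 Main St", "sin": "000-000-000",
          "age": 34, "postal_prefix": "M5V"}
cleaned = deidentify(record)
print(cleaned)  # identifiers removed; quasi-identifiers like age remain
```

Note that quasi-identifiers (age, postal prefix) survive this step, which is exactly what makes reidentification possible later.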
What are the risks of using deidentified data in an AI system?
- Deidentification may be incomplete, particularly if the dataset is combined with other data
- AI systems often use multiple datasets, making it easy to recombine data and reidentify individuals
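The reidentification risk can be sketched with a toy linkage attack (all records and field names here are invented for illustration): two "deidentified" datasets are joined on shared quasi-identifiers, re-attaching a name to sensitive data.

```python
# Hypothetical "deidentified" health records (no names).
health_data = [{"age": 34, "postal_prefix": "M5V", "diagnosis": "asthma"}]
# Hypothetical public records that do contain names.
voter_data = [{"age": 34, "postal_prefix": "M5V", "name": "Jane Doe"}]

# Linking on quasi-identifiers (age + postal prefix) reidentifies the person.
reidentified = [
    {**h, "name": v["name"]}
    for h in health_data
    for v in voter_data
    if (h["age"], h["postal_prefix"]) == (v["age"], v["postal_prefix"])
]
print(reidentified)
```

With only a few quasi-identifiers, many individuals are unique in a dataset, which is why combining datasets is the central reidentification risk.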
What are the risks of appropriating personal data for model training?
- Data is often sourced from social media or large datasets with data about individuals
- Individuals may not have consented to this non-intended use
What is AI inference?
The process by which a trained AI model makes predictions or decisions from input data
What are the risks of AI inference?
If data is misattributed or misidentified, you may be using someone else’s data to make a decision about an individual
How can you correct a lack of transparency in AI systems?
Provide notices so individuals know they are interacting with an AI system
How can inaccurate data in AI models cause privacy issues?
- AI systems are only as good as the data used to train and operate them
- Models may become more inaccurate due to drift over time
List 3 ways AI can affect employment
- Job loss due to AI doing a job that was previously done by a human
- Job loss through AI-driven discriminatory hiring practices
- Job opportunities may fail to reach key demographics due to AI-driven tools that have bias built-in
What are group harms?
Harm to a group such as discrimination against a population subgroup
In what areas can group harms occur?
- Mass surveillance – potential harm, especially for marginalized groups
- Harms to freedom of assembly and protest due to tracking and profiling
Why are group harms so dangerous?
Potential of deepening racial and socio-economic inequities and increasing mistrust amongst groups
What is a deep fake?
Audio, video or images altered to portray a different reality