Module 4 Flashcards

Use AI Responsibly

1
Q

What is responsible AI?

A

It is the principle of developing and using AI ethically with the intent of benefiting people and society, while avoiding harm.

2
Q

What can affect the output from an AI tool?

A

The output from an AI tool can be affected by both systemic bias and data bias.

3
Q

What is systemic bias?

A

A tendency upheld by institutions that favors or disadvantages certain outcomes or groups, existing within societal systems like healthcare, law, education, and politics.

4
Q

What is data bias?

A

Data bias occurs when systematic errors or prejudices in the training data lead to unfair or inaccurate information, resulting in biased outputs.

5
Q

What is allocative harm?

A

An AI system’s use or behavior that withholds opportunities, resources, or information, affecting a person’s well-being.

Example: Misidentification in tenant screening leading to lost opportunities and financial loss.

6
Q

What is quality-of-service harm?

A

An AI system’s performance that is significantly worse for certain groups of people based on their identity.

Example: Speech recognition technology failing for people with disabilities.

7
Q

What is representational harm?

A

An AI system’s reinforcement of the subordination of social groups based on their identities.

Example: Gender-biased translations in language translation apps.

8
Q

What is social system harm?

A

Macro-level societal effects that amplify existing disparities or cause physical harm due to AI development or use.

Example: Deepfakes influencing elections and public opinion.

9
Q

What is interpersonal harm?

A

The use of technology to create a disadvantage for certain people, negatively affecting their relationships or sense of self.

Example: Sharing private information that leads to surveillance or loss of agency.

10
Q

What is a deepfake?

A

AI-generated fake photos or videos of real people saying or doing things they didn’t do.

11
Q

What is privacy in the context of AI?

A

The right for a user to control how their personal information and data are collected, stored, and used by AI systems.

12
Q

What is security in the context of AI?

A

The practice of safeguarding personal information and private data from unauthorized access, breaches, and misuse.

13
Q

What are 3 measures needed to protect privacy and security?

A
  1. Be aware of terms of use and privacy policies
  2. Avoid inputting personal or confidential information
  3. Stay updated on the latest tools and security strategies
14
Q

What is drift in AI models?

A

The decline in an AI model’s prediction accuracy caused by real-world changes over time that are not reflected in its training data; a common contributing factor is the model’s knowledge cutoff.

15
Q

What is knowledge cutoff in AI?

A

The point in time up to which a model was trained; the model lacks knowledge of events or information after that date.

16
Q

How can drift affect AI model reliability?

A

Drift can make an AI model less reliable when new data introduces biases, user behavior changes, or major events shift the world away from what the model learned during training.

17
Q

What is likely to increase bias in AI systems?

A

Training or developing models with incomplete, outdated, or inaccurate data.