PHIL11-13 Flashcards

1
Q

What do AI and DS do with information?

A

Collect, store, disseminate and process

2
Q

What can be a consequence of surveillance?

A

It might have a moderating effect on behavior

3
Q

What is privacy?

A

Questions about how to govern the flow of information: who may have it, and what may they do with it?

4
Q

What is informational privacy?

A

An individual’s ability to control and limit the extent to which, and the way in which, their information is collected, stored, disseminated, and processed.

Threatened when individuals have inadequate knowledge of, or insufficient influence over, the amount or nature of that information

5
Q

What is decisional privacy?

A

An individual’s ability to make decisions on their own, without interference or surveillance.

Threatened when others are able to influence their decisions, or simply take inappropriate levels of interest in those decisions

6
Q

What is local privacy?

A

The ability to be alone and to have a space where we can be “ourselves”

Threatened by a lack of control over one’s own environment, or by a lack of anonymity

7
Q

What types of privacy are there?

A

Decisional privacy
Local privacy
Informational privacy

8
Q

Why is privacy important?

A

Instrumental importance: it helps us achieve other values like autonomy and security

Intrinsic importance: it is a value in itself, like physical security

9
Q

Threats to privacy

A

Data leaks, tracking, and personalization

10
Q

What is tracking and why is it a threat?

A

Systematically collecting information about a user.

Informational threat: the collection of the information itself. Decisional threat: users might behave differently if they suspect tracking. Threat to local privacy: tracking follows users through digital space, so they are never “truly” alone and free to be themselves.

11
Q

What is personalization and why is it a threat?

A

The practice of adapting a system or interface to a user’s choices, interests, and features.

Personalization requires detailed knowledge of the user, and acquiring this knowledge threatens informational privacy. It is used to influence actions, which threatens decisional privacy. And if the environment becomes too personalized, users are prevented from experiencing anything new and foreign, which limits the extent to which they can be themselves: a threat to local privacy.

12
Q

Problem with privacy threats

A

A privacy threat might have good reasons behind it and good outcomes for the individual and for others. So the question is: how much privacy should we be willing to give up?

This is a value conflict.

13
Q

What is notice & consent

A

One way to resolve value conflicts is to ask individuals to resolve them themselves. This empowers individuals to make informed choices about their information.

Notice: individuals are given an adequate explanation of which and how much information would be collected, stored, and processed

Consent: individuals must approve these practices as described

14
Q

Problems with notice

A

Privacy policies are often too long and complex, and they are frequently changed

15
Q

Problems with consent

A

“Take it or leave it”: opting out is not the same as a free choice. And users tend to go for minimal effort, agreeing to a policy they have not read.

16
Q

What is the transparency paradox?

A

If a policy is simple enough to be understood by a layperson, it will not truthfully represent the underlying information practices.

17
Q

What is a contextual approach?

A

It is a mistake to assume that the problem of ensuring privacy in AI and DS is completely new and different.

Norms and regulations already exist to ensure privacy within many different contexts (patient rights, freedom of information, research)

18
Q

What does Nissenbaum think about notice and consent?

A

Norms and regulations that are designed to ensure privacy in specific contexts should continue to do so even if technologies are new.

If necessary, these norms and regulations should be adapted to include new types of information, but by default this information should be considered just as worthy of protection as other information.

19
Q

Questions arising from the contextual approach

A

Have AI and DS methods brought about any fundamentally new contexts for which no privacy-protecting norms and regulations exist yet?

20
Q

What happens if such new contexts exist?

A

Then “the theory of contextual integrity directs us beyond existing norms to underlying standards, derived from general moral and political considerations.”

Consequentialist evaluation: the costs of privacy violations must be balanced against the gains from those violations (security, efficiency)

Deontological analysis: the duty we have to respect someone’s privacy in a context must be balanced against other duties, e.g., the duty to protect physical security

21
Q

What are the advantages and disadvantages of the contextual approach?

A

Advantage: it does not overburden individuals and relies on existing norms

Disadvantage: if technology brings new contexts, we face the task of developing new norms, operationalizing and measuring privacy, or balancing duties

22
Q

What is the problem of transparency?

A

A lack of relevant knowledge about the entire AI ecosystem and the context in which a particular model was developed.
-> Requires a range of different solutions

23
Q

What is the problem of explainability?

A

A lack of knowledge about how and why the AI system itself works.
-> Often the target of technical solutions

24
Q

Why do we need transparency? (Practical reasons)

A

It is a means to an end: it increases trust (the likelihood that the technology will be accepted and trusted), facilitates accountability (who is to blame when something goes wrong), and benefits development (the ability to fix and improve the technology)

25
Q

What is the threat of opacity?

A

Opacity threatens control and oversight over these models. It becomes unclear whether AI systems follow/protect values like privacy, and unclear whether a system was used for a given decision.

-> In the end, AI systems should benefit us, but this is threatened by opacity

26
Q

Stakeholders in the DS ecosystem can be distinguished by different things; what are these?

A

Prior knowledge and abilities (the sources of opacity), and values and roles (the effects of opacity)

27
Q

How does opacity arise?

A

Technical illiteracy (affects data subjects, decision subjects, operators, and executors): they may not have the educational background to understand the way a decision is made.

Intentional secrecy (affects data subjects and decision subjects): they depend on knowledge provided by operators, executors, and creators, but these may restrict access to protect trade secrets and to avoid gaming of the system.

System complexity (affects all stakeholders): for instance, models produced by ML development.

28
Q

What are effects of opacity on data subjects?

A

Mistrusting the technology: loan applicants want to know how the system works in order to understand how their data will be used, stored, and processed; otherwise they do not want the system to use their data.

29
Q

What are the effects of opacity on decision subjects?

A

They want decisions to be justified and actionable. They need to know why a decision has been reached and how to act differently to obtain a different decision in the future. Opacity restricts autonomy (should I earn more money? Get married?).

30
Q

What are the effects of opacity on creators?

A

They want to develop, maintain, and repair the system. They need to know how it works to increase performance, identify bugs, and debug.

Opacity obstructs innovation and design

31
Q

Effects of opacity on operator/executor

A

They want to make justified, data-driven decisions. They need to know the reasons why an output has been generated, so they can identify flaws and make adjustments before making decisions.

Opacity prevents effective decision making

32
Q

Effects of opacity on examiners

A

They want to ensure that decisions are justified. Opacity prevents regulatory oversight.

33
Q

How to achieve transparency

A

If due to system complexity: explanations based on mathematical or computational analysis
If due to digital illiteracy: education
If due to intentional secrecy: regulation to mandate insights and explanations, perhaps combined with technical solutions that expose inner workings
Explainable AI: use inherently interpretable systems (e.g., a decision tree), or apply post-hoc analysis to explain behavior

34
Q

Concerns with gaining transparency

A

Is there an accuracy/interpretability trade-off?

How interpretable are they really? Does it help to use a decision tree if its nodes can’t be mapped onto meaningful concepts?

35
Q

What is post hoc analysis?

A

Deploys mathematical, statistical, and computational tools to extract meaningful information about an opaque system and answer questions about why it does what it does or how it works. It might tell creators, executors, or examiners that a system is focusing on certain features rather than others, or give decision subjects a story about why a certain decision was reached.
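
For instance, one simple post-hoc tool is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. A minimal sketch, assuming a trained classifier with a predict method and a NumPy feature matrix; the function and variable names are illustrative, not from the course:

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Post-hoc probe: shuffle one feature at a time and measure the
    drop in accuracy. A large drop suggests the model is focusing on
    that feature rather than others."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model.predict(X) == y)   # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])           # destroy feature j's information
            drops.append(baseline - np.mean(model.predict(X_perm) == y))
        importances[j] = np.mean(drops)         # average drop over repeats
    return importances                          # one score per feature
```

A large score for, say, a home-address column could warn an examiner that the system is leaning on a proxy attribute (compare the card on criteria that should not be considered).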

36
Q

Challenges for post hoc analyses

A

How should accuracy be balanced with interpretability?
How robust are these methods? Do they yield the same explanations every time?
Can we use these methods to indicate not only surface features but also hidden features, such as causally relevant variables?

37
Q

Why does bias arise and what types of data bias are there?

A

The data itself might be biased.

Historical bias (e.g., gender imbalance)
Representation bias (e.g., WEIRD populations: Western, Educated, Industrialized, Rich, and Democratic)
Measurement bias (not everyone is measured equally)

38
Q

What are types of algorithmic bias?

A

Biased objectives: the learning algorithm optimizes a biased objective function
Confirmation bias: encoding a biased algorithm because it confirms the developer’s own biases

39
Q

What is fairness?

A

No matter the reasons, biased decision making on the basis of immutable and irrelevant characteristics is considered unfair. “Fairness” itself is difficult to define:

Consequentialists: good actions are those that have good consequences
Deontologists: good actions are those that accord with the relevant norms (but which norms are those?)

40
Q

Fairness from a consequentialist perspective

A

Equal impact: psychological, economic, and other empirical investigations into a decision’s long-term effects for different members of the population

41
Q

Measuring equality

A

Statistical parity: all groups have the same overall positive/negative classification rates
Accuracy equity: accuracy is the same for all groups
No disparate mistreatment: false positive rates are the same in all groups
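
These three measures can be checked directly from a model’s predictions. A minimal sketch, assuming binary (0/1) labels and predictions and an array of group labels; all names here are illustrative assumptions:

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Print the three equality measures for each group.
    y_true, y_pred: 0/1 NumPy arrays; group: array of group labels."""
    for g in np.unique(group):
        m = group == g
        pos_rate = y_pred[m].mean()                  # statistical parity
        accuracy = (y_pred[m] == y_true[m]).mean()   # accuracy equity
        fpr = y_pred[m & (y_true == 0)].mean()       # FPR among true negatives
        print(f"group {g}: positive rate {pos_rate:.2f}, "
              f"accuracy {accuracy:.2f}, FPR {fpr:.2f}")
```

Equality holds on a given measure when the printed values match across groups; the measures can conflict with one another, so they generally cannot all be satisfied at once.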

42
Q

Fairness from a Deontological perspective

A

Equal treatment: looks at the intentions behind, and the criteria for, a decision. A decision is unfair when reference is made to protected-class features such as gender.

Determining whether a decision is fair requires considering the process, or the criteria, by which decisions are made. Which features does the algorithm consider? Was a code of conduct applied and followed?

43
Q

Criteria which should not be considered in a decision

A

Immutable characteristics: gender, race, sexual orientation
Proxy attributes: reading habits, friend networks, home addresses
AI-defined attributes: operating system, click behavior, or response time

44
Q

The problem of determining the process of decision making

A

It is a cooperative process between people and algorithms, which creates a responsibility problem. And opacity means we may not know why a decision was made.

45
Q

Promoting fairness in AI

A

Pre-processing strategies: modify the dataset. Include more (diverse) examples, review the way data is labeled, and identify, reweight, or exclude problematic input elements (see the sketch below).

In-processing strategies: change the learning algorithm, replace it with a bias-mitigating alternative, or penalize illegitimate bias.

Post-processing strategies: audit for fairness; use the output with care, under oversight, or not at all.
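
As a concrete illustration of the pre-processing “reweight” step above, one known scheme (reweighing, in the spirit of Kamiran and Calders) gives each (group, label) combination the weight w(g, y) = P(g)·P(y) / P(g, y), so that group and label look statistically independent in the weighted training data. A minimal sketch under that assumption; the names are illustrative:

```python
import numpy as np

def reweight(group, label):
    """Pre-processing sketch: per-example weights that make group
    membership and label statistically independent,
    w(g, y) = P(g) * P(y) / P(g, y)."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            m = (group == g) & (label == y)
            if m.any():
                p_joint = m.mean()                              # observed P(g, y)
                p_indep = (group == g).mean() * (label == y).mean()
                weights[m] = p_indep / p_joint                  # upweight rare combos
    return weights  # pass to the learner as per-example sample weights
```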

46
Q

Remaining challenges of bias

A

Bias has deep social and historical roots. Raw data can be difficult to interpret, so it may be too hard to identify or mitigate data bias. And can algorithmic bias be completely avoided? Programmers will always be involved at some point.