Week 5: Deepfakes and the Challenge to Truth Flashcards

1
Q

What are deepfakes?

A

Machine learning techniques that can be used to produce synthetic but realistic-looking and/or realistic-sounding video, audio, or image files.

2
Q

(De Ruiter) What is the main argument of her article?

A

Deepfakes are morally suspect, but not automatically morally wrong.
Deepfake technology, used to create video or audio footage, is morally suspect because it allows users to portray people as realistically saying or doing things they did not necessarily say or do.

She argues that the moral evaluation of specific deepfakes depends on whether the represented person(s) would object to the way in which they are represented, on whether the deepfake deceives viewers, and on the intent with which the deepfake was created.

3
Q

(De Ruiter) When is a deepfake morally wrong?

A

The key factor that renders deepfakes morally wrong is the use of digital data representing the image and/or voice of persons to portray them in ways in which they would be unwilling to be portrayed.

4
Q

(De Ruiter) Why can this link to deception be perceived as rendering this technology intrinsically morally problematic? And is it always deceptive?

A

“Deception violates norms of truthfulness and risks undermining people’s autonomy and ability to pursue actions in line with their own will.”

Yet, deepfakes do not always bring about deception. For example, we would not say the deepfake for the Malaria Must Die initiative is effectively deceptive.

5
Q

(De Ruiter) What is the close link of deepfakes with deception?

A

The aim is to produce artificial footage that seems convincingly real.

6
Q

(De Ruiter) What are the two contexts in which deepfake technology is particularly relevant?

A

when people are unable to appear in footage as saying or doing certain things;

when they are unwilling to appear in footage as saying or doing certain things.

7
Q

(De Ruiter) When can deepfakes be harmful or beneficial?

A

Deepfake technology is harmful when it is used to represent persons in ways in which they do not wish to be portrayed, but can be beneficial when it helps people present themselves in ways they would like to portray themselves but are unable to otherwise.

8
Q

(De Ruiter) What is the connection with agency and vulnerability?

A

This entails that deepfake technology can be used to enhance the agency of people and decrease their vulnerability (by giving greater means for self-representation), but also render them more vulnerable and instill a sense of powerlessness (because people can misrepresent us without us being able to do much about it).

9
Q

(De Ruiter) Why do non-consensual deepfakes strike at the heart of our sense of self?

A

Because we experience representations of our body as representations of ourselves.

10
Q

(De Ruiter) What is violated in non-consensual deepfakes? And what is the impact on agency and vulnerability?

A

people’s right to digital self-representation

This right entails that others may not manipulate digital data that represent people’s image and voice, as markers of the self, in hyper-realistic footage that presents them in ways to which they would reasonably object.

This right could enhance the agency of people by granting them a say in how they may be virtually represented and reduce their vulnerability by limiting their susceptibility to harmful representations.

11
Q

(Rini) What does she argue? What is she concerned about?

A

that we should consider how deepfakes may undermine norms regulating testimony.
She is concerned that with the advent of deepfakes, recordings may lose their function as an epistemic backstop.

12
Q

(Rini) What does she mean by an epistemic backstop?

A

Epistemology = theory of knowledge (What can we know? How can we know it?)

Backstop = something at the back serving as a stop or support

Therefore, an epistemic backstop is something we can depend on to support our (trust in) (shared) knowledge.

13
Q

(Rini) Why is this epistemic backstop relevant in the public domain?

A

Because we currently take testimony to be reliable when recordings can back it up.
Recordings are likely to lose this backstop function if deepfakes become widely distributed and people become aware that they cannot automatically trust recordings that seem convincingly real.

When people present recordings as evidence, those recordings may themselves be deepfakes. This undermines general trust in recordings.

Also: People can deny the veracity of real recordings, claiming they are deepfakes.

If machine-learning techniques reach the point that real and artificial material can no longer be distinguished, we enter into ‘epistemic chaos’.

14
Q

(Rini) What implications does Rini’s argument have for people’s vulnerability and agency?

A

Consider what this means for court cases, where the veracity of recordings as evidence can less easily be assumed, or for epistemic injustice against members of marginalised groups, for whom the rise of smartphones allowed easy access to recordings that could provide evidence of mistreatment.

15
Q

(Effron and Helgason) What do they highlight?

A

They highlight the distinction between believing and condoning misinformation: even if people do not believe misinformation, they may still support the underlying message.

16
Q

(Effron and Helgason) What do they argue?

A

that three psychological factors encourage people to condone misinformation: partisanship, imagination and repetition.

17
Q

(Effron and Helgason) When do people tend to fail to morally condemn misinformation?

A

when they have the feeling that the gist of the falsehood is true; when they are invited to imagine alternative realities in which the falsehood would be true (a counterfactual) or could still turn out to be true (a prefactual); or when they are repeatedly exposed to it.

18
Q

What do the arguments of Rini and Effron & Helgason have to do with the concept of truth as argued by Copson?

A

(Human nature strives towards truth and human life is better when people individually and together look for better ways of understanding the world.)

19
Q

When can deepfakes be used to enhance the agency of people?

A

(by giving greater means of self-representation).
This is the case when people are willing but unable to represent themselves in footage in particular ways, for example, due to disease, impairments, disabilities, distance or death.
Deepfake technology could also be used, for example, to visually represent oneself in a convincing way as a person with a different gender, which may be beneficial for persons who are seeking to determine their gender identity or prepare for a gender transition process.

20
Q

When can deepfakes be used to decrease people’s agency?

A

by instilling a sense of powerlessness over how others can present digital data that represent us, even in intimate and disturbing ways.

Deepfakes can also destabilise our capacity for intentional action by undermining trust in the reliability of testimonial knowledge on which we base at least part of our understanding of the world.

Deepfake technology exposes people to heightened and new forms of psychological and social harm, related to one’s self-image and reputational costs from being virtually misrepresented.

Deepfakes may be particularly pernicious in this regard because what is seen cannot be unseen.

21
Q

What are some negative sides of deepfakes?

A

Deepfakes can lead to political instability, apathy in the face of apparent evidence of human rights violations, and allow politicians to evade accountability by ascribing actions or speech to deepfakes.
Importantly, deepfakes require a more fine-grained analysis: the unit of analysis should not be ‘humanity’ as such; instead, we should pay attention to social groups that are particularly vulnerable to the harms of deepfakes.
Deepfakes are likely to harm particularly those who are already marginalised and stigmatised:
Deepfake technology offers new means to perpetuate sexual violence against women, transgender individuals and other victims.
The loss of trust in recordings particularly affects marginalised groups, whose testimony is, through epistemic injustice, not considered worth as much as that of dominant groups.

22
Q

What is a positive effect of deepfakes?

A

Deepfake technology can have positive effects, for example for people with ALS, whose voice can be ‘restored’. This may improve recognition of their agency, although it reflects a logic of needing to adhere to conventions of what it means to have a human voice.

Awareness of vulnerabilities may be heightened by using deepfakes for educational purposes.