Week 5: Deepfakes and the Challenge to Truth Flashcards
What are deepfakes?
"Deepfake" refers to machine learning techniques that can be used to produce synthetic but realistic-looking and/or realistic-sounding video, audio, or image files.
(De Ruiter) What is the main argument of her article?
deepfakes are morally suspect, but not automatically morally wrong.
Deepfake technology, used to create video or audio footage, is morally suspect because it allows users to portray people as realistically saying or doing things they did not necessarily do or say.
She argues that the moral evaluation of specific deepfakes depends on whether the represented person(s) would object to the way in which they are represented, on whether the deepfake deceives viewers, and on the intent with which the deepfake was created.
(De Ruiter) When are deepfakes morally wrong?
The key factor that renders deepfakes morally wrong is the use of digital data representing the image and/or voice of persons to portray them in ways in which they would be unwilling to be portrayed.
(De Ruiter) Why can this link to deception be perceived as rendering this technology intrinsically morally problematic? And is it always deceptive?
“Deception violates norms of truthfulness and risks undermining people’s autonomy and ability to pursue actions in line with their own will.”
Yet deepfakes do not always bring about deception. For example, we would not say the deepfake created for the Malaria Must Die initiative is deceptive.
(De Ruiter) What is the close link of deepfakes with deception?
The aim is to produce artificial footage that seems convincingly real.
(De Ruiter) What are the two contexts in which deepfake technology is particularly relevant?
when people are unable to appear in footage as saying or doing certain things;
when they are unwilling to appear in footage as saying or doing certain things.
(De Ruiter) When can deepfakes be harmful or beneficial?
Deepfake technology is harmful when it is used to represent persons in ways in which they do not wish to be portrayed, but can be beneficial when it helps people present themselves in ways they would like to portray themselves but are unable to otherwise.
(De Ruiter) What is the connection with agency and vulnerability?
This entails that deepfake technology can be used to enhance people's agency and decrease their vulnerability (by giving them greater means for self-representation), but it can also render them more vulnerable and instill a sense of powerlessness (because others can misrepresent us without us being able to do much about it).
(De Ruiter) Why do non-consensual deepfakes strike at the heart of our sense of self?
because we experience representations of our body as representations of ourselves.
(De Ruiter) What is violated in non-consensual deepfakes? And what is the impact on agency and vulnerability?
people’s right to digital self-representation
This right entails that others may not manipulate digital data that represent people’s image and voice, as markers of the self, in hyper-realistic footage that presents them in ways to which they would reasonably object.
This right could enhance the agency of people by granting them a say in how they may be virtually represented and reduce their vulnerability by limiting their susceptibility to harmful representations.
(Rini) What does she argue? What is she concerned about?
that we should consider how deepfakes may undermine norms regulating testimony.
She is concerned that with the advent of deepfakes, recordings may lose their function as an epistemic backstop.
(Rini) What does she mean by an epistemic backstop?
Epistemology = theory of knowledge (What can we know? How can we know it?)
Backstop = something at the back serving as a stop or support
Therefore, an epistemic backstop is something we can depend on to support our (trust in) (shared) knowledge.
(Rini) Why is this epistemic backstop relevant in the public domain?
Because we currently assume testimony is reliable, since recordings can serve to verify it.
Recordings are likely to lose this backstop function if deepfakes become widely distributed and people become aware that they cannot automatically trust recordings that seem convincingly real.
When people present recordings as evidence, those recordings may themselves be deepfaked. This undermines general trust in recordings.
Also: People can deny the veracity of real recordings, claiming they are deepfakes.
If machine-learning techniques reach the point where real and artificial material can no longer be distinguished, we enter into 'epistemic chaos'.
(Rini) What implications does Rini’s argument have for people’s vulnerability and agency?
Consider court cases, where the veracity of recordings as evidence can less easily be assumed, or epistemic injustice against members of marginalised groups, for whom the rise of smartphones provided easy access to recordings that could serve as evidence of mistreatment.
(Effron and Helgason) What do they highlight?
They highlight the distinction between believing and condoning misinformation: even if people do not believe misinformation, they may still condone it because they support its underlying message.