Disinformation, Democracy Flashcards

1
Q

What is the difference between misinformation and disinformation?

A

Misinformation is false information shared without intent to deceive, while disinformation is deliberately misleading information intended to manipulate or harm.

2
Q

How does disinformation affect democracy?

A

It erodes trust in institutions, manipulates public perception, and destabilizes democratic systems.

3
Q

What is a deepfake?

A

A deepfake is an AI-generated video that mimics a real person, often used maliciously to deceive viewers.

4
Q

Why are women in politics particularly vulnerable to deepfake attacks?

A

Deepfake technology exploits misogyny, targeting women to discourage them from participating in public life.

5
Q

What societal group is most vulnerable to deepfakes?

A

Older voters and people with limited technological literacy.

6
Q

Name one solution to combat the misuse of AI in deepfakes.

A

Embedding ethics into AI development and implementing regulations for platforms spreading such content.

7
Q

What is astroturfing?

A

It is the creation of fake grassroots campaigns that simulate broad consensus, often by using AI to generate human-like, nuanced content at scale.

8
Q

How can subscription-based models reduce AI-driven disinformation?

A

By charging for accounts, platforms make it costlier for bots to operate at scale.

9
Q

Why are AI-generated fake opinions a threat to democracy?

A

They crowd out genuine discourse, undermine trust in online interactions, and distort public opinion.

10
Q

How do social media platforms struggle with AI-generated disinformation?

A

Platforms find it difficult to differentiate AI content from human-generated content at scale, leading to both under- and over-enforcement issues.

11
Q

What regulatory solution could reduce AI’s impact on disinformation?

A

Implementing Pigouvian taxes on bot-generated content or switching to subscription-based social media models.

12
Q

How can media literacy help combat AI-generated fake news?

A

By equipping individuals to critically evaluate content and discern truth from fabrication.

13
Q

What is content provenance?

A

A system that uses metadata to trace how AI was used in content creation, enhancing transparency.

14
Q

What is the role of detection tools in combating disinformation?

A

They identify and flag AI manipulations, helping to discern real from fake content.

15
Q

Why is detecting deepfakes challenging?

A

Technical advancements make deepfakes increasingly realistic, and detection tools often fail on low-quality or manipulated content.

16
Q

How are deepfakes used maliciously in politics?

A

By creating fake statements or actions by politicians, undermining trust and distorting public perception.

17
Q

Who are the key groups that need access to detection tools?

A

Journalists, community leaders, election officials, and human rights defenders.

18
Q

Name one structural solution to counter AI-generated disinformation.

A

Embedding accountability and transparency into AI development pipelines.

19
Q

What is the main argument against AI’s impact on elections?

A

AI’s influence on elections is minimal, and deeper societal issues like voter suppression pose greater threats.

20
Q

Why might AI-generated content fail to influence voters?

A

Voter behavior is shaped by complex factors like identity and values, limiting the impact of AI-driven persuasion.

21
Q

Why is mass persuasion by AI challenging?

A

AI-generated content struggles to cut through the noise of daily information and is often met with skepticism by voters.

22
Q

Name a bigger threat to democracy than AI, as highlighted in the article.

A

Voter suppression, intimidation, and political violence.

23
Q

How might an overfocus on AI harm democracy?

A

It can distract from addressing systemic issues that imperil democratic processes.

24
Q

What is the proposed shift in focus to protect democracy?

A

Addressing deeper issues like voter disenfranchisement and political oppression instead of solely targeting AI.

25
Q

How can AI act as an educator in democracy?

A

AI tools (e.g. interactive chatbots) can teach citizens about political issues, candidates, and policies, enhancing political literacy.

26
Q

What risks does AI as a propagandist pose?

A

It can create and distribute disinformation at scale, undermining trust in democratic systems.

27
Q

What role could AI play in moderating online discussions?

A

It could ensure inclusivity, highlight agreements, and block hateful or off-topic comments.

28
Q

What is one potential risk of AI acting as a political proxy?

A

It could disengage individuals from actively understanding and participating in democracy.

29
Q

How could AI improve the legislative process?

A

By drafting legislation, analyzing complex legal interactions, and identifying loopholes.

30
Q

Name one way AI could undermine democracy.

A

By acting as a propagandist, spreading disinformation or polarizing content.

31
Q

What are the societal impacts of false and misleading content online?

A

It contributes to political polarization, reduces trust in democratic institutions, and negatively impacts fundamental human rights.

32
Q

What is the main objective of the OECD Truth Quest Survey?

A

To measure the ability of individuals across 21 countries to identify false and misleading content online.

33
Q

What are the four main areas of focus for measuring false and misleading content online?

A

Content and circulation, origin (human vs. AI), user behavior, and perceptions.

34
Q

How does gamification benefit surveys on misinformation?

A

It increases engagement and simulates real-world conditions, improving data quality.

35
Q

What is the OECD taxonomy?

A

A framework categorizing false content into disinformation, misinformation, contextual deception, propaganda, and satire.

36
Q

OECD Truth Quest Survey: What is the average accuracy score across countries in identifying false content?

A

Respondents correctly identified the veracity of content 60% of the time.

37
Q

How does AI-generated content compare to human-generated content in terms of detection?

A

AI-generated content is easier to identify as true or false compared to human-generated content.

38
Q

OECD Truth Quest Survey: What demographic trends were observed in the survey?

A

Younger individuals are more confident but less accurate, while older individuals perform better.

39
Q

OECD Truth Quest Survey: What policy measures are suggested based on the findings?

A

Enhanced media literacy programs and the use of AI labeling to reduce misinformation.

40
Q

Why is ongoing research important in combating false and misleading content?

A

To adapt to the rapidly evolving digital landscape and emerging misinformation challenges.

41
Q

OECD Truth Quest Survey: How does the study propose using AI labels?

A

As a tool to signal content origin and potentially mitigate the spread of false information.

42
Q

What does “downstream” AI focus on in democracy?

A

Using AI to enhance democratic practices, such as leadership coaching or communication analysis.

43
Q

How does the “upstream” approach influence AI?

A

It focuses on designing AI systems with metrics that align with human agency and democratic values.

44
Q

What is the “dam approach” in regulating AI?

A

Establishing structures and policies to prevent AI from exacerbating power imbalances.

45
Q

What key question should guide AI’s development in democracy?

A

How can we use AI as a means to achieve human values and collective goals?

46
Q

Why is organizing essential in the age of AI?

A

It centers human agency and builds collective power to align AI’s use with democratic values.

47
Q

What does “hacking democracy” mean in the context of threatening democracy?

A

It refers to AI steering public opinion through misinformation and polarizing tactics to influence democratic outcomes.

48
Q

What structural solution can protect democracy from AI manipulation?

A

Citizens’ assemblies insulated from external influence can provide informed and unbiased decision-making.

49
Q

How does AI erode trust in democracy?

A

By spreading extreme content and undermining the credibility of institutions and democratic processes.

50
Q

What should be the focus of AI reforms to safeguard democracy?

A

Protecting collective decision-making and ensuring truth and fairness in political engagement.

51
Q

How can digital literacy combat AI disinformation?

A

By equipping individuals to critically assess and verify content before believing or sharing it.

52
Q

What is the “picture superiority effect”?

A

Information presented in visual form has a stronger impact on our attitudes than information we read as text.

53
Q

What is democracy?

A

A system of government by the whole population or all the eligible members of a state, typically through elected representatives.

54
Q

What is Artificial Intelligence?

A

The use of computers and machines to simulate human learning, comprehension, problem-solving, and decision-making.

55
Q

What are challenges to democracy from AI?

A
  • Disinformation and Propaganda
  • Surveillance and Privacy
  • Election Manipulation
  • Algorithmic Bias and Discrimination
  • Concentration of Power
56
Q

How can disinformation and propaganda challenge democracy?

A
  • AI-powered bots can amplify false narratives, sow division, and manipulate public opinion.
  • Algorithms on social media platforms can prioritize sensational or polarizing content, potentially
    undermining reasoned debate.
57
Q

How can surveillance challenge democracy?

A
  • AI technologies like facial recognition and predictive analytics can enable mass surveillance, threatening democratic freedom and privacy.
  • Authoritarian regimes might use AI to suppress dissent and control populations.
58
Q

How can election manipulation challenge democracy?

A
  • AI tools can micro-target voters with tailored political ads, sometimes based on misleading or manipulative information.
  • Automated misinformation campaigns can disrupt fair elections.
59
Q

How can algorithmic bias and discrimination challenge democracy?

A
  • Biases in AI systems can disproportionately harm marginalized groups, exacerbating social inequalities and undermining the democratic ideal of equal treatment.
60
Q

How can concentration of power challenge democracy?

A
  • The development and deployment of AI are often controlled by a few powerful corporations and governments, potentially centralizing power and influence over democratic societies.
61
Q

What are the three types of information disorders according to Wardle and Derakhshan (2017)?

A
  • Misinformation: false information without harmful intent (false connection, misleading content)
  • Disinformation: false information shared with harmful intent (false context, imposter / manipulated / fabricated content)
  • Malinformation: true information shared maliciously (leaks, harassment, hate speech)
62
Q

What is the main distinction between misinformation and disinformation?

A

The agent’s intent to harm or profit in disinformation, which is absent in misinformation.

63
Q

What are deepfakes, and why are they significant?

A

Deepfakes are AI-generated realistic but false media (audio, images, videos) that can manipulate public opinion and spread disinformation.

64
Q

What is the OECD taxonomy of false/misleading online content? (Lesher, Pawelec and Desai, 2022)

A

Four quadrants along two axes: no intent to harm → intent to harm, and no fabrication → fabrication.
  • Fabrication, no intent to harm: satire (creator)
  • Fabrication, intent to harm: disinformation, propaganda
  • No fabrication, no intent to harm: misinformation (spreader)
  • No fabrication, intent to harm: contextual deception (spreader)
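Treated as data, the taxonomy above is just a lookup over its two axes. The sketch below (the function name and code structure are illustrative assumptions, not part of the OECD framework) shows one way to encode it:

```python
# Illustrative sketch of the OECD taxonomy as a lookup over its two axes.
# Category names come from the taxonomy above; everything else is an
# assumption for illustration only.

def classify_content(fabricated: bool, intent_to_harm: bool) -> str:
    """Map the two taxonomy axes onto one of the four quadrants."""
    quadrants = {
        (True,  False): "satire",                     # fabricated, no intent to harm
        (True,  True):  "disinformation/propaganda",  # fabricated, intent to harm
        (False, False): "misinformation",             # not fabricated, no intent to harm
        (False, True):  "contextual deception",       # not fabricated, intent to harm
    }
    return quadrants[(fabricated, intent_to_harm)]

print(classify_content(True, True))    # disinformation/propaganda
print(classify_content(False, False))  # misinformation
```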

65
Q

What is the EU definition of disinformation? (2021)

A

Verifiably false, inaccurate or misleading information intentionally designed, presented and promoted to cause public harm or make a profit.

66
Q

What are tools to spread disinformation?

A
  • Social Engineering: Bots, Troll Farms, Astroturfing (deceptive practice of hiding the sponsors of an orchestrated message)
  • Generative Adversarial Networks (GANs): Deepfake Videos, Clones…
67
Q

What is the Truth Quest survey by OECD?

A

A gamified, web-based survey that simulates a ‘real-life’ social media site with true and false content, testing adults’ ability to spot false or misleading AI-generated online content.
The Truth Quest score is calculated by dividing correct responses by the total number of news items viewed.
40,765 people completed Truth Quest across five continents.
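The score described above is a simple proportion; a minimal sketch (function and variable names are illustrative assumptions):

```python
# Minimal sketch of the Truth Quest score: correct responses divided by
# total news items viewed. Names are illustrative, not from the OECD.

def truth_quest_score(correct_responses: int, items_viewed: int) -> float:
    """Return the share of items whose veracity was identified correctly."""
    return correct_responses / items_viewed

# A respondent who judged 12 of 20 items correctly scores 0.60,
# matching the survey's reported overall average of 60%.
print(f"{truth_quest_score(12, 20):.0%}")  # 60%
```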

68
Q

What is the outcome of the Truth Quest survey by OECD?

A
  • Overall Truth Quest score: 60% (respondents correctly identified true and false content 60% of the time); true claims: 56%; false or misleading content: 61%
  • Most respondents felt very or somewhat confident in spotting false content (confidence rises with education and income, declines with age; men are more confident than women)
  • Respondents who lacked confidence in their ability to recognize false content performed as well as those who felt confident.
69
Q

List three challenges AI poses to democracy.

A
  • disinformation and propaganda
  • surveillance and privacy issues
  • election manipulation
70
Q

List 3 ways deep-fakes may impact democratic discourse.

A
  • Disinformative video and audio
  • Exhaustion of critical thinking
  • The Liar’s Dividend
71
Q

Explain disinformative video and audio impact of deep-fakes

A

citizens may believe and remember online disinformation, which can be spread virally through social media

72
Q

Explain the exhaustion of critical thinking impact of deep-fakes

A

if citizens cannot know with certainty which news content is true or false, their critical thinking skills become exhausted, leaving them unable to make informed political decisions

73
Q

Explain the Liar’s Dividend

A

politicians can deny responsibility by claiming that genuine audio or video content is fake (in the way that ‘fake news’ has become a way of deflecting media reporting)

74
Q

How does AI contribute to disinformation campaigns?

A

By enabling the creation of realistic deepfakes, automating social engineering tactics, and amplifying false narratives through AI-powered bots and algorithms.

75
Q

What measures can help to reduce AI’s impact on disinformation in elections?

A

Proactive policy measures, private-sector actions, and increased public awareness of AI-generated content.
(Overuse of AI-generated content may produce a numbing effect, limiting the influence of AI disinformation on voters.)