Lecture 10 Flashcards

1
Q

Lecture Goals

A
  • Understand how algorithmic trust can lead to
    automation bias
  • Explain the factors behind automation bias
  • Illustrate, through cases, how algorithmic biases
    can negatively impact one’s decision-making
    process
2
Q

TAM (Technology Acceptance Model)

A

Technology Acceptance Model:

What predicts one’s usage of technology?

  • Perceived ease of use
  • Perceived usefulness

Based on the theory of planned behaviour:

  • To perform a specific behaviour, you need intention
  • Intention is conditional on parameters such as
    attitude, subjective norm, and perceived
    behavioural control
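
As a rough illustration (a minimal sketch, not from the lecture), both models can be written as weighted combinations of their predictors; all weights and names below are hypothetical placeholders:

    # Hypothetical sketch of TAM and the theory of planned behaviour.
    # All weights are illustrative placeholders, not empirical estimates.
    def tpb_intention(attitude: float, subjective_norm: float,
                      perceived_control: float) -> float:
        w_att, w_norm, w_pbc = 0.5, 0.3, 0.2  # assumed weights
        return (w_att * attitude + w_norm * subjective_norm
                + w_pbc * perceived_control)

    def tam_intention(ease_of_use: float, usefulness: float) -> float:
        # TAM: perceived ease of use and perceived usefulness predict usage
        w_eou, w_use = 0.4, 0.6  # assumed weights
        return w_eou * ease_of_use + w_use * usefulness

    # Example: a tool rated easy to use (0.8) and useful (0.9)
    print(tam_intention(0.8, 0.9))  # -> 0.86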

3
Q

Relation of TAM and trust

A

Trust has been shown to be a major component in the use of automated systems (Wu et al., 2011):

it seems to modulate the attitudes, intentions, and possibly the behaviour of users of automated systems.

So trust in technology, as seen above, is an important factor.

4
Q

ALGO vs. HUMAN study

[This is the study that Logg et al. (2019) ran. In their first three experiments, they sought to
determine whether algorithmic advice was trusted more than human judgment regarding:
- Weight estimation
- Song rank
- Attractiveness of a prospective mate
- Weight (again)
- Age
They also asked other researchers what they thought would happen in the romantic attraction
study.]

A

What it found:

Are people overconfident in algorithms and technology? It seems so.
Automation bias is the overreliance on algorithmic advice, to the point that human
beings commit commission errors and omission errors (i.e., complacency).
The research background for this bias comes from aviation.

An example of this:

NASA
During a flight simulation, pilots were given automated actions to perform.
At one point, the automated agent told them to turn off engine 1, while the pilots could
see that the problematic engine was engine 2.
75% of the pilots turned off engine 1, despite knowing that if they had done so in real life, they would have died (Mosier et al., 1992).

5
Q

Externalised vs. internalised accountability: effects on automation bias

A

Interestingly, in a study similar to the NASA one, Mosier and colleagues (1998) tested whether

making people feel accountable for their flight performance would affect the extent to which
they bowed to the automation bias.

It did not: externalised experiences of accountability do not seem to affect automation bias.

However, an internalised sense of accountability does have an impact: pilots with a
higher internal sense of accountability were less likely to be influenced by the automation
bias.

6
Q

Which confounding factors facilitate overreliance on automated
systems, according to Goddard?

A

Individual differences:
  • Confidence
  • Trust
  • Task experience
  • DSS (decision support system) experience

Environmental factors:
  • Task complexity
  • Time pressure

DSS–user interaction:
  • Cognitive fit

7
Q

Omission error:

A

An omission error is an error made by omitting your own knowledge and failing to act. Its opposite, where you trust the machine and act on its advice, is a commission error. Both are errors of cognition.

More importantly, what the findings show is that omission errors are associated with lower cognitive load, but not with task complexity itself. This means that individuals allocate cognitive resources independently of the task’s actual complexity: we may not be entirely rational and may not allocate cognitive resources to what we should focus on.

Basically, omission errors are linked to less mental effort. This does not mean the task is less complex, but rather that people deploy their mental resources separately from how complex the task actually is. They might rely on the automated system because it is easier on their brain, not because the task is simple.

8
Q

AI suppression strategy:

A

AI advice with a higher probability of yielding a misleading diagnosis would be retracted, and physicians would be allowed to reassess their opinion.
With the suppression strategy, AI-related mistakes were estimated to
decrease by 8.6%.
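
A minimal sketch of such a suppression rule (my own illustration; the threshold, the names, and how the probability of a misleading diagnosis is estimated are all assumptions, not specified by the source):

    from typing import Optional

    # Hypothetical sketch of an AI suppression strategy.
    # THRESHOLD and p_misleading are assumed; the source does not
    # specify how the probability of a misleading diagnosis is computed.
    THRESHOLD = 0.7

    def present_diagnosis(ai_diagnosis: str,
                          p_misleading: float) -> Optional[str]:
        if p_misleading >= THRESHOLD:
            return None  # retract the advice; physician reassesses unaided
        return ai_diagnosis

    advice = present_diagnosis("pneumonia", p_misleading=0.85)
    if advice is None:
        print("AI advice suppressed; physician reassesses independently.")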

9
Q

Effectiveness of AI

A

In conclusion, AI is interesting and useful, but humans are biased and become overconfident in automated systems. The best safeguard against overreliance is having the opportunity not to rely on them when you do not want to.

Although human beings are built to be trusting, they are also biased. And the combination
of the two can be dreadful, both regarding their own performance (i.e., metacognitive
overconfidence) and their reliance on automated systems.

10
Q

Omission vs. commission errors

A

Commission Errors:

Definition: Acting on incorrect or flawed advice from an automated system.
Example: A pilot follows a faulty automated navigation system’s directions, resulting in flying off course, despite noticing discrepancies.
Omission Errors:

Definition: Failing to act because of over-reliance on an automated system to manage everything correctly.
Example: A healthcare professional does not administer a necessary treatment because the automated system did not alert them to do so, even though there were signs the system missed.
Difference:

Commission Errors: Wrong actions taken based on automated advice.
Omission Errors: Necessary actions not taken due to over-reliance on automation.

In simple words:

Commission Errors: Mistakes made by following incorrect advice from the automated system.

Omission Errors: Mistakes made by failing to act because of over-reliance on the automated system, ignoring what we know or should do.
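
As an illustration (my own sketch, not from the lecture), the two error types can be distinguished by whether the system’s advice was wrong and whether the human acted on it:

    # Hypothetical classifier for automation-bias error types.
    # ai_advised_action: the automated system advised taking an action
    # ai_correct:        whether the system's output was actually correct
    # human_acted:       whether the human took the action
    def classify_error(ai_advised_action: bool, ai_correct: bool,
                       human_acted: bool) -> str:
        if not ai_correct and ai_advised_action and human_acted:
            return "commission error"  # followed wrong advice
        if not ai_correct and not ai_advised_action and not human_acted:
            return "omission error"    # missing alert, needed action skipped
        return "no automation-bias error"

    # Pilot follows a faulty instruction to shut down the wrong engine:
    print(classify_error(ai_advised_action=True, ai_correct=False,
                         human_acted=True))  # -> commission error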
