Task 8 Flashcards

1
Q

What is the difference between an accident and an incident?

A

Accident → A negative event attributed to chance or bad luck.
Incident → A minor adverse event (e.g., a near miss) that does not cause significant harm.

2
Q

What does research suggest about accidents in the workplace?

A

Many accidents are not purely random; they often result from unsafe behaviors and predictable human tendencies.

3
Q

What is accident proneness?

A

Some individuals have a higher likelihood of experiencing accidents due to behavioral traits or risk perception.

4
Q

What is Risk Homeostasis Theory?

A

People adjust their behavior to maintain a certain level of risk. When safety measures improve, they may take more risks (e.g., driving faster when using anti-lock brakes).

5
Q

How does past experience influence risk perception?

A

People estimate risk based on past experiences and expectations, often leading to underestimation of everyday risks (e.g., driving) and overestimation of rare risks (e.g., flying).

6
Q

Why do people underestimate familiar risks?

A

Overconfidence, habit, and lack of negative experiences lead people to take familiar activities (e.g., driving) less seriously.

7
Q

Why is human error often blamed for accidents?

A

It is cheaper to blame humans than to redesign the system.
It reassures people that the system itself is safe, since the error seems preventable.

8
Q

Why are most accidents multi-causal?

A

They result from a combination of workplace design, procedures, training, communication, and human limitations.

9
Q

What are the four major types of human error?

A

Slips & Lapses → Execution errors (e.g., forgetting a step in a routine task).
Rule-Based Mistakes → Misapplying a known rule (e.g., misusing a safety procedure).
Knowledge-Based Mistakes → Errors in problem-solving under unfamiliar conditions.
Violations → Intentional rule-breaking due to time pressure or overconfidence.

10
Q

What is automation brittleness?

A

Automation performs well under expected conditions but fails in unexpected situations, requiring human intervention.

11
Q

What is the Out-of-the-Loop (OOTL) Problem?

A

When automation handles most tasks, humans lose situational awareness, making it harder to take control during failures.

12
Q

What factors influence trust in automation?

A

System reliability & performance (most important factor).
User experience & training.
Transparency of AI decisions.
Recent system failures.

13
Q

What is the Automation Conundrum?

A

The more automation improves a system, the less engaged humans become, making them worse at taking control when needed.

14
Q

What does the HASO model describe?

A

The HASO (Human–Autonomy System Oversight) model explains how human oversight, intervention, and interaction with automation affect system performance.

15
Q

What factors affect a human’s situation awareness (SA) when overseeing automation?

A

How information is presented (clarity, transparency).
Monitoring & vigilance (humans get distracted easily).
Trust levels (over-trusting or under-trusting automation).

16
Q

What is the Lumberjack Effect in automation?

A

The more automation assists during routine operation, the worse humans become at recovering when it fails: the bigger the benefit, the harder the fall.

17
Q

What is Construal-Level Theory (CLT)?

A

People interpret actions at different levels:

High Construal → Focus on why an action is done (big-picture thinking).
Low Construal → Focus on how an action is done (step-by-step details).

18
Q

How do people perceive Artificial Agents (AAs)?

A

They see humans as having high-construal thinking (big-picture goals).
They see AAs as having low-construal thinking (following programmed steps).

19
Q

How does this perception affect AI persuasion?

A

AI is more persuasive when it delivers messages in low-construal language (e.g., “Apply sunscreen by rubbing it on your skin” rather than “Protect yourself from UV damage”).

20
Q

What happens when AI is seen as capable of learning?

A

AI is perceived as more human-like.
High-construal messages become more persuasive.
AI is seen as having autonomy and goals.

21
Q

What did Study 1a show about AI perception?

A

People perceive AI actions in low-construal terms (step-by-step processes) rather than goal-driven thinking.

22
Q

What did Study 2 show about AI persuasion?

A

AI persuades better when using low-construal messages (detailed instructions instead of abstract reasoning).

23
Q

What did Study 3 show about AI learning?

A

AI with learning abilities is seen as more human-like, making high-construal messages more effective.

24
Q

What did Study 5 show about human-AI comparisons?

A

AI can be seen as more human if described with human-like traits.
Humans can be seen as machine-like if stripped of emotions and autonomy.

25
Q

What are the ethical risks of AI persuasion?

A

Manipulation → AI could exploit cognitive biases to influence decisions.
Transparency Issues → AI should clearly state when it is persuading.
Human-AI Distinction → People should not confuse AI with real human advisors.

26
Q

How can AI ethically build trust?

A

Be transparent about its capabilities.
Avoid deception in persuasive messaging.
Provide accurate and unbiased recommendations.

27
Q

What challenges remain for AI trust and autonomy?

A

Balancing automation with human oversight to prevent OOTL errors.
Ensuring AI persuasion is ethical and unbiased.
Improving AI’s ability to explain its decisions clearly.

28
Q

What is the future of AI persuasion and trust?

A

AI systems will be designed to be more transparent, user-adaptive, and capable of ethical persuasion while maintaining human oversight.

29
Q

What is the difference between active and latent failures in accidents?

A

Active failures → Immediate human errors that directly cause an accident.
Latent failures → Hidden issues (e.g., poor system design, lack of training) that contribute to accidents.

30
Q

Why is blaming human error problematic?

A

It overlooks systemic failures, such as bad design, unclear instructions, or unrealistic expectations.

31
Q

What is Normalization of Deviance?

A

When risky behaviors become accepted over time because they haven’t yet led to failure (e.g., skipping safety checks).

32
Q

How does optimism bias affect risk-taking?

A

People believe they are less likely to experience negative events compared to others, leading to riskier behavior.

33
Q

What is error priming?

A

When people expect failure, they are more likely to make mistakes.

34
Q

What is the Out-of-the-Loop (OOTL) Problem?

A

When humans lose awareness of a task due to excessive reliance on automation, making it harder to take over when needed.

35
Q

What is the Framing Effect?

A

The way information is presented influences decision-making (e.g., 90% survival vs. 10% death rate).

36
Q

What is automation complacency?

A

When users become too dependent on automation, leading to reduced vigilance.

37
Q

What are the three phases of trust development in AI?

A

Initial Trust – Based on design, reputation, and first impressions.
Dynamic Trust – Changes with user experience and system performance.
Long-Term Trust – Built over time through consistency and reliability.

38
Q

What factors reduce trust in AI?

A

Unexplained system failures.
Lack of transparency in AI decisions.
Over-complexity (if the user does not understand the AI).

39
Q

How do people perceive AI agents in persuasion?

A

AI is more persuasive when it:

Uses low-construal (detailed) messages.
Adapts communication style based on user perception.
Is perceived as capable of learning and improving.

40
Q

How does framing AI as a learner affect trust?

A

If AI is seen as capable of learning, people trust its reasoning more and respond better to high-construal (goal-based) messages.