Lecture 2 'The Problem with Human Error' Flashcards

1
Q

What is the definition of ‘human error’?

A

inappropriate human behaviour that lowers levels of system effectiveness or safety

2
Q

What are the four classifications of human error? Describe the intentionality of each one

A
  1. Slips - unintended incorrect act
  2. Lapses - unintended failure to act
  3. Mistakes - intended, knowledge/rule-based
  4. Violations - intended non-compliance (benign/malign)
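A minimal sketch of how the intentionality axis separates these four categories, written as illustrative Python (the function name, inputs and wording are assumptions made for this sketch, not terms from the lecture):

def classify_error(outcome_intended: bool,
                   acted: bool,
                   deliberate_noncompliance: bool) -> str:
    """Rough decision logic for the four classifications above.

    Illustrative only: real incidents need context, and the boundaries
    between the categories are debated in the human factors literature.
    """
    if deliberate_noncompliance:
        # The person knowingly departed from the rule or procedure
        return "Violation (intended non-compliance, benign or malign)"
    if not outcome_intended:
        # What happened was not what the person meant to do
        if acted:
            return "Slip (unintended incorrect act)"
        return "Lapse (unintended failure to act)"
    # The act went as planned, but the plan/rule/knowledge behind it was wrong
    return "Mistake (intended act, knowledge/rule-based)"

# Example: selecting the wrong procedure while believing it is correct
# -> an intended act, no deliberate non-compliance -> Mistake
print(classify_error(outcome_intended=True, acted=True,
                     deliberate_noncompliance=False))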
3
Q

What kind of factors are associated with accidents?

A
1 - employee-related
2 - job-related
3 - equipment/tool-related
4 - physical environment
5 - social/psychological environment
4
Q

When designing warning labels, what three things do you need to consider?

A

People must be able to NOTICE/SEE the warning
People must be able to READ the warning
People must COMPLY with the warning

5
Q

According to a study by Nevo, were healthcare practitioners generally worse at remembering to wash their hands when entering or when leaving an examination room? What was the best method for improving compliance?

A

Leaving (because it was out of sight)

Having a warning label = best. However, the long-term effect may be eliminated through habituation

6
Q

What are two problems with defining human error as being caused by ‘inappropriate human behaviour’?

A
  1. “inappropriate” human behaviour often RAISES levels of system effectiveness or safety
  2. “appropriate” human behaviour can cause accidents by lowering system effectiveness or safety
7
Q

Explain the Swiss Cheese model

A

Accidents are caused by multiple factors which have ‘gone through’ layers of defence put in place by an organisation. Despite the multiple layers, holes may still line up, enabling an error trajectory to pass through all of them
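One informal way to picture the model’s defence-in-depth logic is as a toy probability calculation, assuming (purely for illustration; independence of layers is not part of Reason’s model) that each of n defensive layers has an independent chance p_i of having a hole at the critical moment:

P(\text{accident}) = \prod_{i=1}^{n} p_i = p_1 \times p_2 \times \dots \times p_n

Adding layers with p_i < 1 drives the product down (the “more slices avert more errors” reading), but it never actually reaches zero: holes can still line up, which fits the later criticism that no number of defensive layers makes accidents fully preventable.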

8
Q

In the Swiss Cheese model, the error trajectory passes through holes in the layers of defence. What are these holes called?

A

latent conditions

9
Q

What are six limitations of the Swiss Cheese model?

A
  • holes are sometimes only identifiable in hindsight
  • cannot predict future failures
  • metaphor only (what are “holes”?)
  • focus on defence = more defence not necessarily safer
  • suggests all accidents are preventable
  • current thinking: accidents caused by processes being too complex to be fully predictable and therefore fully preventable
10
Q

Aircraft carriers are considered to be systems full of holes, and yet they work. What are two reasons for this?

A

flexibility

adaptability

11
Q

Resilience engineering focuses on what to combat what?

A

How good human adaptability and variability are - to combat the current thinking of “blame the person/system > fix the person/system”

12
Q

What is the difference between Safety I and Safety II?

A

Safety I - assumption that safety can be established/maintained by keeping human performance within boundaries or norms. “Error” is something that can be counted

Safety II - people can actually contribute positively to safety by being adaptive: they may address gaps in system design and cope with unplanned situations. “Error” isn’t as important as having a theory of action that includes an account of performance variability

13
Q

What are the 10 misguided questions about error? HARD BONUS QUESTION

A
  1. Was it mechanical failure or human error?
  2. Why do safe systems fail?
  3. Why are doctors more dangerous than gun owners?
  4. Don’t errors exist?
  5. If you lose situation awareness, what replaces it?
  6. Why do operators become complacent?
  7. Why don’t they follow procedures?
  8. Can we automate human error out of the system?
  9. Will the system be safe?
  10. Should people be held accountable for mistakes?
14
Q

Compare the old vs. new view of whether accidents occur because of mechanical failure or human error?

A

Old = If no mechanical failure, must be human error (error is satisfactory to explain accident)

New = Error as a starting point for investigation (not the cause)

15
Q

What is the formula for human error?

A

human error = f(1 - mechanical failure)
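Restated in probability terms (an illustrative reading of the old-view logic this formula captures, not the lecture’s own notation):

P(\text{human error}) = 1 - P(\text{mechanical failure})

i.e. once mechanical failure has been ruled out, whatever remains is booked as “human error”: error becomes a residual category, which is why the new view in the previous card treats error as the starting point of an investigation rather than its conclusion.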

16
Q
What does Dekker (2005) suggest is a reason why a safe system may fail?
A

Efficiency often takes precedence over thoroughness

17
Q
Why does Dekker (2005) believe it is misguided to ask why doctors are more dangerous than gun owners?
A

Because counting errors / looking at statistics will not help to understand system safety

18
Q
Why does Dekker (2005) believe it is misguided to ask if errors exist?
A

An ‘error’ is only considered so by the outside. Humans attribute the cause to an ‘error’ of some sort to make sense of the world

19
Q
Why does Dekker (2005) believe it is misguided to ask what replaces situation awareness if it becomes lost?
A

Situation awareness is a folk concept: there is no ideal level of “situation awareness”

20
Q
Why does Dekker (2005) believe it is misguided to ask why operators become complacent?
A

Automation complacency is a folk concept

21
Q
Why does Dekker (2005) believe it is misguided to ask why operators do not follow procedures?
A

Sometimes procedures do not work (e.g. a power plant failed its licensing test when the plant used its own operating methods, but when operators followed the procedure a major procedural loophole was discovered)

22
Q
Why is it misguided to ask if we can automate human error out of the system? / Is it possible to automate human error out of the system? (Q. 12)
A

No, because automation increases human workload (operators have to check the automation constantly)

23
Q

Why does Dekker (2005) believe it is misguided to ask if the system will be safe?

A

There is never a guarantee; the system is always changing/adapting

24
Q

Why does Dekker (2005) believe it is misguided to ask if we should hold people accountable for their mistakes?

A

> Cannot punish people and have organisational learning at the same time

> Second victim issue (e.g. a nurse who committed suicide after making a mistake)

25
Q

(Q. 10) Why is “human error” not an accurate or useful concept for making complex sociotechnical systems safer, according to many current human factors theorists?

A

“Human error” does not identify the aspects of a situation that make the activity that led to an accident more likely

26
Q

(Q. 16) According to Dekker (2006), “the operator lost situation awareness” is what kind of statement?

A

Only a description, using a theoretical construct, of how accidents sometimes happen

27
Q

(Q. 11) Selecting the wrong procedure to solve a problem in an industrial plant, but without knowing the procedure is wrong, is an example of which kind of error?

A. Slip
B. Lapse
C. Mistake
D. Violation

A

(the act of selecting was intended, but the rule/plan behind it was wrong) = C. Mistake

28
Q

(Q. 8) Which is the best description of the message underlying Reason’s (1990) Swiss Cheese Model?

A. More “slices” of defence create more opportunities for error-producing conditions to be missed.
B. The model shows that the best way to reduce accidents is to prevent workers from making active failures.
C. More “slices” of system defence make it more likely that errors will be averted.
D. A model of system defences can help analysts predict the build-up of latent failures.

A

C. More “slices” of system defence make it more likely that errors will be averted.

29
Q

(Q.9) Which of the following is a reasonable criticism of Reason’s (1990) Swiss Cheese Model?

a. Latent failures are difficult to identify in hindsight.
b. The model suggests that all accidents are preventable.
c. The model cannot illustrate the principle of “defence in depth”.
d. The model cannot distinguish between plausible “active triggers” for an accident.

A

b. The model suggests that all accidents are preventable (are they?)