Wk 13 - Extreme 2 Flashcards

1
Q

Can you do the 2x2 calculation reliably? (Suggest you invent your own figures and have a practice) (x5)

A

Remember to check whether the figures are percentages or probabilities – and convert to probabilities if needed (divide by 100; move decimal point 2 places to the left).

Columns are: Disease present, Disease absent
Rows are: Test positive, Test negative

Giving:
                Disease present     Disease absent
Test positive   Correct positives   False positives
Test negative   False negatives     Correct negatives
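
For practice, one invented set of figures (not from the lecture, purely for illustration): 1,000 people tested, 10% base rate, 90% sensitivity, 80% specificity:

                Disease present (100)   Disease absent (900)
Test positive   90 correct positives    180 false positives
Test negative   10 false negatives      720 correct negatives

Positive predictive value = 90 ÷ (90 + 180) = 33%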

2
Q

Explain what sensitivity is in the context of evaluating diagnostic tests (x3)

A

% of people with disorder who test positive
(correct positive rate;
i.e. correct positives divided by total with disease)

3
Q

How could you design a diagnostic test that would correctly diagnose every person who had a disease as having the disease? (x2)

A

By adopting a liberal response bias

ie err on the side of a positive diagnosis if there’s even a slight chance of having the disorder

4
Q

Why is it important that health professionals understand the calculations described in this lecture? (x2)

A

So that they don’t overstate the importance of a positive test result -
So they can explain to patients the logic behind testing and not panic them

5
Q

What procedure would you use to choose the optimal pass mark for your diagnostic test, if you wanted to maximize the discriminatory power of the test? (x1)
What was this initially designed for? (x1)

A

Use a ROC curve - Receiver Operating Characteristic

For radar in World War Two (where operators had to distinguish signals from noise on a radar screen)

6
Q

What two variables determine the number of correct hits and false positives in a diagnostic test?

A

Correct positive rate (sensitivity) versus

False positive rate (1 – specificity)

7
Q

What is a ROC curve? (x3)

A

A plot of Correct positive rate (sensitivity) versus
False positive rate (1 – specificity)
where each point on the curve is a different “pass mark” for the test

8
Q

Why might we want to choose different cut offs for diagnostic tests under different clinical conditions? (x1 plus e.g.)

A

It’s a judgment call dependent on the relative costs of false positives and false negatives
Eg in breast screening, tolerate heaps of FPs in order to get more TPs

9
Q

Draw a histogram of scores on a diagnostic test for a disease, showing (1) the distribution of people without a particular disease and (2) the distribution of people with a disease, a situation where the test does its job but not perfectly. Mark on a potential pass mark for the test and label which parts of the curves refer to (1) correct hits (2) false positives (3) correct negatives and (4) false negatives.

A

Do

10
Q

Sketch a ROC curve of a test that has reasonable diagnostic capability. Mark on the pass mark that is likely to yield greatest discrimination between those with and without the disease, with a note explaining why you placed the pass mark where you did.

A

Do

11
Q

Sketch two ROC curves, one showing a “worthless” test and one showing an “excellent” test.

A

Do

12
Q

On a ROC curve, how can we quantify the accuracy (diagnostic ability) of the test? (x2)

A

By calculating the area under the curve

as a proportion of the whole, so 1 = max, .5 = chance

13
Q

In terms of sensitivity and specificity, how would you choose the “most discriminating” pass mark for a diagnostic test? (x1)

A

By adding them up and finding the pass mark that gives the highest sum of the two

14
Q

What is the key thing you might want to know following a positive result on a medical or behavioural test? (x1)

A

What are the chances you ACTUALLY have the disease?

15
Q

What is often missed when people think about the accuracy of diagnostic tests? (x3 terms for same thing, plus define)

A

Base rate - the likelihood of the disease occurring in the population
Pre-test probability
Prior probability

16
Q

What is base rate neglect bias?

A

Failure to consider the prior/pre-test probability of disease occurrence in calculations of test accuracy

17
Q

The probability of a woman (aged 40-50, in a region with routine screening) having breast cancer is 0.8%. If a woman has breast cancer, the probability is 90% that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7% that she will still have a positive mammogram.
What is the probability that she does have breast cancer if she has a positive mammogram?

(Practice the calculation)
Pre-test probability = .008 (=.8%)
Sensitivity (% correct positives) = .90 (=90%)
Specificity (% correct negatives: 100%-7%) = .93 (=93%)

A

Pre-test probability = .008 (=.8%)
Sensitivity (% correct positives) = .90 (=90%)
Specificity (% correct negatives: 100%-7%) = .93 (=93%)

  1. Choose arbitrary number of people (100,000) and put in grand total box.
  2. Multiply Grand Total by Pre-test probability to get Total With Disorder
    = 100,000 x .008 = 800
  3. Grand Total minus Total With Disorder = Total without disorder
    = 100,000 – 800 = 99,200
  4. Multiply Total With Disorder by Sensitivity to get Correct Positives
    = 800 x .90 = 720
  5. Multiply Total Without Disorder by Specificity to get Correct Negatives
    = 99,200 x .93 = 92,256
  6. Compute False Positives and False Negatives by subtracting Correct Positives/Correct Negatives from column totals
    FP = 99,200 – 92,256 = 6,944
    FN = 800-720 = 80
  7. Compute Total positive and Total negative by adding up rows
    TP = 720 + 6,944 = 7,664
    TN = 80 + 92,256 = 92,336
  8. Predictive value of positive test (Positive Predictive Value) Correct Positive ÷ Total Positive
    = 720/7664 = .094
  9. Predictive value of negative test is Correct Negatives ÷ Total Negatives =
    92256/92336 = .999

Answer = 9%
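
To check practice figures by machine, here is a minimal Python sketch of the same grand-total procedure (the function name and layout are my own, not from the lecture):

  # Minimal sketch of the grand-total procedure above (illustrative, not the lecture's code)
  def predictive_values(pre_test_prob, sensitivity, specificity, grand_total=100_000):
      with_disorder = grand_total * pre_test_prob           # step 2
      without_disorder = grand_total - with_disorder        # step 3
      correct_pos = with_disorder * sensitivity             # step 4
      correct_neg = without_disorder * specificity          # step 5
      false_pos = without_disorder - correct_neg            # step 6
      false_neg = with_disorder - correct_pos
      total_pos = correct_pos + false_pos                   # step 7
      total_neg = correct_neg + false_neg
      return correct_pos / total_pos, correct_neg / total_neg   # steps 8 and 9

  ppv, npv = predictive_values(0.008, 0.90, 0.93)
  print(round(ppv, 3), round(npv, 3))   # 0.094 0.999 - a positive mammogram means roughly a 9% chance of cancer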

18
Q

Explain what specificity is in the context of evaluating diagnostic tests (x3)

A

% of people WITHOUT the disorder who test negative
(correct negative rate;
i.e. correct negatives divided by total without disease)
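
In symbols (using the deck’s terms: CP = correct positives, FP = false positives, FN = false negatives, CN = correct negatives):

Sensitivity = CP ÷ (CP + FN)   (CP + FN = total with the disease)
Specificity = CN ÷ (CN + FP)   (CN + FP = total without the disease)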

19
Q

What is the difference between sensitivity definitions in diagnostic test evaluation and signal detection theory? (x2)

A

In diagnostic test evaluation, it is the % of correct hits, but in

Signal detection theory, it is d-prime (d’) - overall discriminatory ability

20
Q

In 1998, what % of physicians could accurately assess the diagnostic ability of breast cancer screening?

A

Most said 90% off the top of their heads -
It is actually 9%
Only 10% of them could do the proper calculations

21
Q

Why do many diagnostic tests adopt a liberal response bias? (x2)

A

It’s considered better to minimise false negatives even though that means increased false positives,
Because the consequences of false negatives are regarded as worse than those of false positives

22
Q

What do we know about a test’s accuracy if it makes a high number of correct diagnoses? (x2)

A

Not much -

It may also give a high number of false positives

23
Q

What are the potential costs of false positive diagnostic test results? (2 egs, x2 and x2)

A

With repeated screening (10 screens), 1 in 2 women without breast cancer will probably receive at least 1 false positive -
300,000 women per year without breast cancer in the US undergo unnecessary breast surgery

Of the 22 blood donors in Florida in 1987 who were notified that they had tested HIV positive, 7 committed suicide (Gigerenzer, 2002), when
there was only a 50/50 chance the result was right

24
Q

What’s the actual chance of having HIV if you receive a positive test, assuming you have no known HIV risk behaviour?
(It’s the base rate here that’s crucial - these days, lots of ‘no risk’ people get routinely tested)

Practice the calculation:
Pre-test probability = .0001
Sensitivity (% correct positives) = .9998
Specificity (% correct negatives) = .9999

A
  1. Choose arbitrary number of people (100,000) and put in grand total box.
  2. Multiply Grand Total by Pre-test probability to get Total With Disorder
    = 100,000 x .0001 = 10
  3. Grand Total minus Total With Disorder = Total without disorder
    = 100,000 – 10 = 99,990
  4. Multiply Total With Disorder by Sensitivity to get Correct Positives
    = 10 x .9998 = 9.998
  5. Multiply Total Without Disorder by Specificity to get Correct Negatives
    = 99,990 x .9999 = 99980.001
  6. Compute False Positives and False Negatives by subtracting Correct Positives/Correct Negatives from column totals
    FP = 99,990 – 99980.001 = 9.999
    FN = 10-9.998 = .002
  7. Compute Total positive and Total negative by adding up rows
    TP = 9.998 + 9.999 = 19.997
    TN = .002 + 99980.001 = 99980.003
  8. Predictive value of positive test (Positive Predictive Value) Correct Positive ÷ Total Positive
    = 9.998/19.997 = .499
  9. Predictive value of negative test is Correct Negatives ÷ Total Negatives =
    99980.001/99980.003 ≈ 1.0

Answer = 50%
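
The same answer can be reached in one line from the probabilities themselves (a shortcut equivalent to the grand-total steps above, not shown in the lecture card):

PPV = (sensitivity × pre-test probability) ÷ [sensitivity × pre-test probability + (1 − specificity) × (1 − pre-test probability)]
    = (.9998 × .0001) ÷ (.9998 × .0001 + .0001 × .9999)
    = .00009998 ÷ .00019997
    ≈ .50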

25
Q

What are the general causes of concussion in Australia? (x5)

A
Car crashes
Falls
Recreational activities
Assaults
Occupational accidents
26
Q

What is the most common form of traumatic brain injury? (x1)

A

Concussion

27
Q

What are the long term effects of repeated concussion? (x4)

A

Cognitive effects/deficits sometimes evident months later
Cumulative…
Potential permanent brain damage
Associated with early onset dementia

28
Q

How did Sara Olsen’s honours project seek to assess the improvement of concussion detection through the addition of a new test (of phonological loop processing) to an existing battery? (x6)

A

Recruited Ps with/without head injuries -
Controls had orthopaedic injuries, for similar pain/stress etc
Gave the battery, including the new test, to get a total cognitive score -
Calculated the difference the new test’s inclusion made
Used ROC curves to maximize the ability of the scores to make correct classifications for both old and new batteries
Then checked if the new test made a significant improvement

29
Q

On a ROC curve, what happens when you change the pass mark to increase the hit rate? (x2)

A

False positive rate goes up too

ie sensitivity increases, but specificity decreases

30
Q

What can we infer from the ‘curviness’ of a ROC curve? (x2)

What does the area under the curve equate to? (x2)

A

The ability of the test to discriminate
ie better the further away from the diagonal it gets
The overall diagnostic accuracy of the test (regardless of pass mark chosen/actual scores)

31
Q

What point on a ROC curve shows us where the most accurate discrimination will take place? (x3)

A

The spot where the sum of sensitivity and specificity is highest
ie it maximises correct positives and correct negatives, and minimises false positives and false negatives

32
Q

How do you use a ROC curve to calculate maximum categorisation/discrimination?

A

Paste the SPSS ROC curve function table into Excel
Add up sensitivity and specificity for every pass mark
Choose the highest sum - and note what the pass mark is at that point
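
The same steps can be scripted rather than done in SPSS/Excel; a minimal Python sketch (assuming scikit-learn is installed; the scores and diagnoses are made up purely for illustration):

  import numpy as np
  from sklearn.metrics import roc_curve, roc_auc_score

  # Made-up data: test scores and true diagnostic status (1 = disorder present)
  scores = np.array([3, 5, 7, 8, 10, 11, 13, 14, 16, 18])
  has_disorder = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])

  # One (sensitivity, 1 - specificity) pair per candidate pass mark
  fpr, tpr, thresholds = roc_curve(has_disorder, scores)
  sensitivity, specificity = tpr, 1 - fpr

  best = np.argmax(sensitivity + specificity)     # highest sum = most discriminating pass mark
  print(thresholds[best], sensitivity[best], specificity[best])
  print(roc_auc_score(has_disorder, scores))      # area under the curve = overall diagnostic accuracy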

33
Q

What are the cut-offs for diagnostic accuracy as revealed by the area under a ROC curve? (x6)

A

.90-1.0 excellent (1 = maximum; fills entire graph)
.80-.90 good
.70-.80 fair
.60-.70 poor
.50-.60 fail
.50 is a straight diagonal which is chance

34
Q

What is the purpose of psychophysical measurement? (x1)

A

To tell us how humans perceive the world

35
Q

What is the Thompson effect? (x1 plus original test items and RL e.g.)

A

Images that are reduced in contrast appear to move more slowly than they actually are
In the original test items, when high-contrast horizontal lines move at the same time as low-contrast ones, the low-contrast lines seem slower
eg misperception of speed in fog conditions

36
Q

What relationship to crash risk does the Thompson effect have? (x2)

A

Driving speed is one of the biggest predictors of accident risk

So an effect that may influence it is of interest - ie fog = reduced contrast

37
Q

What RL medical condition may lead to the Thompson effect? (x2)
What is its prevalence? (x1)
With what effect on people? (x1)

A

Cataracts - they act like filters that reduce contrast in the visual field
= misperception of object speed
50% of people have them by age 75
They have 2.5 times the crash risk of controls

38
Q

What is the Method of Constant Stimuli? (x3)

A

Involves generating a range of stimuli that vary in whatever you’re interested in
Presenting them in random order, and
Getting people to make a judgement on them

39
Q

Draw a graph of a psychometric function as generated by a psychophysical discrimination experiment (e.g. someone attempting to discriminate between different speeds in a driving simulator). Label both axes. On the graph, draw what the data of (1) someone who performed at chance and (2) someone who can correctly detect even the smallest difference between two speeds would look like.

A

Do

40
Q

Why might you induce the Thompson effect with foggy goggles, rather than using people with cataracts? (x1)

A

In order to reduce other confounds, eg ageing

41
Q

How did Mark use the Method of Constant Stimuli in his research? (x7)

A

Used a single video sequence of travelling along a clear road.
This was sped up on a computer to generate different speeds
Ps were shown pairs of scenes at different speeds -
Had to judge which scene was faster
One was always 60kph (reference scene) and the other ranged from 50kph to 70kph (test scene)
Two conditions - one with both clips clear, one with the test scene made foggy
Plotted a psychometric function for each Ps

42
Q

When do you use a psychometric function? (x1)

And what does the slope of the graph tell us? (x1)

A

To graph the performance of Ps on a method of constant stimuli task
The steeper it is, the better people can tell the stimuli apart

43
Q

On a psychometric function, what do we learn from the point where the line crosses the 0.5 mark? (x1 plus e.g. x2)

A

Whether there’s any systematic bias
ie if it crosses the 0.5 mark at the reference speed of 60kph then there is no bias:
Ps are 50% likely to say the test scene is faster and 50% likely to say it is slower

44
Q

(Using driving research example) What shape line indicates perfect performance on a psychometric function of the method of constant stimuli task? (x3)
What indicates chance? (x1)

A

Tail of ‘s’ is flat (at a probability of zero) for those scenes slower than the reference speed (60kph),
Passes through the .5 mark at 60kph, then
Head of ‘s’ is flat across the top (at a probability of 1)

A flat function running along the .5 mark

45
Q

What did psychometric functions reveal about the impact of fog on perceptions of driving speed? (x5)

A

The fog/clear line crossed the 0.5 point well to the right of the reference speed (60kph) -
means fog makes things look slower.
ie, people are saying the 64kph test scene with fog looks about the same as the 60kph scene without fog.
The fog/clear line is also shallower than the clear/clear line -
means that fog makes speeds harder to tell apart.

46
Q

How were the psychometric functions of fog/clear driving data used to interpret the results of the experiment? (x3)

A

Calculate the slope and 0.5 crossing point of the psychometric function
For both the fog and clear conditions for each Ps,
By fitting a mathematically-defined curve to the data
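
A minimal Python sketch of that curve-fitting step (a logistic curve and scipy are assumed here for illustration; the card doesn’t say which mathematical form was actually fitted):

  import numpy as np
  from scipy.optimize import curve_fit

  # Made-up data: proportion of "test scene looked faster" responses at each test speed (kph)
  test_speeds = np.array([50, 52.5, 55, 57.5, 60, 62.5, 65, 67.5, 70])
  p_faster = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70, 0.85, 0.93, 0.97])

  def logistic(x, crossing, slope):
      # crossing = speed at the 0.5 point (bias); slope = steepness (how well speeds are told apart)
      return 1 / (1 + np.exp(-slope * (x - crossing)))

  (crossing, slope), _ = curve_fit(logistic, test_speeds, p_faster, p0=[60, 0.5])
  print(crossing, slope)   # compare these values between the fog and clear conditions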

47
Q

What if people with cataracts calibrate for the shift in their perception of speed using their speedometers? (x1)

A

May still have an increased chance of misjudging speed because they find it harder to tell different speeds apart

48
Q

What are three possible solutions to the problem of the Thompson effect in drivers?

A

Remove cataracts faster
Increase contrast in road environment
Give drivers perceptual training to improve their ability to tell apart different speeds