Lectures 1-4 Flashcards

1
Q

What are the 5 As of the Evidence based practice model?

A
  1. Ask the right (PICO) question
  2. Access relevant evidence
  3. Appraise the evidence
  4. Apply the evidence (e.g., intervention, assessment tool)
  5. Assess its effectiveness.
2
Q

What does PICO stand for?

A

Population/patient/problem

Intervention

Comparison

Outcome

3
Q

What does PECO stand for?

A

Population/patient/problem

Exposure

Comparison

Outcome

4
Q

What does RAMMbo stand for?

A

Recruitment (are participants representative of the target population?)

Allocation (was the assignment to treatments randomised? How was it randomised? Were the groups similar at the start of the trial?)

Maintenance (were the individuals within groups treated equally? Were the outcomes ascertained and analysed for most participants?)

Measurement (were the participants and clinicians Blinded to treatment? Were measurements Objective and standardised?)

5
Q

What is a CONSORT diagram?

A

A flow chart of the participants excluded and included in the study

6
Q

What is stratified cluster sampling?

A

When you deliberately recruit to get particular ratios of subgroups

7
Q

What is the mean and standard deviation of a z score?

A
Mean = 0
SD = 1
8
Q

What is the formula for calculating a raw score?

You need to memorise this formula for the exam

A

z = (raw score − mean of raw scores) / SD of raw scores; rearranged, raw score = mean + (z × SD)

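A minimal sketch of this calculation in Python; the example numbers (raw score 130, mean 100, SD 15) are hypothetical, not from the lectures.

```python
def z_score(raw, mean, sd):
    """Convert a raw score to a z score: z = (raw - mean) / SD."""
    return (raw - mean) / sd

def raw_score(z, mean, sd):
    """Rearranged form: raw = mean + z * SD."""
    return mean + z * sd

# Hypothetical example: raw score 130 on a scale with mean 100, SD 15
z = z_score(130, 100, 15)      # 2.0
back = raw_score(z, 100, 15)   # 130.0
print(z, back)
```
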
9
Q

When is the need for scientist-practitioner skills most compelling?

A

When the evidence is equivocal or lacking

Shapiro reading

10
Q

What are the 6 core competencies of the scientist-practitioner according to the Shapiro paper?

A

Core competencies:

  1. Delivering assessment and intervention procedures in accordance with protocols
  2. Accessing and integrating scientific findings to inform healthcare decisions.
  3. Framing and testing hypotheses that inform healthcare decisions.
  4. Building and maintaining effective teamwork with other healthcare professions that supports the delivery of scientist-practitioner contributions.
  5. Providing research-based training and support to other health professions in the delivery of psychological care.
  6. Contributing to practice-based research and development to improve the quality and effectiveness of psychological aspects of health care.
11
Q

What is the mean and SD of a t score?

How do you get a t score from a z score?

A
Mean = 50
SD = 10

To convert a z score to a T score: multiply by 10 and add 50 (T = 10z + 50)

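A quick sketch of the z-to-T conversion from this card in Python; the example values are made up for illustration.

```python
def t_score(z):
    """T score: multiply the z score by 10 and add 50 (mean 50, SD 10)."""
    return 10 * z + 50

print(t_score(0))    # 50 -> exactly average
print(t_score(1.5))  # 65.0 -> 1.5 SDs above the mean
```
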
12
Q

What percentage of scores in a normal distribution fall between 1 SD below and 1 SD above the mean?

A

68% of scores

13
Q

95% of scores fall within … standard deviations on either side of the mean

A

2 (1.96, to be exact)

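A small check of the 68%/95% figures from cards 12-13 using the normal distribution. This assumes SciPy is available, which the lectures do not require.

```python
from scipy.stats import norm

# Proportion of a normal distribution within +/-1 SD and +/-2 SD of the mean
within_1sd = norm.cdf(1) - norm.cdf(-1)        # ~0.683
within_2sd = norm.cdf(2) - norm.cdf(-2)        # ~0.954
within_196 = norm.cdf(1.96) - norm.cdf(-1.96)  # ~0.950 (the exact 95% cut-off)
print(within_1sd, within_2sd, within_196)
```
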
14
Q

Whereabouts can you lose resolution within percentile ranks?

What does this mean?

A

Resolution is lost in the middle percentile ranks (i.e., around 50). This means it is harder to differentiate scores there

15
Q

How many divisions in the stanine scale?

How wide is each division, in SDs?

A

9 divisions; each division is 0.5 of a standard deviation wide.
The middle band (5; the scale goes from 1 to 9) runs from −0.25 to +0.25 SDs.

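A sketch of one common way to convert a z score to a stanine, consistent with the half-SD bands described on this card; the exact conversion rule is my assumption, not something stated in the lecture.

```python
import math

def stanine(z):
    """Map a z score to a stanine band: bands are 0.5 SD wide,
    band 5 covers -0.25 to +0.25, and the scale is clipped to 1-9."""
    band = int(math.floor(2 * z + 0.5)) + 5
    return max(1, min(9, band))

print(stanine(0.0))   # 5 (middle band)
print(stanine(0.3))   # 6
print(stanine(-2.5))  # 1 (clipped at the bottom of the scale)
```
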
16
Q

What are narrative reports?

A

Reports that describe performance with verbal labels rather than numeric scores, e.g.:

Grade 6 = strong

Grade 7 = outstanding

17
Q

What is reliability?

A

Consistency of measurement

18
Q

Just because measurements are … doesn’t mean they are valid. (But you can’t have a valid test if it is completely unreliable.)

A

Reliable

19
Q

We can estimate test reliability via:

A
  1. Internal consistency
  2. Test-retest reliability
  3. Alternate/parallel-forms reliability
  4. Inter-rater reliability
20
Q

What is internal consistency? What would you expect if your scale or test was reliable?

What are other names for internal consistency?

A

You’d expect items to correlate highly with each other to indicate that they are measuring the same concept

Other names: inter-item consistency, internal coherence

21
Q

What is the difference between alternate forms and parallel forms tests?

A

Parallel forms must meet stricter criteria (e.g., equivalent means and variances); alternate forms are simply different versions intended to be equivalent

22
Q

What is validity?

A

Validity is the extent to which the measure actually measures what it’s supposed to measure

23
Q

Give some examples of validity

A

Face validity

Forms of construct validity:
Content validity 
Criterion validity (I.e., concurrent, predictive, incremental) 
Convergent validity 
Divergent validity
24
Q

Does face validity have any psychometric benefits?

A

Yes, it has indirect uses, such as helping to prevent missing data

25
Q

Content validity example?

A

Hazard perception test: footage filmed in Brisbane/Queensland compared with footage filmed in the UK

26
Q

What is criterion related validity?

A

A judgement regarding how adequately a score on a test can be used to infer an individual’s most probable standing on some measure of interest (the criterion).
Example: surgical competence simulator test

27
Q

What is criterion contamination?

A

This is an example of a situation with circular logic. The criterion we’re using to assess the validity of our test is pre-determined by the test, thereby undermining the logic of criterion validity.

28
Q

Why is it important for both the test and its criterion to have decent reliability?

A

Because the reliability of each limits how large the validity coefficient (the correlation between test score and criterion) can be. So you can have a perfectly reliable test and criterion but NOT NECESSARILY have high validity

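One standard way to express this limit (a textbook result rather than something stated on the card): the validity coefficient is bounded by the square root of the product of the two reliabilities,

r_xy ≤ √(r_xx × r_yy)

so, for example, if the test and the criterion each have reliability .70, the validity coefficient can be at most √(.70 × .70) = .70.
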
29
Q

What is the difference between efficacy and effectiveness?

A

Efficacy is more tightly controlled. Effectiveness is more reflective of real-world outcomes (with variables not all aligned; for example, more reflective of clinical settings where therapy is not manualised and therapist adherence is not monitored).

30
Q

What is a test with multiple subscales called?

What is a test with no subscales called?

A
  1. Heterogeneous
  2. Homogeneous

31
Q

How many participants do you need for factor analysis?

A

100-200

32
Q

What is construct validity?

A

How well the scores on your test reflect the construct (I.e., the trait or characteristic) that your test is supposed to be measuring

33
Q

Describe what a confidence interval is

A

The range of scores that is likely to contain a person’s true score (margin of error)

95% CI = ± 2 SD (more precisely, ±1.96)

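A hedged sketch of how such an interval is often computed around an observed score, using the standard error of measurement (SEM = SD × √(1 − reliability)); the SEM formula and the example numbers are assumptions beyond what the card states.

```python
import math

def true_score_ci(observed, sd, reliability, z_crit=1.96):
    """95% confidence interval around an observed score:
    observed +/- z_crit * SEM, where SEM = SD * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1 - reliability)
    return observed - z_crit * sem, observed + z_crit * sem

# Hypothetical example: observed score 110, SD 15, reliability .90
print(true_score_ci(110, 15, 0.90))  # roughly (100.7, 119.3)
```
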
34
Q

If the reliable change index is greater than … then you have a statistically significant change

A

1.96

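A sketch of the usual Jacobson-Truax reliable change index, which compares the change score with the standard error of the difference. The card only gives the 1.96 threshold; the formula and the example values here are assumptions beyond it.

```python
import math

def reliable_change_index(score_pre, score_post, sd, reliability):
    """RCI = (post - pre) / S_diff, where S_diff = sqrt(2) * SEM and
    SEM = SD * sqrt(1 - reliability). |RCI| > 1.96 indicates a
    statistically significant (reliable) change."""
    sem = sd * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2) * sem
    return (score_post - score_pre) / s_diff

# Hypothetical example: pre-score 25, post-score 15, SD 10, reliability .80
rci = reliable_change_index(25, 15, 10, 0.80)
print(rci, abs(rci) > 1.96)  # about -1.58, False -> not a reliable change
```
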
35
Q

In ROC curves, as the true positive rate increases with a changed pass mark, the false positive rate does too (i.e., sensitivity increases but…)

A

Specificity decreases. Plotting the curve like this allows you to choose the pass mark you want, based on the correct hit rate and false positive rate you’re willing to tolerate

36
Q

How can you tell what a good ROC curve looks like?

A

The more the line curves away from the diagonal, the better the test is at discriminating people with the disorder from controls

37
Q

Where is the best pass mark to choose in a ROC curve (this example sorting concussed from non-concussed)

A

The point on the curve where the sum of sensitivity and specificity is highest (this maximises correct hits and correct rejections and minimises false positives/negatives)

38
Q

How should you calculate your pass mark on the ROC curve?

A

Add your sensitivity column to your specificity column in Excel, then choose whichever pass mark gives the highest value (i.e., the cut-off where the sum of sensitivity and specificity is highest)

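The same calculation the card describes in Excel, sketched in Python: pick the cut-off whose sensitivity + specificity is largest (equivalently, maximising Youden's index). The column values below are made up for illustration.

```python
# Hypothetical pass marks with their sensitivity and specificity columns
pass_marks  = [10,   12,   14,   16,   18]
sensitivity = [0.95, 0.90, 0.80, 0.65, 0.40]
specificity = [0.40, 0.60, 0.85, 0.90, 0.95]

# Sum the two columns and keep the pass mark with the highest total
totals = [sens + spec for sens, spec in zip(sensitivity, specificity)]
best = max(range(len(pass_marks)), key=lambda i: totals[i])
print(pass_marks[best], totals[best])  # 14, 1.65
```
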
39
Q

What is an excellent ROC curve score (area under the curve), and what is a fail score?

A

.90–1.0 = excellent

.50–.60 = fail

40
Q

The Stanford–Binet intelligence test measures…

A
  1. Fluid reasoning (intelligence)
  2. Crystallised intelligence
  3. Quantitative reasoning
  4. Visual-spatial reasoning
  5. Working (short term) memory

With 10 core subtests and 5 factor scores. You only want to combine the factors if the scores across the 10 subtests are similar.

41
Q

WISC-IV
Wechsler Intelligence Scale for Children

10 core subtests, 4 index groups
What are these?

A
  1. Verbal comprehension index
  2. Perceptual reasoning index
  3. Working memory index
  4. Processing speed index
42
Q

What are the three Wechsler intelligence scales?

A

WPPSI (3–7 year olds)
WISC-IV (primary school)
WAIS-III (adults)

43
Q

What is a ROC curve? (Lecture 4)

A

A ROC curve is a plot of the correct positive rate (sensitivity) versus the false positive rate (1 − specificity), where each point on the curve is a different pass mark for the test

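A minimal sketch of how the curve's points are generated: for each candidate pass mark, compute sensitivity among cases and the false positive rate (1 − specificity) among controls. The score lists are invented for illustration.

```python
# Hypothetical test scores (higher = more impaired)
cases    = [18, 22, 25, 27, 30, 31]   # e.g. concussed
controls = [10, 12, 15, 17, 20, 24]   # e.g. non-concussed

points = []
for cutoff in sorted(set(cases + controls)):
    sensitivity = sum(s >= cutoff for s in cases) / len(cases)
    false_pos_rate = sum(s >= cutoff for s in controls) / len(controls)  # 1 - specificity
    points.append((false_pos_rate, sensitivity))

# Each (false positive rate, sensitivity) pair is one point on the ROC curve
for fpr, tpr in points:
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```
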
44
Q

How do you calculate relative risk reduction?

A

(Control event rate − experimental event rate) / control event rate

45
Q

How do you calculate absolute risk reduction ?

A

Control event rate - experimental event rate
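
A small sketch putting cards 44-45 together; the event counts in the example are hypothetical.

```python
def risk_reductions(control_events, control_n, experimental_events, experimental_n):
    """Return (relative risk reduction, absolute risk reduction).
    RRR = (CER - EER) / CER;  ARR = CER - EER."""
    cer = control_events / control_n
    eer = experimental_events / experimental_n
    return (cer - eer) / cer, cer - eer

# Hypothetical trial: 20/100 events in the control group, 10/100 in the treatment group
rrr, arr = risk_reductions(20, 100, 10, 100)
print(rrr, arr)  # 0.5 (50% relative reduction), 0.1 (10 percentage points absolute)
```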