Fall 17 Midterm Flashcards

1
Q

Internal Consistency

A

Consistency of the construct across the individual items of an outcome measure.

In other words, is there consistency among the questions in the survey?

2
Q

Test-Retest

A

Consistency of a test when given to the same person, whose status on the outcome is unchanged, on two different occasions.

3
Q

Intra-rater

A

Consistency of raters, compared to themselves, on two different occasions.

4
Q

Inter-rater

A

Consistency of raters compared to each other.

5
Q

Internal Consistency Basic Study Design

A

Conduct the outcome measure on a group of people, then analyze the correlation between the individual items.

6
Q

Test-Retest Basic Study Design

A

One rater gives the test to the same people on different days.

7
Q

Intra-rater Basic Study Design

A

Several therapists give the test to the same people at different times; each therapist's scores are compared with their own earlier scores.

8
Q

Inter-rater Basic Study Design

A

Therapists measure the same participants, and their scores are compared with each other.

9
Q

Internal Consistency Common Statistical Results

A

Cronbach's alpha, a measure of correlation (ideal is between .70 and .90, because if it were 1.0 the questions would be too similar).

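To make the alpha calculation concrete, here is a minimal Python sketch (not from the course) that computes Cronbach's alpha from made-up survey responses; rows are respondents, columns are items, and all numbers are purely illustrative.

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: rows = respondents, columns = individual items
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                               # number of items
    item_variances = x.var(axis=0, ddof=1)       # variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

survey = [[4, 5, 4, 5],
          [2, 3, 2, 2],
          [5, 5, 4, 4],
          [3, 3, 3, 4],
          [1, 2, 2, 1]]
print(round(cronbach_alpha(survey), 2))          # aim for roughly .70-.90
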
10
Q

Test-Retest Common Statistical Results

A

ICC or Kappa (closer to 1 is better)

11
Q

Intra-rater Common Statistical Results

A

ICC or Kappa (closer to 1 is better)

12
Q

Inter-rater Common Statistical Results

A

ICC or Kappa (closer to 1 is better)

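Cards 10-12 all name the same agreement statistics. As one hedged illustration, here is a minimal Python sketch of Cohen's kappa for two hypothetical therapists rating the same participants on a yes/no item; the same calculation covers test-retest and intra-rater designs if the two columns are the same test (or same rater) on two occasions. The ratings are invented.

import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    labels = np.union1d(a, b)
    p_observed = np.mean(a == b)                                    # actual agreement
    p_expected = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)

rater_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # e.g. 1 = "impaired", 0 = "not impaired"
rater_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(rater_1, rater_2), 2))   # closer to 1 is better
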
13
Q

Internal Validity Appraisal Considerations

A

Sample size.
Participants have a range of diversity on the outcome measure.
Participants are stable in the characteristics of interest.
Same circumstances (consistent manner) for each assessment.
Appropriate time between measurements (e.g., 1 day).

14
Q

Construct Validity

A

Does the outcome measure measure what it is intended to measure?

Construct validity must have a theory behind it.
Therefore convergent, discriminative, and known-groups validity each have a theory behind them.

15
Q

Criterion: Concurrent

A

Two measures correlate at the same time point; both are administered at the same time.

16
Q

Criterion: Predictive

A

The outcome measure of interest is correlated with another outcome measure, or an outcome, at a later point. Therefore it has to deal with TIME: the measure is used to predict something measured LATER.

17
Q

Construct: Convergent

A

Does the outcome measure of interest correlate with another measure? Remember that this is very similar to concurrent validity, but this is construct validity, so there is a theory behind it.

You have two measures that you're expecting to measure the same thing.

18
Q

Construct: Discriminative

A

Does the outcome measure of interest NOT correlate with a measure known to measure a different construct?

You're measuring one thing, then measuring something else that you would expect to give different results, to show that the two are not measuring the same construct.

19
Q

Construct: Known Groups

A

Does the outcome measure of interest produce different results for groups of people known to be different on the construct the outcome is supposed to test?

Basically, you KNOW they will be different.
Like teachers taking a test vs. students taking a test on material the teachers haven't taught yet.

20
Q

Criterion Basic Study Design

A

Measure the outcome of interest and an established outcome measure, then correlate the two.

21
Q

Construct Basic Study Design

A

Comparing the outcome measure of interest to other measures or groups chosen on the basis of theory about the construct (there is no true gold-standard reference criterion for a construct).

22
Q

Minimal Detectable Change

A

The minimum amount of change required on an outcome measure to exceed anticipated measurement error and variability.

You're actually seeing change, and not change due to measurement error.

23
Q

Responsiveness

A

An outcome measure's ability to detect change over time.

24
Q

Minimal Clinically Important Difference

A

The minimum amount of change on an outcome measure which patients are likely to perceive as beneficial.

25
Q

Minimal Detectable Change Basic Study Design

A

Conduct a test-retest reliability study and then calculate the MDC
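
A minimal sketch of that calculation, assuming the commonly used formulas SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM; the SD and ICC values below are hypothetical.

import math

sd_scores = 8.5   # SD of scores in the test-retest sample (hypothetical value)
icc = 0.85        # test-retest reliability coefficient (hypothetical value)

sem = sd_scores * math.sqrt(1 - icc)     # standard error of measurement
mdc_95 = 1.96 * math.sqrt(2) * sem       # change needed to exceed measurement error (95% confidence)

print(round(sem, 2), round(mdc_95, 2))   # reported in points on the outcome measure's scale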

26
Q

Responsiveness Basic Study Design

A

Measure a population at two time points, in between which you expect them to change.

27
Q

Minimal Clinically Important Difference Basic Study Design

A

Measure a group likely to experience change and concurrently measure a gold standard representing meaningful change. Find the cutoff score that best detects meaningful change.

28
Q

Minimal Detectable Change Common Statistical Results

A

Points on the outcome measure scale

29
Q

Responsiveness Common Statistical Results

A

Effect Size
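
A minimal sketch of two common responsiveness effect sizes (mean change divided by baseline SD, and the standardized response mean), using made-up scores from two time points.

import numpy as np

time_1 = np.array([20, 25, 18, 30, 22, 27])   # baseline scores (made up)
time_2 = np.array([28, 30, 25, 36, 27, 35])   # follow-up scores after expected change (made up)

change = time_2 - time_1
effect_size = change.mean() / time_1.std(ddof=1)   # mean change / baseline SD
srm = change.mean() / change.std(ddof=1)           # standardized response mean

print(round(effect_size, 2), round(srm, 2))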

30
Q

Minimal Clinically Important Difference Common Statistical Results

A

Cutoff score and its sensitivity, specificity, and likelihood ratio.
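
A minimal sketch of finding such a cutoff, assuming the Youden index (sensitivity + specificity - 1) is used to pick it; the change scores and gold-standard judgments are made up.

import numpy as np

change_scores = np.array([2, 9, 4, 12, 1, 10, 11, 3, 10, 6])   # made-up change scores
improved      = np.array([0, 1, 0, 1,  0, 0,  1,  1, 1,  0])   # made-up gold-standard judgment

best = None
for cutoff in np.unique(change_scores):
    classified = change_scores >= cutoff                # "improved" according to the cutoff
    sensitivity = np.mean(classified[improved == 1])    # true positive rate
    specificity = np.mean(~classified[improved == 0])   # true negative rate
    youden = sensitivity + specificity - 1
    if best is None or youden > best[1]:
        best = (cutoff, youden, sensitivity, specificity)

cutoff, _, sens, spec = best
print(cutoff, sens, spec)          # candidate MCID with its sensitivity and specificity
print(sens / (1 - spec))           # positive likelihood ratio at that cutoff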

31
Q

Spearman Rho (Correlation -1 to 1)

A

Criterion: Concurrent
Criterion: Predictive
Construct: Convergent
Construct: Discriminative

(i.e., this statistic is used for both criterion and construct validity)

32
Q

Criterion Common Statistical Results

A

Spearman rho (correlation, -1 to 1) OR Pearson correlation coefficient (-1 to 1)
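
A minimal sketch computing both statistics with scipy on hypothetical paired scores from the measure of interest and an established criterion measure.

from scipy.stats import spearmanr, pearsonr

new_measure = [12, 18, 9, 22, 15, 30, 25, 11]    # hypothetical scores on the measure of interest
criterion   = [14, 20, 10, 25, 13, 33, 27, 12]   # hypothetical scores on an established measure

rho, rho_p = spearmanr(new_measure, criterion)   # rank-based correlation, -1 to 1
r, r_p = pearsonr(new_measure, criterion)        # linear correlation, -1 to 1

print(round(rho, 2), round(r, 2))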

33
Q

Known Groups Statistical Results

A

Analysis of variance for linear trends (p value)
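
A minimal sketch of a known-groups analysis as a one-way ANOVA across three hypothetical groups expected, on theory, to differ on the construct; the scores are made up.

from scipy.stats import f_oneway

novice       = [35, 40, 38, 42, 37]   # hypothetical scores for three groups
intermediate = [48, 52, 50, 47, 53]   # expected to differ on the construct
expert       = [60, 64, 58, 63, 61]

f_stat, p_value = f_oneway(novice, intermediate, expert)
print(round(f_stat, 2), round(p_value, 4))   # a small p value supports known-groups validity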