2 - Measurement Flashcards

1
Q

what is a discriminative instrument?

A

used to sort individuals into groups (ie based on whether or not they meet the criteria of interest), eg a diagnostic test, screening tool, or method of evaluating eligibility criteria

1
Q

what are 2 ways to determine responsiveness? which is more common?

A
  • anchor-based approach (more common) and distribution-based approach
2
Q

what is more important: reliability or validity?

A
  • see answer to forum question
3
Q

describe standard error of measurement

A
  • estimate of the measure’s ability to differentiate among patients
  • determines whether true change has occurred
  • closer to 0 is better
  • ie looking at people who haven’t changed (a blood glucose reading says x, but that doesn’t mean there is no error in the reading)
3
Q

how do I make readers understand my results for comparing btw 2 groups?

A
  • provide the mean difference and the 95% CI around that mean difference
  • tell them the MCID and whether the MCID falls inside or outside the 95% CI (inside = inconclusive, outside = conclusive)
  • provide the number needed to treat (based on the proportion of patients in the experimental vs control group who changed by an important amount)
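The NNT bullet above can be sketched in plain Python (invented proportions; `nnt` is a hypothetical helper, not from the lecture):

```python
# Hypothetical sketch: number needed to treat (NNT), the reciprocal of the
# difference in proportions of patients who improved by an important amount.
def nnt(p_experimental, p_control):
    return 1 / (p_experimental - p_control)

# If 60% of treated patients vs 40% of controls change by at least the MCID:
print(round(nnt(0.60, 0.40)))  # 5 -> treat 5 patients for 1 extra important improvement
```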
4
Q

what type of differences does mean difference look for? Is this a validity or reliability issue?

A

systematic differences (ie differences in the way people are measuring - a validity issue)

5
Q

what is disease-specific HRQOL? examples?

A
  • measures specific aspects of health (ie specific to the disease of interest)
  • can’t compare across clinical areas (only within - for example, which treatment offers more relief)
  • easier to detect change bc questions are more specific
  • eg WOMAC (for patients w osteoarthritis)
5
Q

define cost analysis

A

does not consider the effect of treatment

from chart: examines only costs but there is a comparison btw 2 or more alternatives

5
Q

define reliability in terms of the formula

A
  • a ratio of the true score variance to the true score variance plus its associated error variance
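In symbols (a standard formulation of the card’s ratio, written in terms of variances):

```latex
\text{reliability} \;=\; \frac{\sigma^2_{\text{true}}}{\sigma^2_{\text{true}} + \sigma^2_{\text{error}}}
```

Closer to 1 means less of the observed variation is measurement error.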
6
Q

what is sensitivity to change

A

the ability to measure change

7
Q

define face validity

A
  • face value (patients)
  • are questions asked reflective of what they experience with this particular disease?
8
Q

describe the information about incremental cost-effectiveness quadrants

A
  • the upper-left and lower-right quadrants are the easiest to make a decision on (in one the new treatment is clearly dominated - more costly and less effective; in the other it clearly dominates - less costly and more effective)
8
Q

what is agreement

A
  • how 2 things change according to each other, taking systematic differences into account (the y-intercept)
  • good for reliability (btw 0 and 1, 1 being perfect agreement)
  • ICC/kappa
  • can’t have more validity than reliability
9
Q

what is internal consistency reliability? most common example? what should values be at?

A
  • extent to which items on the questionnaire are associated w each other
  • eg a correlation of 100% means if you answer yes to 1, you will answer yes to the next etc - these questions are redundant, so take one out
  • values should be 0.8-0.9 (80-90%)
  • common example = Cronbach’s alpha
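As an illustration (made-up questionnaire data, plain-Python sketch, not from the lecture), Cronbach’s alpha compares the summed item variances to the variance of the total scores:

```python
# Rows are respondents, columns are questionnaire items (invented data).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(rows):
    k = len(rows[0])                               # number of items
    items = list(zip(*rows))                       # one tuple per item
    item_var = sum(variance(col) for col in items) # sum of item variances
    total_var = variance([sum(r) for r in rows])   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

data = [(4, 5, 4), (2, 3, 2), (5, 5, 4), (1, 2, 2), (3, 4, 3)]
print(round(cronbach_alpha(data), 2))  # 0.97 - items strongly associated
```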
10
Q

How does one use the standard error of measurement with a confidence interval? How do you calculate the 95% CI for a score of 64 with SEM 5?

A

SEM x 1.96 = 95% CI

SEM x 1.64 = 90% CI

SEM x 1.28 = 80% CI

  • note the multiplier is the z-value, which is constant for a given confidence level!
  • 5 x 1.96 ≈ +/- 10, therefore 95% confident that the true score is btw 54 and 74
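The worked example above can be sketched in Python (`score_ci` is a hypothetical helper; the card rounds 5 × 1.96 = 9.8 up to 10):

```python
# CI around an observed score: margin = SEM * z-value for the confidence level.
Z_95 = 1.96  # z-value for 95% confidence (1.64 -> 90%, 1.28 -> 80%)

def score_ci(observed, sem, z=Z_95):
    """Return (lower, upper) bounds of the confidence interval."""
    margin = sem * z  # half-width of the interval
    return observed - margin, observed + margin

low, high = score_ci(64, 5)
print(f"95% CI: {low:.1f} to {high:.1f}")  # 54.2 to 73.8, roughly 54 to 74
```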
10
Q

define pearson’s r, intraclass correlation coefficient (ICC), spearman’s rho, and weighted kappa wrt association vs agreement and continuous vs categorical

A

pearson’s r: association, continuous

intraclass correlation coefficient (ICC): agreement, continuous

spearman’s rho: association, categorical

weighted kappa: agreement, categorical

11
Q

what is criterion validity?

  • predictive vs concurrent
A
  • behaves as expected compared to gold standard (predictive/concurrent)
  • the correlation of a scale with some other measure of the trait or disorder (ideally a gold standard or criterion measure) * gold standard needed for this!!
  • predictive = administer new scale and see how well it predicts the event in the future
  • concurrent = simultaneously administer the new scale with the criterion measure and determine the association
13
Q

for the ICF (international classification of functioning, disability, and health), what are the 4 defining health areas? what are the modifiers?

A

1) body function: physiological/psychological (includes pain and mental disorders)
2) body structures: anatomical
3) activity: performance of a task or action
4) participation: involvement in meaningful, fulfilling, and satisfying activities
contextual factors (modifiers): age, coping strategies, social attitudes, education, experience, etc - can modify your health in any of these areas

14
Q

what is a predictive instrument?

A

used to predict the future (or the result/product of the experiment) - measures something now that will predict something happening in the future; this is an important validity indicator, eg MCAT, LSAT

15
Q

challenges: applicability (costs vary)

A
16
Q

compare and contrast self-reported function and performance based measures

A
  • both attempt to measure activity limitations
  • performance: ie walk test, strength, ROM
  • self-reported function: a patient-reported outcome measure (PRO), more clinically relevant, eg lower extremity functional scale
18
Q

what are the 3 types of analysis in a full economic evaluation?

A
  1. cost-effectiveness analysis
  2. cost-utility analysis
  3. cost-benefit analysis
20
Q

why do we use surrogate outcomes?

A
  • they increase efficiency
  • easier, faster, and cheaper to measure
20
Q
describe the distribution-based approach for measuring responsiveness
A
  • for people who aren’t expected to change (ie maybe a chronic disease) - the average of T1-T2 will be 0 (no change expected)
  • again measuring at 2 different time points
  • plot the distribution and decide a cut-off point above which significant change has occurred
  • for people who are expected to change, same thing, but this time the arbitrary line is to the left of the bell curve
21
Q

what is the tool’s metric?

A
  • interpreting your results or making sure your results are interpretable to readers
22
Q

define: precision

A
  • a measure of the extent to which repeated measurements come up with the same value
  • this is about the error - how much can you trust that the value is representative of the true score?
23
Q

what does PRO stand for? example?

A

patient reported outcome measure

eg health-related QOL

24
Q

what is a surrogate outcome? examples?

A

outcome measures that are not of direct practical importance but are believed to reflect outcomes that are important

  • it is only indirectly important to patients (they don’t care about the surrogate itself, only about the outcome it predicts)
  • these outcomes aren’t perfect; can’t conclude the surrogate causes the important outcome
  • eg: cholesterol level
24
Q

from what perspectives can costs be represented as an outcome? (4) - which is the most common?

A
  • individual
  • ministry of health (most common)
  • society (sick days etc)
  • third-party payer (insurance company)

* there is usually more than 1 of these views being represented

24
Q
describe STC wrt responsiveness
A
  • STC is a necessary but insufficient condition for responsiveness
  • the problem with responsiveness is how are we going to determine/define what is clinically important?
  • see lecture notes p 22, last slide
25
Q

what are systematic errors a measure of?

A

validity

26
Q

examples of continuous outcomes

A
  • weight, blood pressure, etc
27
Q

what does it mean if your score exceeds MDC (CI = 95%)

A
  • we can be 95% confident that a true change has occurred
  • OR upon repeated assessments, 95% of stable patients will change by less than the reported interval
28
Q

How do we use SEM to detect real change (ie change assessed over time)?

A
  • to calculate the difference btw the present score and the previous score
  • use the Minimal Detectable Change (MDC) (aka smallest detectable difference)
  • SEM x 1.96 x √2 = MDC95
  • then take the difference in scores (ie first score was 64, next was 80, so a diff of 16) and compare w MDC95
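The steps above can be sketched in Python (`mdc95` and `true_change` are hypothetical helper names; scores are from the card’s example):

```python
import math

Z_95 = 1.96  # z-value for a 95% confidence level

def mdc95(sem):
    # MDC95 = SEM * 1.96 * sqrt(2); sqrt(2) accounts for error at both time points
    return sem * Z_95 * math.sqrt(2)

def true_change(score_t1, score_t2, sem):
    """True if the observed difference exceeds MDC95 (ie real change detected)."""
    return abs(score_t2 - score_t1) > mdc95(sem)

print(round(mdc95(5), 1))        # 13.9
print(true_change(64, 80, 5))    # True: a diff of 16 exceeds MDC95
```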
29
Q

what is a patient-important outcome? examples? what part of ICF is this related to?

A
  • outcome measures that are of direct practical importance (patients consider them to be important)
  • eg: survival, pain, PROs (patient-reported outcome measure) (eg QOL, functional ability)
  • related to ICF activity/participation
30
Q

what is health-related QOL

A

an attempt to measure the broad concept of health (physical, mental, social)

30
Q

define: cost effectiveness

A
  • measurement of resource consumption and outcome of the intervention
  • requires a common outcome btw interventions being compared
  • eg effect per unit cost (life year gained per dollar spent), costs per unit of effect (cost per case detected etc)
31
Q

describe pearson’s R in terms of whether or not it is a good measure of validity

A
  • pearson’s r is good for validity (association) but it is not the best measure for reliability (precision/agreement)
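A tiny sketch of why (invented numbers, plain Python): a constant offset between two raters leaves Pearson’s r perfect even though their scores never agree:

```python
# Pearson's r measures association only: it ignores systematic differences.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

rater1 = [1, 2, 3, 4]
rater2 = [11, 12, 13, 14]  # systematic offset of +10: no agreement at all
print(pearson_r(rater1, rater2))  # 1.0 - perfect association despite the offset
```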
32
Q

What is construct validity?

  • convergent vs discriminant
A
  • like a mini-theory to explain the relationships among various behaviours or attitudes
  • more abstract than criterion validity
  • convergent = a measure of construct x correlates w other measures of the same construct (eg using participant observation and a survey to assess anger) - they change in the same way
  • discriminant = a measure of construct x does not correlate with measurements of dissimilar/unrelated constructs (eg a measurement of age should not change in the same way as a survey measurement of anger) - predicting change in one instrument while the other stays the same
34
Q

define accuracy

A
  • a measure of how close a measurement comes to a true score for a variable
  • ie how accurately a measure measures what you want it to
34
Q

what is something that can greatly enhance instrument interpretability?

A
  • knowing MID (minimally important difference)
  • this is the smallest difference in the score that informed patients have perceived as important, leading patients or clinicians to consider a change in management
35
Q

inter- vs intra- rater reliability

A
  • both = test-retest (need a time 1 and time 2 measurement - either by the same person or a diff person)
  • inter = between 2 different raters and how well they agree
  • intra = the same rater measuring the same thing at diff times (of the day, for example)
36
Q
describe mean difference
A
  • systematic difference between groups (ie not at the individual level!)
  • a t-test will give us a p-value saying whether there is a statistically significant diff btw the 2 group means, but that’s it
  • closer to 0 is better
  • ie take 100 patients and measure at t1 then colleague measures at t2 and compare results
38
Q

what is validity

A

the extent to which an instrument measures what it is intended to measure

40
Q

informing applicability - what is a sensitivity analysis?

A
  • substituting uncertainty in cost based on differences btw places (or a reflection of the uncertainty of the analysis - ie uncertainty around the treatment effect)
  • helps us to increase readership in terms of applicability
  • uncertainty around many things, could be methods of administration, unsure about proportions of patients who will experience an adverse effect, etc
41
Q

what terms go together: accuracy, precision, validity, and reliability?

A

accuracy = validity

precision = reliability

43
Q

what is reliability

A

the extent to which an instrument yields the same results in repeated administrations in a stable population

44
Q

explain pros and cons of trying to improve precision with increased measurements (n-size)

A

pro: reduces the amount of random error in the study, narrows the CI
con: if experiment contains systematic errors (procedural or measurement), these are not corrected by increasing n-size, you are simply increasing your ability to reproduce a measurement of the wrong thing!

44
Q
describe the anchor-based approach for responsiveness
A
  • measure at T1 before anything has changed, then again at T2 after (ie at time 2 use the original Q along with a global rating of change scale)
  • can get an idea of the MCID (the difference in averages at t1 and t2 for people who scored 2 or 3 on the GRC)
  • can also give yourself some construct validity using this method
  • for people whose scores are the same, can’t use them for responsiveness but can use them for reliability
  • see notes pg 23, slides 1 and 2
45
Q

define content validity

A
  • representative of the content domains of the construct (experts)
  • the same as face validity but from experts (broader experience w the disease as opposed to a single patient)
47
Q

define cost minimization analysis

A
  • between 3b and 4 on chart
  • when the effect of treatment is similar across groups (no longer need to consider effect bc we know it’s the same, so this is better than a strict cost analysis)
48
Q

what are spearman and pearsons r examples of?

A

criterion validity

  • look at how strongly related or correlated 2 measures are (expected and new measure)
49
Q

what is generic health-related QOL

A
  • measures general health status, very vague, can span across diff medical conditions (can compare across diff states of health)
  • relevant to all health states
  • eg SF-12
51
Q

what is an evaluative instrument?

A

used to evaluate change, can track change over time (must have properties that can detect change) - therapy studies use this

52
Q

what are the 4 features of a good outcome measure?

A
  1. validity
  2. reliability
  3. sensitivity to change
  4. responsiveness
54
Q

what are the 4 ways of measuring validity?

A
  1. face
  2. content
  3. criterion
  4. construct
55
Q

another way of defining costs = methods of evaluation - review chart!

A
  • top left 2, if there is no comparison group
  • bottom left = RCT
  • bottom right = study with more than 1 group and also includes both cost and effectiveness
  • rarely see just cost analysis
56
Q

what is association

A
  • how 2 things change according to each other
  • good for validity (btw 0 and 1, 1 being perfect association)
  • can’t have more validity than reliability
57
Q

define: cost utility analysis - common measures

A
  • the value you place on health benefits and avoiding poor health outcomes (measuring the value people place on certain health outcomes)
  • how they would value avoiding another poor outcome - not direct, requires different measures
  • can measure impacts of different interventions on different diseases
  • common measures: EQ5D (most common) or HUI or QALY - quality-adjusted life years (utility!)
58
Q

difference btw kappa and weighted kappa

A
  • kappa is dichotomous (categorical) and weighted kappa is ordered
59
Q

what is responsiveness

A

the ability to measure clinically meaningful change

60
Q

what makes a surrogate endpoint valid?

A
  • a causal relationship btw changes in the surrogate and changes in the patient-important outcome (strongly predictive)
62
Q

what are the common summary measures for reporting reliability? association or agreement for bottom 3?

A
  • mean difference and standard error of measurement (for both, the greater the deviation from 0, the worse the agreement)
  • pearson’s r (association), intraclass correlation coefficient (agreement), and kappa/weighted kappa (agreement) (range from 0 no agreement to 1 perfect agreement)
63
Q
what is sensitivity to change (STC)/how is it measured?
A
  • often represented by standard response mean (SRM)
  • administered before and after change in a population expected to change
  • calculate mean change (T1avg-T2avg) over the SD of change (>1 = good)
  • this is asking whether we can see the signal over the noise (>1); if so, we have an instrument that is sensitive to change
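The SRM calculation above can be sketched in Python (invented scores for a population expected to change; `srm` is a hypothetical helper):

```python
# Standardized response mean (SRM) = mean change / SD of change scores.
def srm(t1, t2):
    changes = [b - a for a, b in zip(t1, t2)]
    mean_change = sum(changes) / len(changes)
    sd_change = (sum((c - mean_change) ** 2 for c in changes)
                 / (len(changes) - 1)) ** 0.5  # sample SD of change
    return mean_change / sd_change

t1 = [40, 35, 50, 45, 38]  # scores before treatment (made up)
t2 = [55, 48, 60, 62, 50]  # scores after treatment
print(srm(t1, t2) > 1)  # True - signal exceeds noise, so sensitive to change
```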
64
Q

ICF model of disability

A
66
Q

define: cost benefit

A
  • value of resources used up compared to those saved or created (eg willingness to pay)
  • rare to use this one
67
Q

what are dichotomous outcomes? disadvantages/advantages to this?

A
  • referred to as events (dead/alive, healed/not healed, etc)
  • disadvantage = can’t detect change easily
  • advantage = easily interpretable (even without CIs)