WEEK THREE: MEASUREMENT, RELIABILITY, VALIDITY Flashcards
Measurement
If a thing exists, it exists in some amount; and if it exists in some amount, it can be measured
Count
the number of occurrences of an event
Ratio
- relationship between 2 numbers
- Numerator not necessarily included in the denominator
- Ex. (binary) sex ratio
Odds
The probability of an event occurring relative to it not occurring
Rate
*Speed of occurrence of an event over time
*Numerator: number of EVENTS observed in a given time
*Denominator: population in which the events occur
Prevalence Rate
the proportion of the population (or population sample or sample subset) that has a given disease or other attribute at a specified time
* Obtained from cross sectional studies
Formula of Point prevalence rate
= # with disease at specific time/ population at same time
Period Prevalence Rate Formula
= # with disease during a specified time period / total defined population during the same period
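The point and period prevalence formulas can be sketched as simple proportions (a minimal sketch; the function names and numbers are mine, not from the deck):

```python
def point_prevalence(cases_at_time, population_at_time):
    # = # with disease at a specific time / population at the same time
    return cases_at_time / population_at_time

def period_prevalence(cases_in_period, population_in_period):
    # = # with disease during a period / total defined population in that period
    return cases_in_period / population_in_period

# Hypothetical: 150 existing cases in a town of 10,000 on survey day
print(point_prevalence(150, 10_000))  # 0.015, i.e. 1.5%
```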
What is the Incidence Rate
the proportion of the population at risk that develops a given disease or other attribute during a specified time period
* Obtained from cohort (longitudinal) studies
Formula of Incidence Rate
= # of new events during a specified time period / population “at risk”
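The incidence formula above as a sketch (numbers are hypothetical):

```python
def incidence_rate(new_cases, population_at_risk):
    # = # of new events during a specified time period / population at risk
    return new_cases / population_at_risk

# Hypothetical: 40 new cases over one year among 8,000 people at risk
print(incidence_rate(40, 8_000))  # 0.005
```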
What are the 4 hallmarks of Health studies
1) A research question/plausible theory
2) A well-thought-out design to address the research question
3) Measurement of exposure and outcome
4) Analysis to compare groups
Relative Risk
- Tells us how many times as likely it is that someone who is ‘exposed’ to something will experience a particular health outcome compared to someone who is not exposed
- Tells us about the strength of an association
- Can be calculated using any measure of disease occurrence:
o Prevalence
o Incidence rate
Random Error vs. Systematic Error
random: due to chance
systematic: due to recognizable source
- can have both
Calculation of relative risk
= [a/(a+b)] / [c/(c+d)]
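The relative risk formula uses the standard 2x2-table labels (a = exposed with outcome, b = exposed without, c = unexposed with outcome, d = unexposed without). A sketch with invented numbers:

```python
def relative_risk(a, b, c, d):
    # RR = [a/(a+b)] / [c/(c+d)]: risk in the exposed over risk in the unexposed
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    return risk_exposed / risk_unexposed

# Hypothetical: 20/100 exposed vs 10/100 unexposed develop the outcome
print(relative_risk(20, 80, 10, 90))  # 2.0: exposed twice as likely
```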
Non Differential Misclassification
- when the probability of individuals being misclassified is equal across all groups in the study
- Usually weakens associations – brings effect estimates (RR, OR, AR) closer to the null value
- 10%
Differential Misclassification
misclassification of exposure is not equal between subjects that have or do not have the health outcome, or misclassification of the health outcome is not equal between exposed and unexposed subjects
- 20%
Precision VS. Accuracy In Measurement
A measurement tool/scale with high precision is reliable
A measurement tool/scale with high accuracy is valid
Insufficient precision occurs because
- The measurement tool is not precise enough
- (independent) interviewers rate the same person differently using the same scale (inadequate training)
- The same interviewer rates the same person differently at different times
Incidence vs. Prevalence
Incidence
Measures frequency of disease onset
- What is new
Prevalence
Measures population disease status
- What exists
Internal Consistency
- present when the items in a survey measure various aspects of the same concept
Ex. “I enjoy eating most fruits” and “I do not like to eat fruit.” If the answer to the 1st question is yes, the answer to the 2nd should be no. This consistency is evidence that the responses are reliable.
Cronbach’s Alpha
measure of internal consistency used with variables that have ordered responses
Expressed as a value between 0 and 1
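Cronbach's alpha is conventionally computed as k/(k-1) × (1 − sum of item variances / variance of total scores). A sketch with invented responses (function name and data are mine):

```python
from statistics import pvariance

def cronbach_alpha(items):
    # items: one list of scores per survey item, respondents in the same order
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item scale (ordered 1-5 responses) from 4 respondents
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 2))  # 0.82
```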
Reliability
refers to whether an assessment instrument gives the same results each time it is used in the same setting with the same type of subjects
Means consistent or dependable results
Part of the assessment of validity
Validity
Refers to how accurately a study answers the study question or the strength of the study conclusions
For outcome measures (i.e., surveys or tests), validity is the accuracy of measurement
How can Researchers Enhance the Validity of their Assessment Instruments
- Do a literature search and use previously developed outcome measures
- If the instrument must be modified for use with subjects or setting, modify and describe how
- Test reliability
Interrater Reliability
studies the effect of different raters or observers using the same tool and is estimated by percent agreement, kappa (for binary outcomes), or Kendall's tau
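For two raters with binary outcomes, Cohen's kappa compares observed agreement with the agreement expected by chance. A sketch (ratings are invented):

```python
def cohens_kappa(rater1, rater2):
    # kappa = (p_observed - p_expected) / (1 - p_expected) for 0/1 ratings
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1_yes = sum(rater1) / n
    p2_yes = sum(rater2) / n
    # chance agreement: both say yes, or both say no
    p_exp = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical: two raters classify 10 subjects, agreeing on 8
r1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
r2 = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(r1, r2), 2))  # 0.6
```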
Construct Validity
Judgment based on the accumulation of evidence from studies using a specific measuring instrument
Content Validity
Addresses how well the items developed to operationalize a construct provide an adequate and representative sample of all the items that measure the construct
Item Response Theory
Provides an alternative framework for understanding measurement and strategies for judging the quality of measuring instruments
Face Validity
present when content experts and users agree that a survey instrument will be easy for study participants to understand and correctly complete
Kuder- Richardson Formula 20 (KR-20)
measure of internal consistency used with binary variables
Expressed as a value between 0 and 1
Scores near 1 indicate an assessment tool with minimal random error and high reliability
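KR-20 is conventionally computed as k/(k-1) × (1 − Σ pq / variance of total scores), where p is the proportion answering an item correctly and q = 1 − p. A sketch with invented 0/1 data:

```python
from statistics import pvariance

def kr20(items):
    # items: one list of 0/1 scores per test item, respondents in the same order
    k = len(items)
    n = len(items[0])
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    pq = sum((sum(s) / n) * (1 - sum(s) / n) for s in items)
    return k / (k - 1) * (1 - pq / pvariance(totals))

# Hypothetical 3-item test taken by 4 respondents
items = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 0, 0]]
print(round(kr20(items), 2))  # 0.63
```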