Week 1 Flashcards
System Validation
Ensures that measurement tools provide accurate and reliable data
Concurrent Validation
Assessment of a new system compared to an existing system while recording simultaneously
Accuracy
Ability of a system to measure a quantity close to its true value, which is typically approximated by a 'gold-standard' reference system
Reliability
The extent to which measurements can be replicated; consistency or reproducibility
Precision
Spread of repeated measurements
Measuring accuracy
Agreement and Correlation
- Agreement is the more appropriate estimate of accuracy
Agreement
How closely the measurements of two systems match each other across a range of values. Assesses whether measurement of the same variable by two different systems produces similar results
Pearson Correlation
Measures the strength of the linear relationship between pairs of variables; only interpreted if the p-value is significant
How is Pearson correlation a misleading measure of agreement?
- Only measures the linear relationship between two sets of observations
- Does not capture bias or systematic differences between systems
- One system can be consistently off by a fixed number of units and still show a perfect correlation
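The last point is easy to demonstrate. In this made-up example, a hypothetical system B reads exactly 5 units higher than system A on every trial, yet the Pearson correlation is still perfect:

```python
import numpy as np

# Simulated data: system B has a constant +5 unit systematic offset vs system A
rng = np.random.default_rng(0)
system_a = rng.normal(50, 10, 30)
system_b = system_a + 5.0

r = np.corrcoef(system_a, system_b)[0, 1]   # Pearson r
bias = np.mean(system_b - system_a)         # mean difference between systems

# r is ~1.0 (perfect linear relation) even though every reading disagrees by 5 units
```

Correlation answers "do the systems rise and fall together?", not "do they give the same number?", which is why agreement statistics are needed.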
Bland-Altman Analysis
Analysis with 95% limits of agreement (LOA) that measures the agreement between two measurement systems rather than comparing the new system to a 'perfect' one. It is used to judge whether the reference system and the new system are interchangeable.
How to calculate the 95% LOA
95% LOA = bias +/- 1.96 x SD of the differences
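The formula can be applied directly to paired measurements. A minimal sketch with made-up numbers (the arrays are illustrative, not real data):

```python
import numpy as np

# Hypothetical paired measurements from a reference system and a new system
reference = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.0])
new_system = np.array([10.6, 11.9, 10.1, 12.0, 11.4, 11.3])

diffs = new_system - reference
bias = diffs.mean()              # mean difference between systems
sd = diffs.std(ddof=1)           # sample SD of the differences

loa_lower = bias - 1.96 * sd     # lower 95% limit of agreement
loa_upper = bias + 1.96 * sd     # upper 95% limit of agreement
```

`ddof=1` gives the sample standard deviation, which is the usual choice when the pairs are a sample from a larger population.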
Challenges with BA analysis
- No decisive p-value
- No standard for acceptable LOA
- Researchers must establish a priori acceptability limits based on clinical relevance or prior studies to define the boundaries
A priori acceptability limits
- Determined before the study, based on the expected difference between healthy and unhealthy populations
- Comparing healthy and unhealthy populations quantifies the acceptability criteria for the intended use case of the new measurement system
- If the calculated LOA are narrower than the a priori acceptability criteria, then we can conclude acceptable agreement
- If the calculated LOA are narrower than the a priori acceptability criteria, then we can conclude acceptable agreement
BA Analysis Steps
- Select a priori acceptability criteria
- Calculate the difference between system A and system B
- Calculate the mean of system A and system B for each trial
- Calculate the mean difference (Bias)
- Calculate the 95% limits of agreement (LOA)
- Plot the mean (x-axis) against the difference (y-axis)
- Plot the calculated LOA
- Evaluate:
- Bias
- Calculated LOA vs acceptability criteria
Reliability Theory
- All measurements are made with some degree of error
- Any time a score is recorded the observed score is the sum of the true score and error
Forms of measurement Error
RANDOM ERROR: noise or unpredictable error; averages out to zero over time and can be mitigated by taking multiple measurements
SYSTEMATIC ERROR: scores trend up or down across repeated measurements; directional, and usually corrected by a simple additive or subtractive offset
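A small simulation (with arbitrary numbers) illustrates both behaviors: averaging shrinks random error toward zero, while a systematic offset must be estimated and subtracted:

```python
import numpy as np

rng = np.random.default_rng(42)
true_score = 100.0

# Random error: zero-mean noise; averaging many trials recovers the true score
random_obs = true_score + rng.normal(0, 2, 1000)

# Systematic error: a constant directional offset on top of the noise,
# removed by estimating the bias and subtracting it
systematic_obs = true_score + 3.0 + rng.normal(0, 2, 1000)
estimated_bias = systematic_obs.mean() - true_score
corrected = systematic_obs - estimated_bias
```

In practice the true score is unknown, so the bias would be estimated against a gold-standard system rather than against the true value as done here.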
Test re-test reliability
- To assess whether scores are similar over time
- The measurement is administered to a sample of participants and then repeated at least once at some other time
reliability equation
reliability = true variance/ (true variance + error variance)
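The equation can be checked with a quick simulation (the means and variances below are arbitrary): true scores vary between subjects, random error is added on top, and reliability is the fraction of observed variance that is true variance:

```python
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.normal(60, 5, 500)              # between-subject "true" variance (~25)
observed = true_scores + rng.normal(0, 3, 500)    # plus random measurement error (var 9)

true_var = true_scores.var(ddof=1)
error_var = 3.0 ** 2                              # known noise variance in this simulation
reliability = true_var / (true_var + error_var)   # expected ~25 / 34
```

With no error variance the ratio is 1 (perfect reliability); as error grows relative to true variance, the ratio falls toward 0.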
Intraclass correlation coefficient
Quantifies reliability
- Calculates an index from the same variable measured on multiple occasions, i.e. within a group/class
Interclass correlation coefficient
Correlation that assesses whether two variables from different classes are related
What are the 3 models of ICC
Model 1: Some participants are measured by different raters
Model 2: Participants are measured by the same raters, but can generalize the reliability to other raters of the same type
Model 3: Participants are measured by the same set of raters, but the raters are the only raters of interest
What are the two types of ICCs
Type 1: single measurement
Type k: mean of k measurement or trials
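As an illustration of a single-measurement ICC, here is a sketch of the two-way random-effects form (the Shrout & Fleiss ICC(2,1), matching Model 2 / Type 1 above) computed from two-way ANOVA mean squares. The data array is hypothetical:

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, single-measurement ICC.
    data: n_subjects x k_raters array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-rater means
    # Mean squares from a two-way ANOVA without replication
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_error = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Identical ratings from both raters -> ICC of 1
icc = icc_2_1([[1, 1], [2, 2], [3, 3]])
```

Libraries such as `pingouin` provide all the ICC models and confidence intervals; this sketch only shows where the "within-class" variance components come from.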
Types of challenges with test re-test reliability
Participant factors
Equipment and measurement factors
Environment and protocol factors
Participant factors of test re-test reliability
BIOLOGICAL VARIABILITY
- Natural fluctuations in trial-to-trial movement patterns
- Population considerations, health/skill status
- Solution: Repeated trials, increased sample size, warm-up, inclusion/exclusion criteria
LEARNING EFFECT
- Task adaptation through repeated trials
- Solution: Familiarization trials, randomization of conditions
FATIGUE, RECOVERY, AND PSYCHOLOGICAL:
- cumulative fatigue, motivation/boredom
- solution: spacing of trials, adequate rest between sessions, clear instructions
Equipment and measurement factors of test re-test reliability
INSTRUMENT PLACEMENT CONSISTENCY
- Small misalignments of sensors result in different outputs
- Solution: use anatomical landmarks, same personnel for set-up, mark placements
SYSTEM CALIBRATION AND SIGNAL DRIFT
- Incorrect calibration or unaccounted-for sensor drift results in different outputs
- Solution: pre-collection calibration, drift measurements
SAMPLING RATE
- low sampling rate may miss movement patterns or events of interest in the signal
- Solution: consult literature, pilot collections
Environment and protocol factors in test re-test reliability
TESTING ENVIRONMENT
- Surface variability, attire, room temp, electrical interference
- Solution: consistent environmental conditions and attire, standardized protocol, checklist
TIMING
- Long intervals between testing can introduce physiological changes (e.g. strength gains, injury)
- Solution: minimize spacing between testing, consistent time of day