Week 1 Flashcards

1
Q

System Validation

A

Ensures that measurement tools provide accurate and reliable data

2
Q

Concurrent Validation

A

Assessment of a new system compared to an existing system while both record simultaneously

3
Q

Accuracy

A

The ability to measure a quantity close to its true value, which is approximated by a ‘gold-standard’ system

4
Q

Reliability

A

The extent to which measurements can be replicated; consistency or reproducibility

5
Q

Precision

A

Spread of repeated measurements

6
Q

Measuring accuracy

A

Agreement and Correlation
- Agreement is the more appropriate estimate of accuracy

7
Q

Agreement

A

How closely the measurements of two systems match each other across a range of values. Assesses whether measurements of the same variable by two different systems produce similar results

8
Q

Pearson Correlation

A

Measures how strongly pairs of variables are related; only interpreted if the p-value is significant

9
Q

How is Pearson correlation a misleading measure of agreement

A
  • Only measures the linear relationship between two sets of observations
  • Does not measure bias or systematic difference between systems
  • One system can consistently be off by a fixed number of units and still show a strong correlation (see the sketch below)
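
A minimal sketch of this pitfall in Python (assuming NumPy and SciPy are available; the 5-unit offset and the simulated data are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Reference system and a new system that reads ~5 units high on every trial
reference = rng.normal(loc=50, scale=10, size=30)
new_system = reference + 5 + rng.normal(scale=0.5, size=30)

r, p = stats.pearsonr(reference, new_system)
bias = np.mean(new_system - reference)

print(f"Pearson r = {r:.3f} (p = {p:.3g})")        # r is close to 1
print(f"Mean difference (bias) = {bias:.1f} units")  # yet the systems disagree by ~5 units
```
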
10
Q

Bland-Altman Analysis

A

Analysis with 95% limits of agreement (LOA) measures the agreement between two measurement systems instead of comparing the new system to a perfect system. This assesses whether the reference system and the new system can be used interchangeably.

11
Q

How to calculate 95% LOA

A

95% LOA = bias ± 1.96 × SD of the differences
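
A minimal sketch of the calculation in Python (the paired values are hypothetical):

```python
import numpy as np

# Paired measurements from the reference system (A) and the new system (B)
system_a = np.array([10.2, 12.5, 9.8, 11.4, 13.0])
system_b = np.array([10.9, 13.1, 10.1, 12.2, 13.8])

differences = system_b - system_a
bias = differences.mean()
sd_diff = differences.std(ddof=1)  # sample SD of the differences

loa_lower = bias - 1.96 * sd_diff
loa_upper = bias + 1.96 * sd_diff
print(f"Bias = {bias:.2f}, 95% LOA = [{loa_lower:.2f}, {loa_upper:.2f}]")
```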

12
Q

Challenges with BA analysis

A
  • No decisive p-value
  • No standard for acceptable LOA
  • Researchers must establish a priori acceptability limits based on clinical relevance or prior studies to define the boundaries
13
Q

A priori acceptability limits

A
  • Determined based on an expected difference between healthy and unhealthy populations prior to the study
  • The comparison between healthy and unhealthy populations provides some quantification of the acceptability criteria for the intended future use case of the new measurement system
  • If the calculated LOA are narrower than the a priori acceptability criteria, then we can conclude acceptable agreement
14
Q

BA Analysis Steps

A
  1. Select a priori acceptability criteria
  2. Calculate the difference between system A and system B
  3. Calculate the mean of system A and system B for each trial
  4. Calculate the mean difference (Bias)
  5. Calculate the 95% limits of agreement (LOA)
  6. Plot the mean (x-axis) against the difference (y-axis)
  7. Plot the calculated LOA
  8. Evaluate:
    - Bias
    - Calculated LOA vs acceptability criteria (a worked sketch of these steps follows below)
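
A minimal sketch of these steps in Python (assuming NumPy and Matplotlib; the paired trials and the 1.0-unit a priori criterion are invented for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Step 1: a priori acceptability criterion (hypothetical, in measurement units)
acceptable_limit = 1.0

# Paired trials from system A (reference) and system B (new)
system_a = np.array([21.0, 24.5, 19.8, 26.2, 23.1, 20.7, 25.4, 22.9])
system_b = np.array([21.6, 24.9, 20.5, 26.8, 23.0, 21.4, 25.9, 23.6])

# Steps 2-3: per-trial difference and mean
diff = system_b - system_a
mean = (system_a + system_b) / 2

# Steps 4-5: bias and 95% limits of agreement
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
loa_upper, loa_lower = bias + half_width, bias - half_width

# Steps 6-7: Bland-Altman plot with bias and LOA lines
plt.scatter(mean, diff)
plt.axhline(bias, color="k", label=f"bias = {bias:.2f}")
plt.axhline(loa_upper, color="r", linestyle="--", label="95% LOA")
plt.axhline(loa_lower, color="r", linestyle="--")
plt.xlabel("Mean of systems A and B")
plt.ylabel("Difference (B - A)")
plt.legend()
plt.show()

# Step 8: compare the calculated LOA against the a priori criterion
agreement_ok = abs(loa_upper) <= acceptable_limit and abs(loa_lower) <= acceptable_limit
print("Acceptable agreement" if agreement_ok else "Agreement outside a priori limits")
```
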
15
Q

Reliability Theory

A
  • All measurements are made with some degree of error
  • Any time a score is recorded, the observed score is the sum of the true score and measurement error (observed = true + error)
16
Q

Forms of measurement Error

A

RANDOM ERROR: Noise or unpredictable error; averages out to zero over time and can be mitigated by taking multiple measurements
SYSTEMATIC ERROR: Scores trend up or down over multiple measurements; directional and usually corrected by simple addition or subtraction
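
A short simulation sketch (arbitrary values) showing why averaging mitigates random error but not a systematic offset:

```python
import numpy as np

rng = np.random.default_rng(1)
true_score = 100.0

# Random error: zero-mean noise, shrinks toward zero when trials are averaged
random_trials = true_score + rng.normal(loc=0.0, scale=2.0, size=50)

# Systematic error: a constant directional offset that averaging cannot remove
systematic_trials = true_score + 3.0 + rng.normal(loc=0.0, scale=2.0, size=50)

print(f"Mean with random error only: {random_trials.mean():.2f}")      # ~100
print(f"Mean with systematic offset: {systematic_trials.mean():.2f}")  # ~103; fix by subtracting the offset
```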

17
Q

Test re-test reliability

A
  • To assess whether scores are similar over time
  • The measurement is administered to a sample of participants and then repeated at least once at some other time
18
Q

Reliability equation

A

reliability = true variance / (true variance + error variance)
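
Worked example with hypothetical numbers: if the true variance is 9 and the error variance is 1, reliability = 9 / (9 + 1) = 0.90, i.e. 90% of the observed variance reflects true differences between participants rather than error.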

19
Q

Intraclass correlation coefficient

A

Quantifies reliability
- Calculates an index from the same variable measured on multiple occasions, within a group/class
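
A minimal sketch of one common form, ICC(2,1) (two-way random effects, absolute agreement, single measurement), computed from a participants × sessions matrix with NumPy; the scores are invented:

```python
import numpy as np

# Rows = participants, columns = repeated sessions (hypothetical test-retest data)
scores = np.array([
    [9.0, 9.3],
    [7.5, 7.9],
    [8.2, 8.0],
    [6.9, 7.4],
    [9.6, 9.5],
])
n, k = scores.shape

grand_mean = scores.mean()
row_means = scores.mean(axis=1)   # per-participant means
col_means = scores.mean(axis=0)   # per-session means

# Two-way ANOVA mean squares
ms_rows = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
ms_cols = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)
residual = scores - row_means[:, None] - col_means[None, :] + grand_mean
ms_error = np.sum(residual ** 2) / ((n - 1) * (k - 1))

# ICC(2,1): two-way random effects, absolute agreement, single measurement
icc_2_1 = (ms_rows - ms_error) / (
    ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```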

20
Q

Interclass correlation coefficient

A

A correlation that assesses whether two variables from different classes are related

21
Q

What are the 3 models of ICC

A

Model 1: Each participant may be measured by a different set of raters
Model 2: Participants are all measured by the same raters, but the reliability can be generalized to other raters of the same type
Model 3: Participants are all measured by the same set of raters, and those raters are the only raters of interest

22
Q

What are the two types of ICCs

A

Type 1: Single measurement
Type k: Mean of k measurements or trials

23
Q

Types of challenges with test re-test reliability

A

Participant factors
Equipment and measurement factors
Environment and protocol factors

24
Q

Participant factors of test re-test reliability

A

BIOLOGICAL VARIABILITY
- Natural fluctuations in trial-to-trial movement patterns
- Population considerations, health/skill status
- Solution: Repeated trials, increased sample size, warm-up, inclusion/exclusion criteria
LEARNING EFFECT
- Task adaptation through repeated trials
- Solution: Familiarization trials, randomization of conditions
FATIGUE, RECOVERY, AND PSYCHOLOGICAL FACTORS
- Cumulative fatigue, motivation/boredom
- Solution: Spacing of trials, adequate rest between sessions, clear instructions

25
Q

Equipment and measurement factors of test re-test reliability

A

INSTRUMENT PLACEMENT CONSISTENCY
- Small misalignments of sensors result in different outputs
- Solution: Use anatomical landmarks, same personnel for set-up, mark placements
SYSTEM CALIBRATION AND SIGNAL DRIFT
- Incorrect calibration or unaccounted-for sensor drift results in different outputs
- Solution: Pre-collection calibration, drift measurements
SAMPLING RATE
- Low sampling rate may miss movement patterns or events of interest in the signal
- Solution: Consult literature, pilot collections

26
Q

Environment and protocol factors in test re-test reliability

A

TESTING ENVIRONMENT
- Surface variability, attire, room temperature, electrical interference
- Solution: Consistent environmental conditions and attire, standardized protocol, checklist
TIMING
- Long intervals between testing can introduce physiological changes (e.g., strength gains, injury)
- Solution: Minimize spacing between testing sessions, consistent time of day