21 - Repeated Assessments Flashcards

1
Q

Benefits of Repeated Assessments

A
  • Examines within-person change to evaluate causality
  • Examines time effects (development, sensitive periods)

2
Q

Possible Reasons for X-Y Correlation

A
  1. X causes Y
  2. Y causes X
  3. Confound: a third variable influences both X, Y
  4. Spurious: the association is false (e.g., a chance finding)
3
Q

Benefit of Longitudinal Design

A
  • Tests whether a predictor at an earlier timepoint predicts a later outcome
  • Uses each person as their own control (within-person comparison)

4
Q

Internal Validity

A

Extent to which a causal inference is justified from the research

5
Q

Longitudinal Approaches, Ranked Worst to Best

A
  1. Cross-Sectional: one timepoint, cannot establish causality
  2. Lagged Association: X at T1 predicts Y at T2
  3. Lagged Association Controlling for Prior Levels: X at T1 predicts Y at T2, controlling for Y at T1 (so X predicts change in Y)
  4. Lagged Association Controlling for Prior Levels, Testing Both Directions of Effects Simultaneously: addresses the chicken/egg problem and directionality (and, if bidirectional, the magnitude of each effect); see the regression sketch after this list
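
As a concrete illustration of approaches 2 through 4, here is a minimal Python sketch using ordinary least-squares regressions on simulated data (variable names x1, y1, x2, y2 are hypothetical; in practice the two directional equations of approach 4 are usually estimated jointly in an SEM):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated example: X and Y measured at two timepoints (T1, T2).
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
y1 = 0.3 * x1 + rng.normal(size=n)
x2 = 0.5 * x1 + 0.1 * y1 + rng.normal(size=n)
y2 = 0.5 * y1 + 0.3 * x1 + rng.normal(size=n)
df = pd.DataFrame({"x1": x1, "y1": y1, "x2": x2, "y2": y2})

# Approach 2: simple lagged association (X at T1 -> Y at T2)
lagged = smf.ols("y2 ~ x1", data=df).fit()

# Approach 3: control for prior levels of Y, so x1 predicts *change* in Y
lagged_ctrl = smf.ols("y2 ~ x1 + y1", data=df).fit()

# Approach 4: also test the reverse direction (Y at T1 -> X at T2)
reverse_ctrl = smf.ols("x2 ~ y1 + x1", data=df).fit()

print(lagged_ctrl.params)   # cross-lagged effect of X on change in Y
print(reverse_ctrl.params)  # cross-lagged effect of Y on change in X
```
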
6
Q

Test of Mediation

A
  • Evaluate whether X predicts M at a later timepoint when controlling for earlier M, and whether the X-Y association is better accounted for by M (see the regression sketch below)
  • Use MEM (mixed-effects models) or SEM
  • Challenge: the timepoints and time scales of the measures may differ
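
A minimal regression sketch of this logic on simulated data (hypothetical variable names; a real analysis would use MEM/SEM and a formal indirect-effect test, e.g., a bootstrapped a*b product):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated example: X at T1, mediator M at T1/T2, outcome Y at T1/T2.
rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
m1 = rng.normal(size=n)
y1 = rng.normal(size=n)
m2 = 0.4 * x1 + 0.5 * m1 + rng.normal(size=n)
y2 = 0.4 * m2 + 0.5 * y1 + rng.normal(size=n)
df = pd.DataFrame({"x1": x1, "m1": m1, "m2": m2, "y1": y1, "y2": y2})

# a path: X (T1) predicts later M, controlling for earlier M
path_a = smf.ols("m2 ~ x1 + m1", data=df).fit()

# b path: later M predicts later Y, controlling for X and earlier Y;
# a small x1 coefficient here is consistent with mediation through M
path_b = smf.ols("y2 ~ m2 + x1 + y1", data=df).fit()

print(path_a.params["x1"])  # a path
print(path_b.params["m2"])  # b path
print(path_b.params["x1"])  # direct effect of X after accounting for M
```
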
7
Q

Use if you want to observe growth over time

A

RAW SCORES (not transformed/normed scores)

8
Q

Score Transformations/Norms

A

T score: mean 50, SD 10
z score: mean 0, SD 1
Standard score: mean 100, SD 15

*These DON'T allow you to observe growth; for that, you need raw scores
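
A minimal sketch of these transformations (function names are illustrative). Because each is re-standardized against a norming sample, often within age group, a child whose raw ability increases can keep the same transformed score, which is why raw scores are needed to observe growth:

```python
import numpy as np

def to_z(raw):
    """z scores: mean 0, SD 1 within the norming sample."""
    raw = np.asarray(raw, dtype=float)
    return (raw - raw.mean()) / raw.std(ddof=1)

def to_t(raw):
    """T scores: mean 50, SD 10."""
    return 50 + 10 * to_z(raw)

def to_standard(raw):
    """Standard scores: mean 100, SD 15."""
    return 100 + 15 * to_z(raw)

raw = np.array([12, 15, 18, 22, 25])
print(to_z(raw))
print(to_t(raw))
print(to_standard(raw))
```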

9
Q

Strengthening Inference of Change

A
  • Large magnitude of difference between scores
  • Measurement error (unreliability) at both timepoints is small (reduce unreliability by combining multiple measures; see the sketch after this list)
  • Measurement is invariant across time (same meaning, comparable scale)
    - Assess using SEM (intercepts, factor loadings) or IRT (differential item functioning, i.e., difficulty/discrimination)
  • No evidence for confounds of change (practice effects, cohort effects, time-of-measurement effects)
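
To illustrate the "combine multiple measures" point, a minimal sketch (simulated data, hypothetical function name) showing the internal consistency of a composite of several noisy indicators:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a composite; `items` is an (n_persons, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated example: three noisy indicators of the same construct
rng = np.random.default_rng(2)
true_score = rng.normal(size=300)
items = np.column_stack([true_score + rng.normal(scale=1.0, size=300) for _ in range(3)])

print(cronbach_alpha(items))  # the composite is more reliable than any single indicator
```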

10
Q

Difference Scores

A

  • Less reliable than each individual measure, and increasingly so the more highly the two measures are correlated (see the formula sketch after this list)
  • Reliability of a difference score depends on:
    1. the reliability of the individual measures
    2. whether the variability of true individual differences (true change) is large
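
A minimal sketch of the classical-test-theory formula behind these points, assuming equal variances of the two measures (function name and numbers are illustrative):

```python
def difference_score_reliability(r_xx, r_yy, r_xy):
    """Reliability of D = X - Y under classical test theory, assuming equal variances."""
    return ((r_xx + r_yy) / 2 - r_xy) / (1 - r_xy)

# Reliable measures, modest correlation: difference score is still usable
print(difference_score_reliability(r_xx=0.90, r_yy=0.90, r_xy=0.50))  # 0.80

# Same reliabilities, highly correlated measures: difference score degrades
print(difference_score_reliability(r_xx=0.90, r_yy=0.90, r_xy=0.85))  # ~0.33
```
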
11
Q

Nomothetic vs Idiographic

A

Nomothetic: population-level approach; assumes homogeneity across people (more accurate, more generalizable)

Idiographic: individual-level approach; focuses on patterns within a single person

12
Q

Structural Equation Modeling

A

Allows multiple dependent variables (and latent variables) to be modeled simultaneously, which helps in assessing causality; see the sketch below
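
A minimal sketch, assuming the third-party semopy package (lavaan-style model syntax; data and column names are simulated/hypothetical), of an SEM with two dependent variables estimated jointly:

```python
import numpy as np
import pandas as pd
from semopy import Model  # third-party SEM package (assumed available)

# Simulated panel data; column names x1, y1 (T1) and x2, y2 (T2) are hypothetical
rng = np.random.default_rng(5)
n = 400
x1 = rng.normal(size=n)
y1 = 0.3 * x1 + rng.normal(size=n)
x2 = 0.5 * x1 + 0.2 * y1 + rng.normal(size=n)
y2 = 0.5 * y1 + 0.3 * x1 + rng.normal(size=n)
df = pd.DataFrame({"x1": x1, "y1": y1, "x2": x2, "y2": y2})

# lavaan-style description: two dependent variables (x2 and y2) estimated jointly
desc = """
y2 ~ y1 + x1
x2 ~ x1 + y1
"""

model = Model(desc)
model.fit(df)
print(model.inspect())  # regression paths and variances for both outcomes
```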

13
Q

Cross-Sectional Design

A
  • Multiple participants at one timepoint
  • Interest: age differences
  • Confounds: cohort differences
  • Limitation: does not show change over time
14
Q

Cross-Sectional Sequence Design

A
  • Successive studies of different participants at different times
  • Interest: age differences
  • Confounds: cohort differences, time-of-measurement
15
Q

Time-Lag Design

A
  • Participants from different cohorts assessed at same AGE
  • Interest: cohort differences
  • Confounds: time-of-measurement
16
Q

Longitudinal Design

A
  • Same participants measured at multiple timepoints
  • Interest: age changes (within-person)
  • Confounds: cohort effects and time-of-measurement
17
Q

Longitudinal Sequences Design

A
  • Following multiple cohorts across time
    1. Time sequential: multiple ages assessed at multiple times
    2. Cross sequential: multiple cohorts assessed at multiple times
    3. Cohort sequential: multiple cohorts assessed at multiple ages

*Remember the triangle!

18
Q

Heterotypic, Homotypic, Phenotypic Continuity & Discontinuity

A
  • Homotypic: same process, same behavior across time
  • Heterotypic: same process, different behavior
  • Phenotypic: different process, same behavior
  • Discontinuity: different process, different behavior
19
Q

Identifying Heterotypic Continuity

A
  1. Rank-order stability: whether individuals' relative ordering on the construct is maintained across time (changes in the degree of stability); see the sketch after this list
  2. Content level on the construct (IRT: difficulty; SEM: intercept)
  3. How strongly the content reflects the construct (IRT: discrimination; SEM: factor loading)
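
A minimal sketch of point 1, estimating rank-order stability as a Spearman correlation between simulated, hypothetical age-appropriate indicators of the same construct:

```python
import numpy as np
from scipy.stats import spearmanr

# Simulated scores for the same people at two ages, measured with
# different age-appropriate indicators of the same underlying construct
rng = np.random.default_rng(4)
trait = rng.normal(size=150)
score_age8 = trait + rng.normal(scale=0.6, size=150)    # e.g., observed behavior
score_age14 = trait + rng.normal(scale=0.6, size=150)   # e.g., self-report

rho, p = spearmanr(score_age8, score_age14)
print(rho)  # high rank-order stability despite different surface behaviors
```
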
20
Q

Assessing a construct across development

A
  1. All possible content
    • Pros: comprehensive; allows you to examine change in each facet
    • Cons: inefficient; intrusions
  2. Only the common content
    • Pros: efficient; may exclude inappropriate content; same measure allows easy interpretation
    • Cons: fewer items; loss of info; gaps
  3. Only the construct-valid content (RECOMMENDED)
    • Pros: efficient; retains construct & content validity
    • Cons: time-intensive; uses different measures (so harder to compare)
21
Q

Ensuring Statistical Equivalence

A
  • AKA: same mathematical metric
  • Use developmental scaling: measures that differ in difficulty and discrimination are placed on the same scale using all content (age-common and age-specific), so that each person's score can be estimated on that common scale

Developmental Scaling Approaches:

  • SEM: allows estimation of a latent variable with different content across time
  • IRT: links measures' scales based on the difficulty and discrimination of the common content (see the linking sketch below)
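
A minimal sketch of IRT-style linking via the mean/sigma method, using only the age-common items' difficulty estimates (all values and names are hypothetical):

```python
import numpy as np

def mean_sigma_linking(b_common_new, b_common_ref):
    """Mean/sigma linking constants from the age-common items' difficulty estimates."""
    b_new = np.asarray(b_common_new, dtype=float)
    b_ref = np.asarray(b_common_ref, dtype=float)
    A = b_ref.std(ddof=1) / b_new.std(ddof=1)
    B = b_ref.mean() - A * b_new.mean()
    return A, B

# Hypothetical difficulty estimates for the common items on each form's own scale
b_common_new = np.array([-0.8, -0.2, 0.4, 1.1])
b_common_ref = np.array([-0.5, 0.1, 0.8, 1.6])

A, B = mean_sigma_linking(b_common_new, b_common_ref)

# Place an ability estimate from the new form onto the reference (developmental) scale
theta_new = 0.3
print(A, B, A * theta_new + B)
```
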
22
Q

Ensuring Theoretical Equivalence

A
  • Construct validity invariance = content reflects the construct at each age
  • Should show content validity (sample all aspects of the construct)
  • Test-retest reliability in the short term
  • Convergent and discriminant validity of measures (see the correlation sketch below)
  • Similar factor structure across time based on factor analysis
  • High internal consistency reliability
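
A minimal sketch of the reliability/validity correlations in this checklist, on simulated data with hypothetical variable names:

```python
import numpy as np

# Simulated data: the same trait measured at T1 and T2, plus a different trait at T1
rng = np.random.default_rng(3)
trait = rng.normal(size=200)
measure_t1 = trait + rng.normal(scale=0.5, size=200)
measure_t2 = trait + rng.normal(scale=0.5, size=200)
other_trait_t1 = rng.normal(size=200)

# Test-retest / convergent evidence: same-trait correlation should be high
print(np.corrcoef(measure_t1, measure_t2)[0, 1])

# Discriminant evidence: different-trait correlation should be near zero
print(np.corrcoef(measure_t1, other_trait_t1)[0, 1])
```
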
23
Q

Confounds of Change

A
  1. Practice effects
  2. Cohort effects
  3. Time of measurement
24
Q

Inferring Development

A

You would need to run all three longitudinal sequential designs and show that age-related differences are stronger than cohort and time-of-measurement differences

Establish longitudinal measurement invariance:

  1. SEM: same intercepts and factor loadings across ages
  2. IRT: same difficulty and discrimination across ages