Chapter 5 Flashcards

1
Q

Reliability of Measures

A
  • Reliability of Measures: refers to the consistency or stability of a measure (it gives the same or very similar results each time it is used)
  • Measurement error: every time you measure something, you capture the true score plus some error. The more reliable a measure is, the less measurement error is present
  • Reliability is crucial for any operational definition
2
Q

Measurement error formula

A

X = T + E
X = the measurement we collect
T = the true score (the ideal situation with no error)
E = error (the difference between the measured score and the true score)
- Unsystematic errors: motivation, mood, testing environment
- Systematic measurement error: issues with the test itself
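The X = T + E idea can be sketched with a small simulation. This is a minimal illustration, not part of the chapter: the `observe` function and all numbers are made up, and the error is modeled as random noise.

```python
import random

random.seed(0)

def observe(true_score, error_sd):
    """Return one observed score X = T + E, where E is random noise."""
    return true_score + random.gauss(0, error_sd)

T = 100  # the (unobservable) true score

# A reliable measure has little error; an unreliable one has a lot.
reliable   = [observe(T, error_sd=2)  for _ in range(5)]
unreliable = [observe(T, error_sd=20) for _ in range(5)]

print(reliable)    # scores cluster tightly around T
print(unreliable)  # scores scatter widely around T
```

With a larger error term (E), repeated measurements of the same true score (T) spread out more, which is exactly what "less reliable" means.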

3
Q

Reliability and Accuracy of Measures

A
  • A measure can be highly reliable but not accurate
  • Reliability indexes indicate the amount of error, but not accuracy
4
Q

Achieving Reliability

A
  • Train observers well: ex. in an observational study on aggression in hockey, before sending observers in you must clarify what aggression looks like
  • Word questions well: make sure participants understand what is being asked of them
  • Calibrate and place equipment well: test equipment under different conditions so you get consistent readings
  • Observe the construct multiple times: more items = more reliable
5
Q

Reliability vs. Validity

A

Reliability: consistency in measurement

Validity: whether the measure actually and accurately assesses the variable of interest

6
Q
A

Using an unreliable measure risks attenuation (weakening of results)
* Solutions:
* Use measures that have established reliability (predetermined measures)
* Use well-trained coders
* Do separate studies just on the reliability of the measure (study the studies)

7
Q

What are the three ways we test reliability?

A
  • INTERNAL CONSISTENCY
  • TEST-RETEST RELIABILITY
  • INTERRATER RELIABILITY
8
Q

Internal consistency

A
  • Each item is a repeated attempt to measure the same concept
  • Cronbach’s alpha: the consistency of the items on a multi-item measure that all measure the same variable
  • Used for questionnaires and rating scales
  • Split-half reliability: take the first half of the items on the measure and compare it to the second half
  • Measured by a reliability coefficient (compare how each item's score correlates with each of the other items)
  • Item-total correlations
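Cronbach's alpha can be computed directly from item scores. A minimal sketch using only Python's standard library; the three-item, five-respondent ratings are made up for illustration:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var_sum = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical ratings: three items that rise and fall together
# across respondents -> high internal consistency.
items = [
    [2, 4, 4, 5, 1],
    [3, 4, 5, 5, 2],
    [2, 5, 4, 4, 1],
]
print(round(cronbach_alpha(items), 2))  # -> 0.96
```

Because these items track each other closely, alpha is near 1; items that do not hang together would pull it down.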
9
Q

Correlation Coefficients

A
  • Assess the stability of a measure by using correlation coefficients
  • Look at how strongly the items of a measure relate to one another
10
Q

Test retest reliability

A

- Testing the same people under the same conditions at two points in time
- The measure is given at two time points and the scores are compared for consistency
* A high positive correlation coefficient shows that the two sets of scores are very similar
* Measured by a correlation coefficient (r)
* Alternate-forms reliability: give a different but equivalent form of the test at the second time point
* May appear higher than it is if people remember their previous answers
- You need at least 0.7 or 0.8
Issue? If you administer the same test twice, people can remember their past answers.
Solution? Administer half of the test at a time.

11
Q

Types of reliability: interrater reliability

A
  • How much do the raters agree?
  • What is the average percentage of agreement?
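Percent agreement can be computed by comparing the two raters' codes item by item. A minimal sketch with hypothetical 0/1 codes from the hockey-aggression example (1 = act coded as aggressive, 0 = not):

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of observations on which two raters gave the same code."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical codes for eight observed acts.
rater_a = [1, 0, 1, 1, 0, 1, 0, 0]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1]

print(percent_agreement(rater_a, rater_b))  # -> 0.75 (6 of 8 agree)
```

Well-trained raters with a clear definition of aggression should push this proportion close to 1.0.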
12
Q

Book summary

A
13
Q

What are the three types of validity

A

How well does the measure measure what it is supposed to measure?
- Focuses on measurement accuracy
*CONSTRUCT VALIDITY
*INTERNAL VALIDITY
*EXTERNAL VALIDITY

14
Q

Construct validity

A

How well the measures represent the variable(s)
* A reflection of the quality of a researcher’s operational definitions
* Good construct validity: a direct measure of hunger
* Not-so-good construct validity: how well participants read a menu

15
Q

Different ways to look at construct validity

A

Face validity (theory-based): The extent to which the content of the measure appears to reflect the construct being measured (give it to an expert or non-expert and ask them what they think it is for)

Content validity (theory-based): The extent to which the entire set of items represents all aspects of the topic and nothing else; the breadth of the instrument. Does the survey capture everything, or are we missing something?
- Judged by an expert

Predictive validity (criterion-based): The extent to which the measure predicts the future behaviours or outcomes it should predict.
How established:
* Collect data with your measure
* After some time, measure the criterion and compare it with your measure
* Correlate, or see how accurately the first measurement predicted the criterion

Concurrent validity: The extent to which the measure relates to a criterion behaviour that occurs at the same time as the measurement (behaviours related to the construct of interest).
How established: take the two measures at the same time and correlate them
* Collect data with your measure
* Compare it to a current criterion: correlate, or see how accurately the measure identifies the criterion

Convergent validity: see whether the measure relates to other measures of similar constructs; met if they are related (measures that should be related are related)

Discriminant validity: show that measures of constructs that should not be related are not related

16
Q

Internal validity

A

Internal validity: the degree to which a study supports the conclusion that the independent variable caused the changes in the dependent variable

Ensure that only the independent variable can be the cause of the changes in the dependent variable

17
Q

confounding variables

A
  • Other factors introduced into the study make it unclear what caused the outcome
  • They vary alongside the independent variable
  • An uncontrolled variable
18
Q

Threats to internal validity

A

Look at other research and see which variables they controlled for
- History effect: a historical event that affects all or most participants (skewing results)
- Solutions: use control groups, run shorter studies…
- The event that occurred is not what the researcher wants to explore
- Particularly an issue with multiple groups
- Undermines internal validity (acts as a confounding variable)
ex. COVID

19
Q

Maturation effect

A
  • Natural changes in participants
  • Hormones, aging, life stages, and cognitive changes that occur in participants (not everyone matures the same way)
  • Undermines internal validity
  • An alternative explanation for the difference between pretest and posttest
20
Q

Testing

A
  • Taking a pretest influences people’s responses on a posttest
  • Practice effect
  • Solution? Use separate tests, or split the test in two…
21
Q

Instrument decay

A
  • The measurement changes with repeated use and becomes less relevant
  • The test may no longer measure what it was intended to measure
22
Q

regression to the mean

A
  • Makes natural variation in the data look like a large change
  • Happens with unusually large or small measurements
  • The tendency for extreme scores to be less extreme the next time they are measured
  • Undermines internal validity
  • If the initial score is an outlier, then the next time participants are tested their scores will likely be closer to the mean
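Regression to the mean falls straight out of X = T + E. A sketch (all numbers hypothetical) that simulates two testings of the same people and then looks only at the extreme scorers from time 1:

```python
import random

random.seed(1)

# Each observed score = true score + noise (X = T + E).
true_scores = [random.gauss(100, 10) for _ in range(1000)]
test1 = [t + random.gauss(0, 15) for t in true_scores]
test2 = [t + random.gauss(0, 15) for t in true_scores]

# Select the people with extremely high scores at time 1...
extreme = [i for i, x in enumerate(test1) if x > 130]

mean_t1 = sum(test1[i] for i in extreme) / len(extreme)
mean_t2 = sum(test2[i] for i in extreme) / len(extreme)
print(round(mean_t1, 1), round(mean_t2, 1))
# ...their time-2 average falls back toward the overall mean of 100,
# even though their true scores never changed.
```

The extreme group was partly selected for lucky error at time 1; that luck does not repeat at time 2, so the group's average drifts toward the mean.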
23
Q

Mortality (attrition)

A
  • Participants leave the study
  • The people who dropped out might share characteristics
  • Can create limitations for the study (biased outcomes)
24
Q

Cohort effect

A
  • Groups differ by age or generation
  • An issue in cross-sectional testing
  • Address cohort effects by doing longitudinal tests or “age matching”