W9 Reliability Flashcards

1
Q

What is the definition of reliability?

A
  • It is the consistency of measurements or the absence of measurement error (Atkinson & Nevill, 1998)
2
Q

What are the two types of Error Measurement?

A
  1. Systematic Error
    - Consistent error that biases scores in one direction
    - Won’t impact reliability
  2. Random Error
    - Unpredictable error that scatters scores around the true score
    - Will impact reliability
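The distinction can be illustrated with a small simulation (all values here are made up: the true score, the +2 bias, and the noise spread are purely illustrative):

```python
import random

random.seed(1)
true_score = 100.0

# Systematic error: a constant bias, e.g. a scale that always reads 2 units
# high. Repeated readings are identical, so consistency (reliability) is
# untouched even though every reading is wrong.
systematic = [true_score + 2.0 for _ in range(5)]

# Random error: unpredictable noise scattered around the true score.
# Readings differ from trial to trial, which is what lowers reliability.
noisy = [true_score + random.gauss(0.0, 2.0) for _ in range(5)]

print(systematic)                   # five identical (biased) readings
print(max(noisy) - min(noisy) > 0)  # noisy readings spread out
```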
3
Q

Can you give three sources of error?

A
  1. Participant
  2. Researcher
  3. Instrument
4
Q

Can you give some ways to reduce error?

A
  • Test multiple times / use repeated measures
  • Compare results with those of other researchers
  • Use a trained researcher to ensure correct use of the instrument
  • Choose an appropriate instrument
  • Follow protocols
  • Use well-designed protocols
5
Q

What are the two types of reliability and what do they mean?

A
  1. Relative reliability
    - The degree to which individuals maintain their position (rank order) within the group over repeated measures
  2. Absolute reliability
    - The degree to which scores vary over repeated measures
6
Q
  • What is test-retest reliability?
  • What test could you use to assess test-retest reliability?

A
  • It is the reliability/stability across measurement occasions
  • Correlate the scores obtained from a group of participants across two or more occasions
  • The test you could use to assess the relationship between two or more sets of scores is Pearson’s correlation coefficient
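As a sketch, Pearson’s r can be computed directly from two sets of scores; the sprint times below are invented for illustration:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 100 m sprint times (s) for five participants on two occasions
day1 = [12.1, 11.4, 13.0, 12.5, 11.9]
day2 = [12.0, 11.6, 12.9, 12.6, 11.8]

# An r close to +1 indicates high test-retest reliability
print(round(pearson_r(day1, day2), 3))
```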
7
Q

What is inter-rater reliability?

A
  • Inter-rater reliability is the reliability/consistency between researchers
  • Correlate the scores obtained from the same group of participants by different researchers
8
Q
  • What is internal consistency?
  • How do you measure internal consistency?
  • What do the values range between on this measurement tool?
A
  • It is the consistency of results across the items within a measurement instrument
  • It is measured using Cronbach’s alpha reliability coefficient
  • Values range between 0 and +1.0
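A minimal sketch of the alpha calculation, using the standard formula α = k/(k−1) × (1 − Σ item variances / variance of totals); the questionnaire scores are invented for illustration:

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (same participants)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    sum_item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Hypothetical 3-item questionnaire answered by four participants
items = [
    [4, 3, 5, 2],  # item 1
    [4, 2, 5, 3],  # item 2
    [5, 3, 4, 2],  # item 3
]

# Values nearer +1 indicate greater internal consistency
print(round(cronbach_alpha(items), 2))
```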
9
Q

What are the four ways to measure absolute reliability?

A
  1. Technical error of measurement
  2. Standard error of measurement
  3. Coefficient of variation
  4. Limits of agreement
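Three of these can be sketched with a few lines of arithmetic. The test-retest times below are invented, and the standard error of measurement is taken in its typical-error form (SD of the differences divided by √2), which is one common convention:

```python
from math import sqrt

# Hypothetical test-retest times (s) for five participants
day1 = [12.1, 11.4, 13.0, 12.5, 11.9]
day2 = [12.0, 11.6, 12.9, 12.6, 11.8]

diffs = [b - a for a, b in zip(day1, day2)]
n = len(diffs)
mean_diff = sum(diffs) / n
sd_diff = sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))

# Standard error of measurement ("typical error"): SD of differences / sqrt(2)
sem = sd_diff / sqrt(2)

# Coefficient of variation: typical error as a percentage of the grand mean
grand_mean = (sum(day1) + sum(day2)) / (2 * n)
cv = 100 * sem / grand_mean

# 95% limits of agreement (Bland & Altman): mean difference +/- 1.96 SD
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)

print(round(sem, 3), round(cv, 2), tuple(round(x, 3) for x in loa))
```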
10
Q

What is the definition of validity?

A

It describes the degree to which a test or instrument measures what it is meant to measure

11
Q

What are the four types of measurement validity?

A
  1. Face validity
  2. Content validity
  3. Construct validity (Convergent & Discriminant)
  4. Criterion validity (Concurrent & Predictive)
12
Q

Validity can either be…

A

…internal or external.

13
Q

Can you describe what face validity is?

A
  • On the face of it, the method of measurement obviously captures the factor being measured.
    E.g. a 100m sprint timed by timing gates: on the face of it, timing gates are a valid measure of speed
14
Q

Can you describe content validity?

A
  • The instrument adequately covers the subject being measured
  • The instrument covers all aspects relevant to the population of interest
15
Q
  • Can you describe what construct validity is?
  • What are the two ways of assessing construct validity?

A
  • Construct validity assesses the extent to which an instrument accurately measures a hypothetical construct
    1. Convergent
  • Scores on the instrument are similar to those on another measure of the same construct
    2. Discriminant
  • Scores on the instrument differ from those on an instrument measuring a different construct
16
Q
  • Can you describe what criterion validity is?
  • What are the two types of criterion validity?
  • What does this tell us?
A
  • Scores on an instrument are related to those on a previously validated (criterion) measure
    1. Concurrent
  • Instrument and criterion scores are collected at roughly the same time
    2. Predictive
  • The criterion measure is completed at a later date
  • This tells us whether the instrument can predict gold-standard measures
17
Q

Relationship between reliability & validity:

- Can an instrument be reliable but not valid?

A
  • Yes, it could be reliably (consistently) measuring the wrong thing
18
Q

Relationship between reliability & validity:

- Can an instrument be valid but not reliable?

A
  • No, if it is measuring what it is supposed to measure it should consistently give the same score.
  • So reliability is a necessary but not sufficient condition for validity.
19
Q

What does internal validity mean?

A
  • Refers to the ability to attribute changes in the dependent variable to the manipulation of the independent variable
20
Q

What does external validity mean?

A
  • Refers to the ability to generalise the results of a study to other settings and other individuals
21
Q

Can you name a few threats to internal validity and a solution for each?

A
  • Maturation (age/growth) = include a control group
  • Selection bias (how participants are allocated to groups) = randomise allocation
  • Expecting specific results = blinding/double blinding
  • Schedule of testing = counterbalance treatments or include familiarisation
  • Measurement and equipment = frequent calibration
  • Mortality (drop-out/withdrawal) = account for this within the sample size
22
Q

Can you name a few threats to external validity?

A
  1. Reactive or interactive effects of testing
    - A pre-test makes participants more aware of or sensitive to the treatment
  2. Interaction of selection bias and the treatment
    - The treatment is effective only in the group selected
  3. Reactive effects of experimental arrangements
  4. Multiple-treatment interference