W9 Reliability Flashcards
What is the definition of reliability?
- It is the consistency of measurements or the absence of measurement error (Atkinson & Nevill, 1998)
What are the two types of measurement error?
- Systematic Error
- Consistent error that biases the true score
- Won’t impact reliability
- Random Error
- Unpredictable error that biases the true score
- Will impact reliability
Can you give three sources of error?
- Participant
- Researcher
- Instrument
Can you give a couple of ways to reduce error?
- Test multiple times or use repeated measures
- Compare results with those of other researchers
- Have a trained researcher to ensure correct use of instrument
- Choice of instrument
- Following protocols
- Having good protocols
What are the two types of reliability and what do they mean?
- Relative reliability
- The degree to which data maintains its position within the data set over repeated measures
- Absolute reliability
- The degree to which data varies over repeated measures
What is test-retest reliability?
What statistic could you use to assess test-retest reliability?
- It is the reliability/stability across measurement occasions
- Correlate the scores obtained from a group of participants from two or more occasions
- The test you could use to compare the relationship of two or more scores is the Pearson’s Correlation Coefficient
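The correlation this card describes can be sketched in plain Python; the participant scores below are invented for illustration.

```python
# Minimal sketch: test-retest reliability via Pearson's r, using
# made-up scores for the same six participants on two occasions.
from statistics import mean, stdev

occasion_1 = [12.1, 13.4, 11.8, 14.0, 12.9, 13.2]  # hypothetical trial 1
occasion_2 = [12.3, 13.1, 12.0, 14.2, 12.7, 13.5]  # hypothetical trial 2

def pearson_r(x, y):
    """Pearson's correlation coefficient for paired scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson_r(occasion_1, occasion_2)
print(round(r, 3))  # values near +1.0 indicate high relative reliability
```

The same calculation applies to inter-rater reliability: correlate two researchers' scores instead of two occasions.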
What is inter-rater reliability?
- Inter-rater reliability is the reliability/consistency between researchers
- Correlate the scores from the same group between different researchers
What is internal consistency?
How do you measure internal consistency?
What do the values range between on this measurement tool?
- It is the reliability across different areas of measurement instruments
- The way it is measured is using Cronbach’s alpha reliability coefficient
- The measures range between 0 & +1.0
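Cronbach's alpha compares the summed variance of the individual items with the variance of participants' total scores. A sketch with invented questionnaire data (one row per participant, one column per item):

```python
# Minimal sketch: Cronbach's alpha for a 4-item questionnaire
# (made-up Likert-style scores, stdlib only).
from statistics import pvariance

scores = [
    [4, 3, 4, 5],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [2, 2, 3, 3],
    [4, 4, 4, 4],
]

def cronbach_alpha(rows):
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # one tuple of scores per item
    item_vars = sum(pvariance(i) for i in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # closer to +1.0 = items measure the same construct
```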
What are the four ways to measure absolute reliability?
- Technical error of measurement
- Standard error of measurement
- Coefficient of variation
- Limits of agreement
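Three of these statistics can be sketched from one pair of repeated trials. The data are invented, and the SEM is taken here as the standard deviation of the difference scores divided by the square root of 2 (the "typical error" shortcut), which is one common convention rather than the only one:

```python
# Minimal sketch of absolute-reliability statistics from two repeated
# trials (made-up data): SEM, CV, and Bland-Altman limits of agreement.
from statistics import mean, stdev

trial_1 = [50.2, 48.9, 51.5, 49.8, 50.6]  # hypothetical trial 1
trial_2 = [50.8, 49.1, 51.0, 50.4, 50.9]  # hypothetical trial 2

diffs = [b - a for a, b in zip(trial_1, trial_2)]
sd_diff = stdev(diffs)

sem = sd_diff / 2 ** 0.5                   # typical error of measurement
grand_mean = mean(trial_1 + trial_2)
cv_percent = 100 * sem / grand_mean        # error as a % of the mean
loa_lower = mean(diffs) - 1.96 * sd_diff   # 95% limits of agreement
loa_upper = mean(diffs) + 1.96 * sd_diff

print(round(sem, 2), round(cv_percent, 2))
print(round(loa_lower, 2), round(loa_upper, 2))
```

Smaller SEM, CV, and narrower limits of agreement all indicate less variation over repeated measures, i.e. better absolute reliability.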
What is the definition of validity?
It describes the degree to which a test or instrument measures what it is meant to measure
What are the four types of measurement validity?
- Face validity
- Content validity
- Construct validity (Convergent & Discriminant)
- Criterion validity (Concurrent & Predictive)
Validity can either be…
…internal or external.
Can you describe what face validity is?
- The method of measuring data obviously involves collecting the factor being measured.
E.g. a 100 m sprint timed by timing gates: on the face of it, timing gates are a valid measure of sprint speed
Can you describe content validity?
- The instrument adequately covers the subject of measurement
- Instrument covers all aspects of relevance to the population
Can you describe what construct validity is?
What are the two ways of assessing construct validity?
- Construct validity assesses to what extent an instrument accurately measures the hypothetical construct
1. Convergent - Scores on an instrument that are similar to those on another measure
2. Discriminant - Scores on an instrument are different from those on an instrument measuring a different construct
Can you describe what criterion validity is?
What are the two types of criterion validity?
What does this tell us?
- Scores on an instrument are related to those on a previously validated measure
1. Concurrent - Scores collected at roughly the same time
2. Predictive - Criterion measure completed at a later date
- It tells us whether we can predict gold standard measures
Relationship between reliability & validity:
- Can an instrument be reliable but not valid?
- Yes, it could be reliably (consistently) measuring the wrong thing
Relationship between reliability & validity:
- Can an instrument be valid but not reliable?
- No, if it is measuring what it is supposed to measure it should consistently give the same score.
- So reliability is a necessary but not sufficient condition for validity.
What does internal validity mean?
- Refers to the ability to attribute changes in the dependent variable to the manipulation of the independent variable
What does external validity mean?
- Refers to the ability to generalise the results of a study to other settings and other individuals
Can you name a few threats to internal validity and provide a solution?
- Maturation (Age/growth) = include control group
- Selection bias (participants allocated groups) = randomly allocated
- Expecting specific results = blinding/double blind
- Schedule of testing = counterbalance treatments or familiarisation
- Measurement and equipment = frequent calibration
- Mortality (drop out/withdrawal) = account for this within sample size
Can you name a few threats to external validity?
- Reactive or interactive effects of testing
- Pre-test makes a participant more aware of or sensitive to the treatment
- Interaction of selection bias and the treatment
- Treatment is effective only in the group selected
- Reactive effects of experimental arrangements
- Multiple-treatment interference