W9 Reliability Flashcards
What is the definition of reliability?
- It is the consistency of measurements or the absence of measurement error (Atkinson & Nevill, 1998)
What are the two types of measurement error?
- Systematic Error
- Consistent error that biases the true score
- Won’t impact reliability
- Random Error
- Unpredictable error that scatters measurements around the true score
- Will impact reliability
Can you give three sources of error?
- Participant
- Researcher
- Instrument
Can you give a couple of ways to reduce error?
- Test multiple times / repeat measures
- Compare your results with those of two or more other researchers
- Use a trained researcher to ensure correct use of the instrument
- Choice of instrument
- Following protocols
- Having good protocols
What are the two types of reliability and what do they mean?
- Relative reliability
- The degree to which data maintains its position (rank) within the data set over repeated measures
- Absolute reliability
- The degree to which data varies over repeated measures
What is test-retest reliability?
What test could you use to assess test-retest reliability?
- It is the reliability/stability of scores across measurement occasions
- Correlate the scores obtained from a group of participants on two or more occasions
- The test you could use to compare the relationship between two or more sets of scores is Pearson’s correlation coefficient
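As a sketch of the test-retest idea, Pearson's r between two measurement occasions can be computed by hand; the sprint times below are made-up illustration data, not from the source:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 100 m sprint times (s) from the same five participants
# measured on two separate occasions
occasion1 = [12.1, 11.8, 13.0, 12.5, 11.9]
occasion2 = [12.0, 11.9, 13.1, 12.4, 12.0]
print(round(pearson_r(occasion1, occasion2), 3))
```

An r close to +1.0 indicates the scores kept their relative positions across occasions, i.e. good relative test-retest reliability.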
What is inter-rater reliability?
- Inter-rater reliability is the reliability/consistency between researchers
- Correlate the scores from the same group between different researchers
What is internal consistency?
How do you measure internal consistency?
What do the values range between on this measurement tool?
- It is the reliability across the different items/parts of a measurement instrument
- It is measured using Cronbach’s alpha reliability coefficient
- The values range between 0 and +1.0
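A minimal sketch of Cronbach's alpha, using its standard form (alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)); the questionnaire scores below are invented for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha.

    items: one inner list per questionnaire item, each covering the
    same participants in the same order.
    """
    k = len(items)            # number of items
    n = len(items[0])         # number of participants

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = sum(variance(item) for item in items)
    # Total score per participant, summed across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 3-item questionnaire answered by 4 participants
items = [
    [3, 4, 3, 5],
    [3, 5, 3, 4],
    [2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 2))
```

Values nearer +1.0 suggest the items are measuring the same underlying construct consistently.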
What are the four ways to measure absolute reliability?
- Technical error of measurement
- Standard error of measurement
- Coefficient of variation
- Limits of agreement
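Two of these can be sketched numerically. The coefficient of variation expresses the standard deviation as a percentage of the mean, and a common form of the standard error of measurement is SD * sqrt(1 - reliability coefficient); the scores, SD, and reliability value below are made-up examples:

```python
import math

def coefficient_of_variation(scores):
    """CV (%) = (sample standard deviation / mean) * 100."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    return sd / mean * 100

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - reliability coefficient)."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical repeated sprint times (s) for one participant
scores = [12.1, 11.8, 13.0, 12.5, 11.9]
print(round(coefficient_of_variation(scores), 2))            # CV in %
print(round(standard_error_of_measurement(0.5, 0.9), 3))     # SEM in s
```

Smaller values on both statistics indicate less variation over repeated measures, i.e. better absolute reliability.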
What is the definition of validity?
- It describes the degree to which a test or instrument measures what it is meant to measure
What are the four types of measurement validity?
- Face validity
- Content validity
- Construct validity (Convergent & Discriminant)
- Criterion validity (Concurrent & Predictive)
Validity can either be…
…internal or external.
Can you describe what Face validity is?
- On the face of it, the measurement method obviously captures the factor being measured
- E.g. 100m sprint - time to complete measured by timing gates. So on the face of it, timing gates are a valid measure of speed
Can you describe content validity?
- The instrument adequately covers the subject of measurement
- Instrument covers all aspects of relevance from population
Can you describe what construct validity is?
What are the two ways of assessing construct validity?
- Construct validity assesses the extent to which an instrument accurately measures a hypothetical construct
1. Convergent - scores on an instrument are similar to those on another measure of the same construct
2. Discriminant - scores on an instrument differ from those on an instrument measuring a different construct