Ch. 5 - Measurement Concepts Flashcards
What are the primary criteria that researchers use to evaluate the quality of their own and others’ research?
Reliability and validity
Reliability
- Refers to the CONSISTENCY or stability of a measure of behaviour
- Increasing the reliability of a measure reduces uncertainty or error associated with that measure
True score
- Is the person’s real score on the variable
Measurement error
- The degree to which a measurement deviates from the true score value
Pearson product-moment correlation coefficient
- Symbolized as “r”
- The Pearson correlation coefficient can range from -1.00 to +1.00
- A correlation of 0 tells us that the two variables are not related at all
- The closer a correlation is to either +1 or -1, the stronger is the relationship
- The positive and negative signs provide information about the direction of the relationship
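A minimal sketch of how r can be computed from its definition (the covariance of the two variables divided by the product of their standard deviations); the data below are hypothetical:

```python
import numpy as np

# Hypothetical scores on two variables for the same five participants
x = np.array([2, 4, 5, 7, 9], dtype=float)
y = np.array([1, 3, 6, 6, 10], dtype=float)

# Pearson r: covariance of x and y divided by the product of their standard deviations
r = np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

print(round(r, 2))                         # always falls between -1.00 and +1.00
print(round(np.corrcoef(x, y)[0, 1], 2))   # same value from NumPy's built-in routine
```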
Test-retest reliability
- Is assessed by giving many people the same measure twice
- A reliability coefficient determined by the correlation between scores on a measure given at one time with scores on the same measure given at a later time
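A small illustration of a test-retest reliability coefficient, assuming hypothetical scores from the same people at two testing sessions; SciPy's pearsonr is one way to get the correlation:

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same five people, measured at two points in time
time1 = [12, 15, 20, 22, 30]
time2 = [13, 14, 21, 20, 29]

r, p = pearsonr(time1, time2)   # the correlation between the two administrations
print(f"test-retest reliability = {r:.2f}")  # values near +1 indicate a stable measure
```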
Alternate forms reliability
- Is sometimes used to avoid the artificially high correlation coefficients that can occur when people taking the test a second time remember it from the first time
- Alternate forms reliability involves administering two different forms of the same test to the same people at two points in time
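The coefficient itself is computed the same way as for test-retest reliability, only with scores from the two forms; a hypothetical sketch:

```python
import numpy as np

# Hypothetical scores on Form A (first session) and Form B (second session)
form_a = np.array([55, 60, 62, 70, 75], dtype=float)
form_b = np.array([54, 63, 60, 72, 74], dtype=float)

alternate_forms_r = np.corrcoef(form_a, form_b)[0, 1]
print(round(alternate_forms_r, 2))
```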
Internal consistency reliability
- Assesses how well a certain set of items relate to each other
- One common indicator of internal consistency is a value called Cronbach’s alpha
Cronbach’s alpha
- An indicator of internal consistency reliability assessed by examining the average correlation of each item (question) in a measure with every other question
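A rough sketch of the standardized form of Cronbach’s alpha, which follows the definition above by starting from the average correlation of each item with every other item (the data and function name are hypothetical):

```python
import numpy as np

def standardized_alpha(item_scores):
    """Cronbach's alpha (standardized form) from an (n_people, n_items) score array."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                               # number of items
    corr = np.corrcoef(scores, rowvar=False)          # k x k inter-item correlation matrix
    r_bar = corr[~np.eye(k, dtype=bool)].mean()       # average inter-item correlation
    return (k * r_bar) / (1 + (k - 1) * r_bar)

# Hypothetical responses of five people to a three-item scale
items = [[3, 4, 3],
         [2, 2, 3],
         [5, 5, 4],
         [4, 4, 5],
         [1, 2, 2]]
print(round(standardized_alpha(items), 2))
```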
Interrater reliability
- Is the extent to which raters agree in their observations
- A commonly used indicator of interrater reliability is called Cohen’s kappa
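A minimal sketch of Cohen’s kappa for two raters assigning the same categorical codes; it expresses observed agreement corrected for the agreement expected by chance (the ratings below are hypothetical):

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    p_observed = np.mean(r1 == r2)                    # proportion of exact agreements
    # Chance agreement from each rater's marginal proportions per category
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c)
                   for c in np.union1d(r1, r2))
    return (p_observed - p_chance) / (1 - p_chance)

# Two raters coding the same ten observations as "aggressive" (1) or "not" (0)
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 2))
```

If scikit-learn is available, its cohen_kappa_score function gives the same statistic and can serve as a check.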
Construct validity
- The degree to which a measurement device accurately measures the theoretical construct it is designed to measure
Indicators of construct validity?
- Face validity
- Content validity
- Predictive validity
- Concurrent validity
- Convergent validity
- Discriminant validity
Face validity
- The content of the measure appears to reflect the construct being measured
- Face validity is not sufficient to conclude that a measure has construct validity
Content validity
- Is based on comparing the content of the measure with the theoretical definition of the construct
- The content of the measure captures all the necessary aspects of the construct and nothing more
Predictive validity
- Construct validity is assessed by examining the ability of the measure to predict a future behaviour
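A small, hypothetical illustration: the predictive validity coefficient is simply the correlation between scores on the measure and the future behaviour it is supposed to predict.

```python
from scipy.stats import pearsonr

# Hypothetical aptitude-test scores and job-performance ratings collected a year later
test_scores   = [45, 52, 60, 61, 70, 75]
later_ratings = [3.0, 3.4, 3.1, 4.0, 4.2, 4.5]

validity_coefficient, _ = pearsonr(test_scores, later_ratings)
print(f"predictive validity = {validity_coefficient:.2f}")
```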