Exam 2 Flashcards
Defining and Measuring Variables, Validity, Reliability, The Descriptive Research Strategy, and Chi-Square.
Validity
The degree to which an instrument measures what it claims to measure.
Face Validity
The measure “looks like it makes sense” on the surface.
Concurrent Validity
A new measure correlates with a previous measure of the same construct.
Construct Validity
The extent to which a measuring instrument accurately measures the theoretical construct or trait that it is designed to measure.
Criterion-related Validity (Predictive)
The extent to which a measuring instrument accurately predicts current or future performance.
- Addresses the question, “Does my study actually represent what is happening in the population?”
- Ex: Does aggressiveness correlate with the number of times a child hits his doll?
Internal Validity
The extent to which a set of research findings provides information about causality.
-Laboratory experiments
External Validity (Generalizability)
The extent to which a set of research findings provides an accurate description of what typically happens in the real world.
-Passive, observational studies
Reliability
Consistency of a measure.
-Reliability = True Score / (True Score + Error Score)
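A minimal numeric sketch of this ratio; the variance values below are hypothetical, chosen only for illustration (in classical test theory the ratio is usually stated in terms of variances):

```python
# Minimal sketch of the classical-test-theory reliability ratio.
# The variance values below are hypothetical, for illustration only.
true_score_variance = 8.0   # variance attributable to real differences
error_variance = 2.0        # variance attributable to measurement error

reliability = true_score_variance / (true_score_variance + error_variance)
print(reliability)  # 0.8 -> 80% of observed variance reflects true scores
```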
Measurement Error
Error in measurement due to transient states (mood, level of fatigue, etc.), stable characteristics (attitudes), context/environment, characteristics of the measure (ambiguous questions), or coding errors.
Internal Consistency/Interitem Reliability
The degree to which all of the specific items or observations in a multiple-item measure behave the same way (equivalency of items).
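One common index of interitem reliability is Cronbach's alpha; the sketch below computes it from a small, hypothetical set of item scores using NumPy:

```python
import numpy as np

# Hypothetical data: 5 respondents x 3 items (rows = people, columns = items).
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
], dtype=float)

k = scores.shape[1]                         # number of items
item_vars = scores.var(axis=0, ddof=1)      # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale

# Cronbach's alpha: higher when items vary together (behave the same way).
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))
```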
Interrater Reliability
The degree to which different judges independently agree upon an observation or judgment (agreement between raters).
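A minimal sketch of one simple interrater index, percent agreement, using hypothetical ratings from two judges (Cohen's kappa, which corrects for chance agreement, is another common choice but is omitted here):

```python
# Hypothetical ratings of the same 6 observations by two independent judges.
rater_a = [1, 0, 1, 1, 0, 1]
rater_b = [1, 0, 0, 1, 0, 1]

# Percent agreement: the proportion of observations the judges coded the same.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(agreement)  # 5 of 6 matches -> ~0.83
```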
Average Inter-Item Correlations
1) Compute correlation between each pair of items
2) Compute average of all correlations
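A sketch of these two steps using NumPy's correlation matrix on hypothetical item scores (rows are respondents, columns are items):

```python
import numpy as np

# Hypothetical data: rows = respondents, columns = items on the scale.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
], dtype=float)

# Step 1: correlation between each pair of items
# (rowvar=False makes np.corrcoef treat columns as variables).
corr = np.corrcoef(scores, rowvar=False)

# Step 2: average the correlations above the diagonal (each pair counted once).
pair_corrs = corr[np.triu_indices_from(corr, k=1)]
print(round(pair_corrs.mean(), 2))
```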
Item-Total Correlation
-Each item on a scale should correlate with the sum of the other items (>.30)
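A sketch of this check (each item correlated with the sum of the remaining items), with hypothetical scores and the >.30 rule of thumb applied:

```python
import numpy as np

# Hypothetical data: rows = respondents, columns = items on the scale.
scores = np.array([
    [4, 5, 1],
    [2, 2, 4],
    [5, 4, 2],
    [3, 3, 5],
    [4, 4, 3],
], dtype=float)

for i in range(scores.shape[1]):
    item = scores[:, i]
    rest = scores.sum(axis=1) - item       # sum of the *other* items
    r = np.corrcoef(item, rest)[0, 1]      # item-total correlation
    flag = "" if r > 0.30 else "  <- review this item"
    print(f"item {i + 1}: r = {r:.2f}{flag}")
```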
Test/Retest
The degree to which an item or a scale correlates positively with itself over time (>.70).
-Ex: SAT scores
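A sketch of the test/retest idea: the same measure is administered twice and the two sets of scores (hypothetical here) are correlated:

```python
import numpy as np

# Hypothetical scores for the same 5 people at two testing occasions.
time_1 = np.array([520, 610, 450, 700, 580], dtype=float)
time_2 = np.array([540, 600, 470, 690, 560], dtype=float)

r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest r = {r:.2f}")  # values above ~.70 suggest stability
```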
Parallel Forms
Using alternate forms of the testing instrument and correlating performance on the two different forms.
- Recommend a two-week gap between the two administrations.
- Make the two forms as similar as possible.
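A sketch of the parallel-forms idea with hypothetical scores on two forms of the same test, correlated with NumPy:

```python
import numpy as np

# Hypothetical scores for the same 5 people on two parallel forms,
# administered about two weeks apart.
form_a = np.array([28, 35, 22, 40, 31], dtype=float)
form_b = np.array([30, 33, 24, 38, 32], dtype=float)

r = np.corrcoef(form_a, form_b)[0, 1]
print(f"parallel-forms r = {r:.2f}")  # a high r suggests the forms are equivalent
```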