Midterm 2 Flashcards
Interrater Reliability
Is there consistency from rater to rater?
- Cohen's kappa
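The deck names Cohen's kappa as the statistic here; a minimal plain-Python sketch of the computation (the function name and the example ratings are illustrative, not from the course):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters on categorical ratings."""
    n = len(rater1)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement: chance that two independent raters with these
    # marginal category frequencies would happen to pick the same label.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters coding four interviews; they disagree only on the last one.
print(cohens_kappa(["yes", "yes", "no", "no"],
                   ["yes", "yes", "no", "yes"]))  # 0.5
```

Note that raw agreement here is 0.75, but kappa is only 0.5 because much of that agreement could occur by chance.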
Alternative (parallel form) reliability
Is there consistency between forms of the test?
- correlation coefficient
Inter-item (internal consistency) reliability
Consistency between individual items and the total score?
- Cronbach's alpha
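Cronbach's alpha can be computed from the item variances and the variance of the total score; a short stdlib-Python sketch (names and the toy data are my own):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per scale item."""
    k = len(item_scores)
    # Sum of the variances of each individual item.
    item_var_sum = sum(pvariance(scores) for scores in item_scores)
    # Variance of each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Three respondents answering two items that move together perfectly.
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Items that covary strongly push alpha toward 1; unrelated items push it toward 0.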
Test-retest reliability
Consistency between scores on tests given on two separate occasions?
- correlation coefficient
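Both parallel-form and test-retest reliability use a correlation coefficient; a plain-Python sketch of Pearson's r between two sets of scores (function name and scores are illustrative):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two score lists (e.g., test and retest)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sd_x = sum((a - mx) ** 2 for a in x) ** 0.5
    sd_y = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Scores on two occasions that rank every person identically.
print(pearson_r([10, 20, 30], [12, 22, 32]))  # approximately 1.0
```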
Representative reliability
Is the measure consistent across subpopulations or groups of people?
- no statistical test
Face validity
Do the items on the scale appear to measure what you say they do?
A panel of experts establishes it
Content validity
Does the measure cover the full content of the construct?
Criterion validity
How does the measure relate to an already known standard?
- Concurrent: correlate with existing measure of the construct
- Predictive: correlate with a future outcome or behavior the construct should predict
Construct validity
How well does the operational definition assess the underlying theoretical construct?
- Discriminative: does the measure differentiate between groups we’d expect to score differently?
- Convergent: do multiple measures of the same construct hang together or operate in consistent ways?
Probability Sample
Random selection; results are generalizable
Non-Probability Sample
Not random, not generalizable
simple random sample
every element has an equal chance of selection, e.g., chosen by a random number generator
systematic sample
every Kth element, random starting point
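A systematic sample, as the card describes it, can be sketched in a few lines of Python (the household frame is a made-up example):

```python
import random

def systematic_sample(population, k):
    """Take every kth element, starting at a random offset within the first k."""
    start = random.randrange(k)
    return population[start::k]

households = list(range(100))  # a hypothetical sampling frame of 100 household IDs
sample = systematic_sample(households, 10)
print(len(sample))  # 10 households, evenly spaced through the frame
```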
stratified sample
population is divided into subgroups (strata), then elements are randomly selected from each group
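The stratified procedure on this card (divide into subgroups, sample from each) can be sketched as follows; the class-year example is hypothetical:

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, n_per_stratum):
    """Group the population into strata, then sample randomly within each one."""
    strata = defaultdict(list)
    for member in population:
        strata[stratum_of(member)].append(member)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, n_per_stratum))
    return sample

# Hypothetical: sample 2 students from each class year.
people = [("Ana", "senior"), ("Bo", "junior"), ("Cy", "senior"),
          ("Di", "junior"), ("Ed", "senior"), ("Fay", "junior")]
picked = stratified_sample(people, lambda p: p[1], 2)
print(len(picked))  # 4: two from each stratum
```

Unlike a simple random sample, this guarantees every subgroup is represented.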
cluster sample
researcher obtains a list of clusters rather than individuals; clusters are randomly sampled
convenience sample
whoever is willing to participate
purposive sampling
sample is selected based on predetermined criteria
snowball sample
participants are asked to refer other potential participants