Module 6 Flashcards
Types of reliability
Test-retest reliability
Alternate-forms reliability
Split-half reliability
Interrater reliability
Correlation coefficient values
±.70–1.00 Strong
±.30–.69 Moderate
±.00–.29 None to weak
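To see how these strength bands apply, here is a minimal sketch of computing a Pearson correlation coefficient in pure Python; the two score lists are hypothetical test-retest style data, not from the cards.

```python
import math

def pearson_r(x, y):
    """Degree of linear relationship between two sets of scores (-1.00 to +1.00)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same people on two occasions
time1 = [10, 12, 14, 15, 18, 20]
time2 = [11, 13, 13, 16, 17, 21]
r = pearson_r(time1, time2)
# By the rule of thumb above, |r| >= .70 counts as a strong relationship
```

A negative r of the same magnitude would fall in the same strength band; the sign only indicates the direction of the relationship.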
Kinds of validity
Content validity
Face validity
Criterion validity
Construct validity
Alternate-forms Reliability
A reliability coefficient determined by assessing the degree of relationship between scores on two equivalent tests.
Construct Validity
The degree to which a measuring instrument accurately measures the theoretical construct or trait that it is designed to measure.
Content Validity
The extent to which a measuring instrument covers a representative sample of the domain of behaviors to be measured.
Correlation Coefficient
A measure of the degree of relationship between two sets of scores. It can vary between −1.00 and +1.00.
Criterion Validity
The extent to which a measuring instrument accurately predicts behavior or ability in a given area.
Face Validity
The extent to which a measuring instrument appears, on the surface, to be valid.
Interrater Reliability
A reliability coefficient that assesses the agreement of observations made by two or more raters or judges.
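One simple way to quantify the agreement the card describes is percent agreement between two raters' categorical codes; the codes below are hypothetical, and in practice the coefficient could also be a correlation or Cohen's kappa.

```python
# Hypothetical categorical codes assigned by two raters to six observations
rater1 = ["A", "B", "A", "C", "B", "A"]
rater2 = ["A", "B", "A", "B", "B", "A"]

# Proportion of observations on which the raters assigned the same code
agreement = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)
# 5 of the 6 codes match
```

Percent agreement is the most basic index; chance-corrected measures such as Cohen's kappa are often preferred because two raters will sometimes agree by accident.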
Negative Correlation
An inverse relationship between two variables in which an increase in one variable is related to a decrease in the other, and vice versa.
Positive Correlation
A relationship between two variables in which the variables move in the same direction: an increase in one is related to an increase in the other, and a decrease in one is related to a decrease in the other.
Reliability
An indication of the consistency of a measuring instrument.
Split-half Reliability
A reliability coefficient determined by correlating scores on one half of a measure with scores on the other half of the measure.
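A sketch of the split-half procedure on hypothetical item responses: score the odd-numbered and even-numbered items separately, correlate the two half-scores, and then apply the Spearman-Brown correction (a conventional extra step, not stated on the card, that estimates the reliability of the full-length test).

```python
import math

def pearson_r(x, y):
    """Correlation between two sets of scores (-1.00 to +1.00)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical responses: one row per person, one column per item
responses = [
    [5, 4, 5, 4, 5, 4],
    [4, 4, 3, 4, 4, 3],
    [2, 3, 2, 2, 3, 2],
    [1, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
]

# Score the two halves: odd-numbered vs. even-numbered items
half1 = [sum(row[0::2]) for row in responses]
half2 = [sum(row[1::2]) for row in responses]
r_half = pearson_r(half1, half2)

# Spearman-Brown correction for full test length
r_full = (2 * r_half) / (1 + r_half)
```

Splitting by odd/even items (rather than first half vs. second half) is common because it balances out item ordering and fatigue effects.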
Test-retest Reliability
A reliability coefficient determined by assessing the degree of relationship between scores on the same test, administered on two different occasions.
Validity
A measure of the truthfulness of a measuring instrument. It indicates whether the instrument measures what it claims to measure.
Observed score
true score + error score
True score
the average over independent administrations of the same test.
Error score
the random deviation between the true score and the observed score.
Conceptual formula: Reliability
Reliability = true score / observed score = true score / (true score + error score)
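In classical test theory this conceptual formula is usually read in terms of score variance; a minimal sketch under that reading, with made-up variance numbers:

```python
# Hypothetical variance components (classical test theory reading of the card)
true_var = 8.0    # variance of true scores across people
error_var = 2.0   # variance of random measurement error

# Observed score = true score + error score, so the variances add
observed_var = true_var + error_var

# Reliability = true / (true + error)
reliability = true_var / observed_var
# 8.0 / 10.0 = 0.80: 80% of observed score differences reflect true differences
```

As the error variance shrinks toward zero, the ratio approaches 1.00, i.e. a perfectly consistent measurement.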
Cronbach's alpha (α)
an estimate of reliability derived from the interrelatedness of the items.
Rule of thumb for Cronbach's alpha (α):
o < .60 poor (the items may not be combined into one score)
o .60 – .80 reasonable (the items may be combined into one score)
o > .80 good.
RELIABILITY AND VALIDITY:
Quantitative research
reliability is a necessary condition for validity, but not a sufficient one. > A reliable measurement is no guarantee of a valid conclusion.
RELIABILITY AND VALIDITY:
Qualitative research
often (but not always) conducted by researchers with a different philosophy-of-science outlook than researchers who do more quantitative work. > They do not necessarily share the same views on the importance of reliability and validity, or on the relation between those two concepts.