MODULE #5 Flashcards
refers to the accuracy and precision of a research tool. When a measure is precise, the reader has a level of confidence that differences between groups are not explained by differences in the way the trait was measured
RELIABILITY
Methods of testing reliability
a. Stability of measurement
b. Internal Consistency
c. Equivalence
this is the extent to which the same scores are obtained when the instrument is used with the same sample on separate occasions. A stable research instrument is one that can be repeated over and over on the same research subject and will produce the same results
Stability of measurement
Tests of stability
Test - Retest
Repeated observations
the repeated measurements over time using the same instrument on the same subjects are expected to produce the same results. This is used in interviews, examinations, and questionnaires.
Test - Retest
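
As an illustration, a minimal Python sketch of how test-retest reliability is commonly summarized: the scores below are invented for illustration, and the two administrations of the same instrument are simply correlated, with a high correlation indicating a stable measure.

import numpy as np

# Hypothetical scores for the same five subjects on two occasions
time_1 = np.array([12, 18, 15, 22, 9], dtype=float)
time_2 = np.array([13, 17, 16, 21, 10], dtype=float)

# Test-retest reliability is commonly reported as the Pearson
# correlation between the two administrations.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")
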
The measurement of the variable or trait is repeated over time, and the results at each measurement time are expected to be similar
Repeated observations
The instrument shows that all indicators or subparts measure the same characteristics or attributes of the variables.
Internal Consistency
Tests of Internal Consistency
Split-Half Correlations
Cronbach’s alpha coefficient
scores on one half of a subject’s responses are compared to scores on the other half. If all items are consistently measuring the overall concept, then the scores on the two halves of the test should be highly correlated
Split-Half Correlations
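
A short Python sketch of the split-half idea, using invented item responses: odd-numbered items form one half and even-numbered items the other, the two half-test scores are correlated, and (a common companion step not stated on the card) the Spearman-Brown formula projects that correlation up to full test length.

import numpy as np

# Hypothetical responses: 6 subjects x 8 items (e.g., 1-5 ratings)
responses = np.array([
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5, 4, 5],
    [3, 3, 2, 3, 3, 2, 3, 3],
    [4, 4, 4, 5, 4, 4, 5, 4],
    [1, 2, 1, 1, 2, 1, 1, 2],
], dtype=float)

# One half is the odd-numbered items, the other half the even-numbered items.
half_a = responses[:, 0::2].sum(axis=1)
half_b = responses[:, 1::2].sum(axis=1)

# Correlate the two half-test scores.
r_half = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown correction estimates full-length reliability.
r_full = 2 * r_half / (1 + r_half)
print(f"Split-half r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
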
a measure of internal consistency, that is, how closely related a set of items are as a group. It is a useful device for establishing reliability in a highly structured quantitative data collection instrument.
Cronbach’s alpha coefficient
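
A sketch of the standard Cronbach's alpha formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), applied to invented data in Python.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a subjects-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 subjects answering 4 items
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 5, 4, 4],
], dtype=float)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
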
primarily concerns the degree to which two or more independent observers or coders agree about scoring. If there is a high level of agreement, then the assumption is that measurement errors have been minimized
Equivalence
Tests of Equivalence
Alternate Form
Inter-Rater
two tests are developed based on the same content but the individual items are different. When these two tests are administered to subjects at the same time, the results can be compared
Alternate form
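
A brief sketch with invented scores: when two parallel forms are given to the same subjects in a single session, equivalence is typically summarized by correlating the two sets of totals.

import numpy as np

# Hypothetical totals for the same six subjects on two parallel forms
form_a = np.array([30, 22, 27, 18, 25, 33], dtype=float)
form_b = np.array([29, 24, 26, 19, 24, 32], dtype=float)

# Alternate-form (parallel-forms) reliability: correlation of the two forms
r_forms = np.corrcoef(form_a, form_b)[0, 1]
print(f"Alternate-form reliability: {r_forms:.2f}")
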
this is the method of testing for equivalence when the design calls for observation, and it is used to determine whether two observers using the same instrument at the same time will obtain similar results
Inter-Rater
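
A sketch of two common ways to quantify observer agreement, using invented ratings: simple percent agreement, and Cohen's kappa, which adjusts that agreement for what would be expected by chance.

from collections import Counter

# Hypothetical codes assigned by two observers to the same 10 observations
rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

n = len(rater_1)

# Simple percent agreement
p_observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n

# Chance agreement from each rater's marginal proportions
c1, c2 = Counter(rater_1), Counter(rater_2)
p_chance = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(rater_1) | set(rater_2))

# Cohen's kappa corrects observed agreement for chance agreement
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Percent agreement: {p_observed:.2f}, Cohen's kappa: {kappa:.2f}")
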