U11 Flashcards
Measurement
involves rules for assigning numeric values to qualities of objects to designate the quantity of the attribute
Errors of Measurement
Even the best instruments have a certain amount of error
Calculating Error
Obtained Score = True Score ± Error
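This decomposition can be shown with a tiny sketch; all numbers are hypothetical, for illustration only:

```python
# Classical test theory: Obtained Score = True Score +/- Error
# The values below are invented for illustration.
true_score = 75.0   # the error-free score (never directly observable)
error = -3.0        # e.g., fatigue or a noisy room lowered performance
obtained_score = true_score + error
print(obtained_score)  # 72.0
```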
Most common factors contributing to Error
Situational contaminants, response-set biases, transitory personal factors, administration variations, item sampling
Reliability
the consistency with which an instrument measures an attribute accurately. 3 related concepts:
Stability, the reliability coefficient, and test-retest reliability
Stability
of a measure is the extent to which the same scores are obtained when the instrument is used with the same people on separate occasions. Assessed through Test-retest reliability procedures
Reliability coefficient
a numeric index of a measure’s reliability. Range from 0.00 to 1.00
Closer to 1.00 the more reliable (stable) the measuring instrument
Test-retest reliability
the same measure is administered twice to the same people and the two sets of scores are correlated; the further apart the administrations are, the more the reliability estimate tends to decline.
Internal Consistency
for scales that involve summing items, ideally all items measure the same critical attribute. An instrument's internal consistency reliability is the extent to which all its subparts measure the same characteristic
Split-half technique
items are split into two halves (commonly odd- vs even-numbered items), and the two half-scores are used to generate a reliability coefficient. If both halves are really measuring the same attribute the correlation will be high
Cronbach alpha (Coefficient alpha)
gives an estimate of the split-half correlation for all possible ways of dividing the measure into 2 halves. Values range from 0.00 to 1.00; the closer to 1.00, the more internally consistent the measure.
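Alpha can be computed directly from item scores using the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch with invented data:

```python
from statistics import pvariance

# Hypothetical responses of 4 people to a 3-item scale (rows = people)
responses = [
    [1, 1, 2],
    [2, 2, 2],
    [3, 4, 3],
    [4, 4, 4],
]

k = len(responses[0])                # number of items
items = list(zip(*responses))        # columns = items
item_vars = sum(pvariance(item) for item in items)
total_var = pvariance([sum(row) for row in responses])

# Cronbach's alpha
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(round(alpha, 2))  # 0.96
```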
Equivalence
used primarily with observational instruments. Determines the consistency or equivalence of ratings made by different observers or raters. Can be assessed with interrater reliability, estimated by having 2+ observers make simultaneous, independent observations and then comparing their ratings.
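In its simplest form, interrater reliability is the proportion of observations on which two raters agree; Cohen's kappa additionally corrects for chance agreement. A sketch with hypothetical binary codings (1 = behavior observed, 0 = not observed):

```python
# Two observers independently code the same 5 observation periods
rater_a = [1, 1, 0, 1, 0]
rater_b = [1, 0, 0, 1, 0]

n = len(rater_a)
# Simple percent agreement
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa: agreement corrected for chance
p_yes = (sum(rater_a) / n) * (sum(rater_b) / n)   # both say 1 by chance
p_no = (1 - sum(rater_a) / n) * (1 - sum(rater_b) / n)  # both say 0 by chance
p_chance = p_yes + p_no
kappa = (agreement - p_chance) / (1 - p_chance)

print(agreement)        # 0.8
print(round(kappa, 2))  # 0.62
```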
Interpretation of Reliability Coefficients
- hypotheses not supported may be d/t unreliability of the measuring tool
- a low reliability prevents an adequate testing of research hypothesis
- reliability estimates via different procedures for same instrument are NOT identical
- sample heterogeneity increases the reliability coefficient (instruments are designed to measure differences; conversely, sample homogeneity = decreased reliability coefficient)
- longer instruments (more items) tend to have higher reliability than shorter instruments
Validity
degree to which an instrument measures what it is supposed to be measuring
*can have high reliability of instrument with no evidence of validity
(high reliability does not = high validity)
*low reliability IS evidence of low validity
(low reliability = low validity)
Validity coefficient - score greater than or equal to .70 desirable (range .00-1.00)
Face validity
whether the instrument looks as though it is measuring the appropriate construct
Content Validity
adequate coverage of content area being measured
- crucial in tests of knowledge
- based on judgement
Criterion Validity
researchers seek to establish a relationship btw scores on an instrument and some external criterion (if scores correspond the instrument is said to be valid)
Construct validity
challenging, what construct is the instrument actually measuring?
Interpretation of validity
not an all or nothing characteristic of an instrument.
Sensitivity and specificity
Sensitivity is ability of an instrument to correctly identify a case, that is, to correctly screen in or diagnose a condition
Specificity is the instrument's ability to correctly identify noncases, that is, to correctly screen out those without the condition.
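Both indices come straight from a 2×2 table of screening results against a gold-standard diagnosis; a sketch with hypothetical counts:

```python
# Hypothetical screening results vs a gold-standard diagnosis
true_pos = 40   # condition present, correctly screened in
false_neg = 10  # condition present, missed by the instrument
true_neg = 45   # condition absent, correctly screened out
false_pos = 5   # condition absent, wrongly screened in

sensitivity = true_pos / (true_pos + false_neg)  # ability to identify cases
specificity = true_neg / (true_neg + false_pos)  # ability to identify noncases

print(sensitivity)  # 0.8
print(specificity)  # 0.9
```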
Assessment of qualitative data
Gold standard criteria for qualitative researchers: to establish trustworthiness of qualitative data you need Dependability, Credibility, Confirmability and Transferability
Credibility
refers to confidence in the truth of data and interpretations of them
To Demonstrate Credibility:
1- prolonged engagement
2 - persistent observation
“If prolonged engagement provides scope, persistent observation provides depth”
Triangulation
can increase credibility by helping to overcome the intrinsic bias that comes from single-method, single-observer and single-theory studies; yields a more complete and contextualized portrait of the phenomenon under study. 4 types of triangulation: data source, investigator, theory, method
Dependability
data stability over time and over conditions
There can be no credibility in the absence of dependability
Confirmability
objectivity or neutrality of the data. Congruence btw 2+ ppl about the data's accuracy, relevance, or meaning
Transferability
extent to which findings from the data can be transferred to other settings; thus similar to the concept of generalizability.
Critiquing data quality
Qual/Quant:
- can I trust the data?
- do the data accurately reflect the true state of the phenomenon?
Quant:
- Reliability and Validity of the measures
- be wary when there is no info on data quality, or when reports suggest unfavourable reliability/validity
- also wary when hypotheses not confirmed
Qual:
*high alert to info on data quality when only a single researcher collects, analyzes, and interprets all of the data.