Defining and Measuring Variables: Chapter 3 Flashcards
why measure variables
- comparison
- classification
- decision-making
- diagnosis
- prediction
- program evaluation
variable types
- directly observable: height, weight
- inferred states: emotions
construct
- presumed unobserved internal mechanisms that account for externally observed behaviour
- abstract to concrete
- examples:
anxiety, self esteem, motivation, aggression, intelligence
operational definitions
- precise description of what you will measure, how and when
- defines operations that allow us to confidently link the unobservable construct with the observable behaviour
- transforms abstract into concrete variable
- they must be clear and precise
measurement procedures of operational definitions must meet these criteria:
- reliability
- validity
types of validity
- face validity
- predictive validity
- concurrent validity
- construct validity
- convergent and divergent validity
- internal validity
- external validity
face validity
extent to which the measurement appears at first glance to be a plausible measure of the variable
predictive validity
- strength of the relationship between scores on the measure and a future criterion variable
- can you use one to predict the other?
concurrent validity
- scores on a new measure correlate with scores on an established measure of the same variable, taken at the same time
- old vs new measure
construct validity
- extent to which scores obtained from a measure behave the way the theory of the construct predicts
- grows with the accumulation of evidence from studies using the same measurement with similar results
convergent and divergent validity
- use multiple measures in a study
- scores on measures of the same construct should converge (correlate strongly); scores on measures of different constructs should diverge (correlate weakly)
internal validity
- lets you conclude that changes in X caused the observed changes in Y
- depends on appropriate control of other variables (confounding factors)
external validity
- extent to which your results can generalize to other settings and populations
- if the effect remains, then the test has good external validity
reliability
- consistency of a measure over repeated applications under the same conditions
- every measurement contains some variability (error)
- measured score = true score + error
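The score model above can be illustrated with a small simulation. In classical test theory (the standard framework behind this formula, not named in the notes), reliability is the share of observed-score variance that comes from the true scores; the numbers here are purely illustrative:

```python
import random

random.seed(0)

# measured score = true score + error:
# simulate 1000 participants, each with a stable true score
# plus random measurement error on one testing occasion.
true_scores = [random.gauss(100, 15) for _ in range(1000)]
errors = [random.gauss(0, 5) for _ in range(1000)]
measured = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability as the ratio of true-score variance to observed variance;
# with these parameters the theoretical value is 15**2 / (15**2 + 5**2) = 0.9.
reliability = variance(true_scores) / variance(measured)
print(round(reliability, 2))
```

Shrinking the error standard deviation pushes the ratio toward 1, i.e. a more reliable measure.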
types of reliability
- test-retest reliability
- inter-rater reliability
- split-half reliability
test-retest reliability
repeat same measurement and calculate correlation between scores
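The correlation step can be sketched in a few lines; the participant scores below are hypothetical, and `pearson_r` is a hand-rolled Pearson correlation, not a named method from the notes:

```python
from statistics import mean

# Hypothetical scores for five participants tested on two occasions.
time1 = [12, 15, 11, 18, 14]
time2 = [13, 14, 12, 17, 15]

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A high positive correlation indicates good test-retest reliability.
print(round(pearson_r(time1, time2), 2))  # → 0.95
```

The same correlation calculation applies to inter-rater reliability, with the two lists holding the two raters' scores instead of two testing occasions.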
inter-rater reliability
- two independent raters score the same behaviour
- compare the scores from the two raters and calculate the correlation
split-half reliability
- usually for clinical scales and questionnaires
- score each half of the items separately and correlate the two half-scores
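A minimal sketch of the split-half procedure, using an odd/even item split on hypothetical questionnaire data. The Spearman-Brown step at the end is a standard correction for the shortened halves, not something stated in the notes:

```python
from statistics import mean

# Hypothetical item scores (1 = endorsed) for four respondents
# on an eight-item scale; the numbers are illustrative only.
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
]

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Split the items into odd- and even-numbered halves and total each half.
odd_totals = [sum(row[0::2]) for row in responses]
even_totals = [sum(row[1::2]) for row in responses]

half_r = pearson_r(odd_totals, even_totals)

# The half-test correlation understates full-test reliability, so the
# Spearman-Brown formula projects it up to the full test length.
full_r = (2 * half_r) / (1 + half_r)
print(round(half_r, 2), round(full_r, 2))  # → 0.87 0.93
```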
measurement error
observed scores may not accurately reflect the variable/construct being measured
sources of error
- participant
- instrument and apparatus
- testing
participant errors
- mood
- motivation
- fatigue
- health
- memory
- practice
- knowledge
- ability
instrumental/apparatus errors
- sensitivity
- length
- vocabulary
- clarity of instructions
- appropriateness
- intrusiveness