Defining and Measuring Variables: Chapter 3 Flashcards
why measure variables
- comparison
- classification
- decision-making
- diagnosis
- prediction
- program evaluation
variable types
- directly observable
height, weight
- inferred states
emotions
construct
- presumed unobserved internal mechanism that accounts for externally observed behaviour
- abstract to concrete
- example:
anxiety, self esteem, motivation, aggression, intelligence
operational definitions
- precise description of what you will measure, how and when
- defines operations that allow us to confidently link the unobservable construct with the observable behaviour
- transforms abstract into concrete variable
- they must be clear and precise
measurement procedures of operational definitions must meet these criteria:
- reliability
- validity
types of validity
- face validity
- predictive validity
- concurrent validity
- construct validity
- convergent and divergent validity
- internal validity
- external validity
face validity
extent to which the measurement appears at first glance to be a plausible measure of the variable
predictive validity
- strength of the relationship between 2 variables
- can you use one to predict the other?
concurrent validity
- scores from a new measure correlate strongly with scores from an established measure of the same variable
- old vs new measure, administered at the same time
construct validity
- extent to which scores obtained from a measure behave exactly the same as the variable itself
- grows with the accumulation of evidence from the studies using the same measurement with similar results
convergent and divergent validity
- using multiple measures in a study
- scores from measures of the same construct should converge; scores from measures of different constructs should diverge
internal validity
- lets you say that changes in X have caused the observed changes in Y
- depends on appropriate control of other variables (confounding factors)
external validity
- extent to which your results can generalize to other settings and populations
- if the effect persists across settings and populations, the test has good external validity
reliability
- consistency of a measure over repeated applications under the same conditions
- every measurement contains some variability
- measured score = true score + error
types of reliability
- test-retest reliability
- inter-rater reliability
- split-half reliability
test-retest reliability
repeat same measurement and calculate correlation between scores
inter-rater reliability
- two independent observers rate the same behaviour in a given study
- compare the scores from the two raters and calculate the correlation
split-half reliability
- usually for clinical scales and questionnaires
- score half of the items and correlate those scores with the scores from the other half
measurement error
observed scores may not accurately reflect the variable/construct being measured
sources of error
- participant
- instrument and apparatus
- testing
participant errors
- mood
- motivation
- fatigue
- health
- memory
- practice
- knowledge
- ability
instrumental/apparatus errors
- sensitivity
- length
- vocabulary
- clarity of instructions
- appropriateness
- intrusiveness
testing errors
- comfort
- presence of others
- distractions
reducing error through standardization
- participants
- test protocol
- environment
- scoring procedures
scoring guidelines
- clear and easy to follow
- not overly complex
- experience required
- individual differences
standardizing participants
inclusion and exclusion criteria
- age
- gender
- education level
- health status
- ethnicity
standardizing test protocols
- must remain consistent
- instructions to participants
- treatment of participants
- administration of tests and measures
- order of tests and measures
standardizing the environment
should be most conducive to testing and repeatable
- time of the day
- day of the week
- time of year
- temperature
- noise level
- accessibility
standardizing scoring procedures
- marking criteria should be as clear and precise as possible
- allow participants some practice prior to recording scores
reliability coefficient
- ratio of true score variance to observed score variance
r = s^2_true / s^2_observed
acceptable reliability: r > .80
reflects the degree to which the measurement is free of error variance
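The true-score-plus-error decomposition and the reliability coefficient can be illustrated by simulation: generate true scores, add random measurement error, and take the ratio of the variances. A sketch with invented parameters:

```python
import random
from statistics import variance

random.seed(1)

# simulate: measured score = true score + error
# hypothetical parameters: true scores ~ N(100, 15), error ~ N(0, 5)
true_scores = [random.gauss(100, 15) for _ in range(1000)]
observed = [t + random.gauss(0, 5) for t in true_scores]

# reliability coefficient: true-score variance over observed-score variance
reliability = variance(true_scores) / variance(observed)
print(f"estimated reliability: {reliability:.2f}")
```

With these parameters the theoretical value is 15^2 / (15^2 + 5^2) = 0.90; the simulated estimate lands close to it.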
types of measurement
- qualitative
- quantitative
quantitative measurement
- nominal
- ordinal
- interval
- ratio scale
interval data
- variables have order and magnitude
- have equal intervals
- no true zero point
ratio scale data
- variables have order and magnitude
- have equal intervals
- have a true zero point
- examples: distance, length, reaction time
modalities of measurement
- self-report measures
- physiological measures
- behavioural measures
- multiple measures
self-report measures
- direct, but subjective
- social desirability
physiological measures
- objective but invasive
- costly and time consuming
behavioural measures
- observed behaviours require interpretation
- related behaviours can be grouped into clusters
multiple measures
- increase in confidence in validity of measurement
- can require complex statistical procedures
- interpretation can be challenging
range effect
measurement is not sensitive enough to detect differences near the ends of the scale (covers ceiling and floor effects)
ceiling effect
- clustering of scores at high end scale
- little possibility of increase in scores
floor effect
- clustering of scores at low end of scale
- little possibility of decrease in scores
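Ceiling and floor effects can be spotted by checking what fraction of scores pile up near one end of the scale. A sketch with hypothetical exam scores on a too-easy test (a floor effect would be the mirror image at the bottom):

```python
# hypothetical exam scores on a 0-100 test that was too easy
scores = [98, 100, 95, 99, 100, 97, 100, 96, 100, 94]

SCALE_MAX = 100

# fraction of scores within 5 points of the top of the scale
near_top = sum(s >= SCALE_MAX - 5 for s in scores) / len(scores)
print(f"{near_top:.0%} of scores are within 5 points of the scale maximum")
```

When most scores cluster this close to the maximum, the test leaves little room to detect improvement.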
experimenter bias includes…
- artifact
- bias
- limiting
artifact
non-natural feature introduced to the study accidentally
bias
measurement influenced by experimenter’s expectations regarding the outcome of the study
limiting experimenter bias
- standardize or automate the experiment
- single blind vs double blind
single-blind
researcher is not aware of the expected results
double-blind
neither the researcher nor the participant know the expected results
demand characteristics
any features of the experiment that:
- suggests the purpose and hypothesis of the study
- influence the participants to respond or behave in a certain way
can lead to reactivity:
- participants modify their natural behaviour because they know they are in a study
participant reactivity
- good subject role
- negativistic subject role
- apprehensive subject role
- faithful subject role
good subject role
support the hypothesis
negativistic subject role
acts contrary to the hypothesis (sabotage)
apprehensive subject role
act and answer in a socially desirable manner (fake good)
faithful subject role
follows instructions and acts naturally, ignoring any suspicions about the hypothesis
theories
statements about the mechanisms underlying a particular behaviour
constructs can influence external behaviour
external stimulus → construct → external behaviour
issues with operational definitions
- may leave out internal elements of a construct, as some symptoms are cognitive or affective
- may include extra components that are not part of the construct, such as hearing or developed vocabulary