Selecting Measures and Non-Experimental Methods I: Observational and Survey Research Flashcards
every measure we obtain consists of:
“true score” and error
error is due to:
- bias (a systematic deviation that is the result of confounds)
- random error (a result of nuisance variables, i.e. noise)
what are three sources of measurement error?
- experimenter
- participant
- observer/scorer
experimenter error examples?
random error, and bias (experimenter characteristics and expectancies)
solutions for experimenter errors?
standardize testing conditions; standardize the experimenter's appearance or replicate the experiment with different experimenters; standardize coding schemes, use automated recording equipment, and run single-blind research
examples of participant error
carelessness and distraction (which contribute to nuisance/random error), and participant bias
solutions for participant errors
set clear task instructions with emphasis on accuracy, include a manipulation check
what are some causes for participant bias
demand characteristics and the good-participant effect together can create a “pact of ignorance”, which invalidates results
demand characteristics
features of an experiment (including the experimenter) that inadvertently cue participants to act in a particular way
good participant effect
tendency for participants to behave as they perceive the researcher wants them to behave
how to control for demand characteristics
conduct double-blind research (removes confounds, but not nuisance); deception can be used, but it may itself create demand characteristics
response set
when the context affects the way a participant responds; can be a product of the experimental setting or of the questions asked (e.g., social desirability influencing answers)
response set contributes to:
response bias (participant bias)
to control for yea/nay-sayers:
include both agree-worded and disagree-worded items with switched implications (reverse-coding), randomize question presentation order, carefully review items for response sets, and use pilot tests (see the reverse-coding sketch below)
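A minimal sketch of reverse-coding in Python, assuming 5-point Likert responses stored in a pandas DataFrame (the column names and data are hypothetical):

```python
import pandas as pd

# Hypothetical 5-point Likert responses; "q2_rev" and "q4_rev" were worded
# in the disagree direction, so agreement implies the opposite of the construct.
responses = pd.DataFrame({
    "q1":     [5, 4, 5, 2],
    "q2_rev": [1, 2, 1, 4],
    "q3":     [4, 5, 5, 1],
    "q4_rev": [2, 1, 1, 5],
})

SCALE_MAX = 5  # reverse-code by flipping around the scale: new = (max + 1) - old
for col in ["q2_rev", "q4_rev"]:
    responses[col] = (SCALE_MAX + 1) - responses[col]

# After reverse-coding, a yea-sayer who agreed with everything no longer
# inflates the total score, because half the items now count against it.
responses["total"] = responses.sum(axis=1)
print(responses)
```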
observer error is only present in:
behavioural studies
examples of observer error
random observer error, and observer/scorer bias (e.g., confirmatory bias)
how to control for observer error:
eliminate the human observer (use mechanical measures), limit observer subjectivity (standardized coding schemes), make the observer “blind”
construct validity
the extent to which the manipulation or measure actually represents the claimed construct
- establishing reliability (criteria for validity)
test-retest reliability, inter-rater reliability, internal consistency (the measurement must be repeatable and consistent)
what is internal consistency
extent to which responses to items that purport to measure the same unidimensional construct are similar; tested via split-half correlation or Cronbach's alpha; improved by adding items/questions
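A minimal sketch of Cronbach's alpha for a respondents-by-items score matrix; the `cronbach_alpha` helper and the data are hypothetical, not part of any library:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 6 participants to 4 items meant to tap one construct.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))  # values near 1 indicate high internal consistency
```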
- content validity (criteria for validity)
extent to which a measure covers all aspects of a construct
- convergent validity (criteria for validity)
extent to which a measure correlates with other indicators of the same construct
- discriminant validity (criteria for validity)
extent to which your measure is distinguishable from other constructs (both related and unrelated)
sensitivity
ability of measures to DETECT effects
how to achieve sensitivity in measurement?
use measures with score variance (avoid all-or-nothing measures, add scale points, use pilot tests) and avoid restriction of range (floor/ceiling effects)
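A small illustration of how a ceiling effect reduces sensitivity, using simulated (made-up) data: capping scores restricts their range and weakens the measure's relationship with the true underlying differences.

```python
import numpy as np

rng = np.random.default_rng(0)
true_ability = rng.normal(50, 10, size=200)   # the construct we want to detect

# A sensitive measure tracks ability plus a little random error.
sensitive = true_ability + rng.normal(0, 3, size=200)

# A too-easy measure tops out at 52 points: a ceiling effect that restricts range.
ceilinged = np.minimum(sensitive, 52)

# The restricted-range measure correlates less strongly with true ability,
# i.e. it is less able to detect real differences between participants.
print(round(np.corrcoef(true_ability, sensitive)[0, 1], 2))
print(round(np.corrcoef(true_ability, ceilinged)[0, 1], 2))
```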
what is the main identifying point for descriptive research methods
cannot infer causation
what are types of descriptive studies?
archival studies, case studies, naturalistic observation, participant observation, clinical perspective
archival studies
using data that has been previously recorded to answer a new question
case study
collecting detailed information about the behaviour of a single person (used to study one person in depth over an extended period of time; the in-depth examination is qualitative)
cons of case studies
generalization is based on an N of 1; precludes cause-and-effect conclusions
naturalistic observation
observing behaviour in the real world unobtrusively
participant observation
when the observer embeds themselves within the group being studied
clinical perspective
descriptive approach aimed at understanding and correcting a particular behavioural problem
how is clinical perspective different from participant observation?
- the client chooses the clinician
- clinicians cannot be unobtrusive/passive, they have been asked to participate in the situation
- participant observer’s goal is understanding, whereas the clinician’s goal is helping
reactivity
when the knowledge that one is being observed affects one's behaviour
observational techniques are high on:
external validity (findings generalize readily from the study to the real world) but low on internal validity (cannot be sure the DV is caused by the “IV”, because of the lower degree of control)
what are some more issues with observational techniques?
low on objectivity, cannot make cause/effect statement
descriptive surveys
seeks to determine what percentage of the population has particular characteristics, beliefs, or behaviours; each question tests for one quality; the goal is a sample that is as representative of the population as possible
Self-selection bias
those who have experienced something they were unhappy with are more likely to respond
sampling and self-selection bias in surveys/questionnaires decrease:
representativeness of the sample and thus compromises generalizability
analytic surveys
seeks to determine the relevant variables and how they are related; establishes correlations ranging from -1 to +1 (the magnitude indicates the strength of the relationship, the sign its direction)
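A minimal sketch of the kind of correlation an analytic survey reports, using NumPy's `corrcoef` on hypothetical data:

```python
import numpy as np

# Hypothetical analytic-survey data: hours of study and exam score for 8 respondents.
hours = np.array([1, 2, 2, 3, 4, 5, 6, 7])
score = np.array([52, 55, 60, 61, 70, 72, 80, 85])

# Pearson's r: the sign gives the direction of the relationship,
# the magnitude (0 to 1) gives its strength.
r = np.corrcoef(hours, score)[0, 1]
print(round(r, 2))  # close to +1 here, i.e. a strong positive relationship
```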
surveys/questionnaires tend to examine:
an opinion or belief
tests/inventories are more ….. compared to surveys/questionnaires
objective; a test/inventory is a specific assessment of a trait or quality, measured on a defined scale.
ex. achievement tests (bar exam, candidacy exams) or aptitude tests (MCAT, SAT, LSAT)
single strata approach
select from one subgroup of a population
cross-sectional approach
compares multiple subgroups at the same time
longitudinal research
looks at one group over an extended period of time
the group in a longitudinal study is called a:
cohort (may display certain traits that make generalization more difficult)