Experimental Design Flashcards
IV
- independent variable
- systematically varied/manipulated by researcher
- requires at least 2 levels/conditions for comparison
- SITUATIONAL (e.g. number of bystanders present in a helping behaviour study)
- TASK VARIABLES (e.g. groups given differing logical problems to solve)
- INSTRUCTIONAL (e.g. groups instructed to memorise objects via imagery OR given no instructions)
DV
- dependent variable
- outcome/measure on which the effect of the IV is observed by the researcher
Manipulating IV
- non-systematic random allocation (e.g. coin toss), ruling out systematic differences (e.g. IQ, personality), to either (see sketch after this card):
- CONTROL GROUP/CONDITION: no manipulation
- EXPERIMENTAL GROUP/CONDITION: manipulation
- direct manipulation often impossible so…
- INDIRECT MANIPULATION: theoretical variables affected indirectly then checked via “manipulation check”
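A minimal Python sketch of the "coin toss" allocation idea above; participant IDs and group names are illustrative assumptions, not from the flashcards:

```python
# "Coin toss" random allocation of pps to control vs experimental conditions,
# ruling out systematic differences (IQ, personality, etc.) between groups.
import random

participants = ["pp01", "pp02", "pp03", "pp04", "pp05", "pp06"]  # hypothetical pps
groups = {"control": [], "experimental": []}

for pp in participants:
    coin = random.choice(["control", "experimental"])  # every pp has an equal chance of either condition
    groups[coin].append(pp)

print(groups)  # note: a literal coin toss can give unequal group sizes (block randomisation, later, fixes this)
```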
MIV: Indirect Manipulation (Example)
- IV = attribution for failure (internal/external)
- pps deliberately exposed to failure (e.g. a difficult test), then asked to reflect on how INTERNAL factors (e.g. their own character) contributed
- other pps asked to reflect on how EXTERNAL factors (e.g. bad luck/test difficulty) contributed
- manipulation check (e.g. scale rating how internal/external the failure felt, completed after reflection) to confirm the desired effect (e.g. internal-reflection pps answer towards the internal end of the scale); see sketch below
- testing for DV differences between conditions is now possible
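A minimal sketch of the manipulation check described above, assuming a hypothetical 1 (fully external) to 7 (fully internal) rating scale and made-up scores:

```python
# Manipulation check: did the internal-reflection group actually rate the failure
# as more internal than the external-reflection group?
from statistics import mean

internal_condition = [6, 5, 7, 6, 5]   # hypothetical scale scores after internal reflection
external_condition = [2, 3, 1, 2, 3]   # hypothetical scale scores after external reflection

if mean(internal_condition) > mean(external_condition):
    print("manipulation check passed: DV differences can now be tested between conditions")
else:
    print("manipulation check failed: rethink the indirect manipulation")
```

In a real study a formal significance test would usually back up this comparison.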
EV
- extraneous variable
- variables not of interest that can influence the DV and threaten the validity of findings by obscuring the effect of interest
- if uncontrolled, they may systematically influence the DV, leading to confounding
- not all EVs are potential confounds (e.g. if age is similar across conditions it cannot confound) BUT they can still harm a study:
E.g. a study w/only 60+ year-old pps can have limited EXTERNAL VALIDITY; findings may not generalise beyond that age group (e.g. to children)
CV
- confounding variable (“special EV”)
- unintended/accidental EVs that vary systematically with the IV, providing an alternative interpretation of the results
- a systematic effect of EV on DV could be mistaken for effect of IV
Measuring DV
- refer to previous research
- PILOT STUDY to find:
- CEILING EFFECT: task too easy, scores cluster near the maximum; pp differences disguised
- FLOOR EFFECT: task too hard, scores cluster near the minimum; pp differences disguised
- SOLUTION: moderate task difficulty, established via pilot testing (see sketch below)
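A minimal sketch of using pilot data to screen for ceiling/floor effects; the scores and the 85% threshold are illustrative assumptions only:

```python
# Flag ceiling/floor effects when the average pilot score sits too close to either extreme.
from statistics import mean

def check_difficulty(scores, max_score, threshold=0.85):
    proportion = mean(scores) / max_score
    if proportion >= threshold:
        return "ceiling effect: task too easy, pp differences disguised"
    if proportion <= 1 - threshold:
        return "floor effect: task too hard, pp differences disguised"
    return "moderate difficulty: task suitable for the main study"

pilot_scores = [18, 19, 20, 17, 20, 19]              # hypothetical pilot scores out of 20
print(check_difficulty(pilot_scores, max_score=20))  # -> ceiling effect
```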
MDV: DV Selection Issues (Example)
- DV selection can often be complicated by practical constraints
E.g. researcher looking at the impact of alcohol consumption on road fatalities:
- IV manipulated via experimental groups consuming various quantities of alcohol BUT unethical
- irresponsible/unethical/illegal to measure the obvious DV directly (i.e. pps involved in real driving accidents)
MDV: DV Selection Solutions (Example)
- high-alcohol group could consume only the legal limit, but then the DV (accidents) isn't sufficiently sensitive to detect the IV's impact, and real accidents are still unethical to use
- DVs must be relevant to the real-world outcome but sensitive to the IV, so…
- RELEVANCE-SENSITIVITY TRADE-OFF: measure reaction times (a critical determinant of safe driving) or use a VR driving simulator, removing legal/ethical concerns
MDV: Relevance-Sensitivity Trade-Off
- the more sensitive DV is to IV changes, the less relevant it may be to IRL phenomena
- the DV's link to the real-life phenomenon may be tenuous, undermining EXTERNAL VALIDITY, as "proxy measures" may not reflect the target variable closely enough
QED: Variables
- include gender/age/cultural group/IQ/personality traits; unmanipulated/self-selected but can be basis of group allocation
- require additional considerations to avoid possible confounds
QED: Self-Selection Bias (Example)
- think of putting yourself forward as a volunteer; you automatically have qualities which may affect the DV in the study
- WALD (WWII): returning aircraft came home w/bullet holes; it was suggested the areas most riddled with damage be reinforced; Wald pointed out these were only the planes that RETURNED, so the untouched areas should be reinforced, as planes hit there never made it back
Quasi-Experimental Design
- some studies compare groups that differ on the "IV", but the IV is not manipulated by the researcher
- causal inferences cannot be established
- CANNOT claim the IV causes the DV; only that groups defined by the IV differ on the DV
- think opposite of experimental designs.
QED VS ED
E.g. studying the effect of self-esteem on altruistic behaviour.
ED) Manipulate self-esteem (e.g. via praise); random allocation to high/low esteem conditions, then altruism measured.
OUTCOME = can argue high self-esteem causes altruistic behaviour; causality is available to explain the relationship; possible contribution to relevant theory.
QED) Measure self-esteem; group based on scores (e.g. high/low), then altruism measured.
OUTCOME = can only claim that high self-esteem pps were more likely to behave altruistically than low self-esteem pps; no causality can be inferred, so theory contribution is limited/impossible.
QED: ED Interaction
- manipulated and QED variables often blend in studies
- BANDURA'S BOBO DOLL (1961); manipulated IV = type of exposure to violence; quasi-experimental variable = gender (self-selected)
Between-Participants Design
- ie. independent/non-repeated measures
- pps assigned 1 condition
- comparison between groups assigned randomly
- used when IV is self-selecting
BPD: Evaluation
POSITIVES:
- each pp fresh/naive to the hypothesis
NEGATIVES:
- more pps required
- unsuspected differences between pps
BPD: Random Allocation
- every pp has same chance of being placed into any condition
- objective to spread important individual differences evenly across conditions
- EVALUATION: groups may not be equal; doesn’t necessarily target important differences; confounding variables
- equal groups achieved via:
BLOCK RANDOM ASSIGNMENT
STRATIFIED RANDOM ASSIGNMENT
MATCHING PROCEDURE + RANDOM ALLOCATION
BPD-RA: Blocked Random
- conditions arranged into blocks (each block contains every condition once, in random order); pps allocated block by block, so groups end up equal in size (see sketch after this card)
EVALUATION: - again, doesn’t necessarily target important characteristics
- confounding variables
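A minimal sketch of block random assignment as described above; names are illustrative assumptions:

```python
# Block random assignment: each block contains every condition once in a random
# order, so group sizes stay equal as pps are allocated.
import random

conditions = ["control", "experimental"]
participants = ["pp01", "pp02", "pp03", "pp04", "pp05", "pp06"]  # hypothetical pps

allocation = {}
block = []
for pp in participants:
    if not block:                                       # start a new block: one slot per condition, shuffled
        block = random.sample(conditions, k=len(conditions))
    allocation[pp] = block.pop()

print(allocation)  # each condition receives exactly len(participants) / len(conditions) pps
```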
BPD-RA: Stratified Random
- identifies important characteristics and groups pps accordingly
- allocate each pp to the block matching their characteristic, then to a group/condition in turn within that block (see sketch after this card)
- each block includes all conditions in randomised order
- guarantees equal spread of pps p/group
- ensures each condition has pp before condition repetition
- requires anticipating/measuring/accommodating possible extraneous variables in advance
EVALUATION: - only balances the characteristic(s) chosen for stratification; other important differences may remain uncontrolled
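A minimal sketch of stratified random assignment, assuming age band is the important characteristic; all names and groupings are hypothetical:

```python
# Stratified random assignment: group pps by an important characteristic first,
# then allocate to conditions within each stratum so the characteristic is
# spread evenly across conditions.
import random

conditions = ["control", "experimental"]
strata = {
    "18-30": ["pp01", "pp02", "pp03", "pp04"],
    "60+":   ["pp05", "pp06", "pp07", "pp08"],
}

allocation = {}
for stratum, pps in strata.items():
    random.shuffle(pps)                                   # random order within the stratum
    for i, pp in enumerate(pps):
        allocation[pp] = conditions[i % len(conditions)]  # alternate conditions within the stratum

print(allocation)  # each condition ends up with equal numbers from each age band
```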
BPD-RA: Matching Procedure
- must have grounds for believing the matching variable affects the DV
- obtain a score p/pp on the matching variable via a valid/accurate measure
- arrange scores in ascending order
- form pairs of pps w/adjacent scores (e.g. 5 pairs from 10 pps for 2 conditions)
- randomly assign 1 pp p/condition from each pair
- the matching variable is now effectively controlled (see sketch after this card)
EVALUATION: - impractical/impossible if sample is large
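A minimal sketch of the matching procedure above, assuming IQ is the matching variable and using made-up scores for 10 pps:

```python
# Matched random allocation: rank pps on the matching variable, pair adjacent
# scores, then randomly assign one member of each pair to each condition.
import random

iq_scores = {"pp01": 98, "pp02": 121, "pp03": 105, "pp04": 99, "pp05": 118,
             "pp06": 104, "pp07": 110, "pp08": 111, "pp09": 125, "pp10": 95}

ranked = sorted(iq_scores, key=iq_scores.get)                 # arrange scores in ascending order
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]   # 5 pairs of adjacent scores

allocation = {}
for pair in pairs:
    random.shuffle(pair)                                      # randomly assign 1 pp p/condition
    allocation[pair[0]] = "control"
    allocation[pair[1]] = "experimental"

print(allocation)  # IQ is now spread evenly across the two conditions
```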
Within-Participants Design
- ie. repeated measures
- pps take part in 2/+ conditions
- comparison within 1 group
- used when conditions involve brief tests but extensive preparation (e.g. psychophysiology) or when the population is small
WPD: Evaluation
POSITIVES:
- smaller samples needed
- more data p/pp
- reduced error variance
NEGATIVES:
- small samples can limit generalisability
- threats to internal validity (e.g. maturation over time)
- order effects
WPD: Order Effects
- PRACTICE EFFECT: later performance improved via practice
- FATIGUE EFFECT: later performance reduced via fatigue
- CARRYOVER EFFECT: results differ depending on the sequence of conditions; experience of condition 1 affects condition 2 and vice versa
WPD-OE: Counterbalancing
- using more than one sequence of conditions
- either:
- COMPLETE: every possible sequence used at least once (e.g. 3! = 6 sequences for 3 conditions)
- PARTIAL: uses a subset of all possible sequences, either sampling positions systematically (e.g. a Latin square, where each condition appears in each ordinal position once) OR randomising condition order p/pp (see sketch below)
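A minimal sketch contrasting complete and partial counterbalancing for three conditions; condition labels are illustrative:

```python
# Complete vs partial counterbalancing of condition order.
import itertools
import random

conditions = ["A", "B", "C"]

# COMPLETE: every possible sequence is used at least once (3! = 6 sequences).
complete = list(itertools.permutations(conditions))
print(complete)

# PARTIAL (Latin square): a subset of sequences in which each condition
# appears in each ordinal position exactly once.
latin_square = [["A", "B", "C"],
                ["B", "C", "A"],
                ["C", "A", "B"]]
print(latin_square)

# PARTIAL (random): simply randomise the condition order per pp.
print(random.sample(conditions, k=len(conditions)))
```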
Experimenter Bias
- errors in procedure due to researcher’s beliefs/behaviour/desires for results
CONTROLLED VIA:
- automating the procedure
- double-blind procedure (researcher/pps don’t know which condition is being tested)
Participant Bias
- errors in procedure due to participants’ unintentional/intentional influence
INCLUDES:
- Hawthorne effect: pps change behaviour because they know they're being studied
- demand characteristics ("please-u"/"screw-u" effects): "good"/"bad" pps guess the hypothesis and try to help/destroy it respectively
- evaluation apprehension
- acquiescence effect
PB: Control
CONTROLLED VIA:
- DECEPTION: pps naive to study purpose; reduces DC/evaluation fear
- PLACEBO: some pps given an inert "treatment"; distinguishes real treatment effects from expectation effects