Final Exam 2 Flashcards
notation system for factorials
# of levels of IV1 X # of levels of IV2; multiply the number of levels to find the total number of conditions (e.g., a 2 X 3 design has 6 conditions)
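The multiplication rule above can be sketched in a few lines (the example levels are made up; any factorial design works the same way):

```python
from math import prod

# Number of levels of each IV in a hypothetical 2 x 3 factorial design
levels = [2, 3]

# Total conditions = product of the levels of every factor
total_conditions = prod(levels)
print(total_conditions)  # 6
```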
main effect
overall effect of Independent Variable, can have as many main effects as independent variables
interaction
effect of one factor depends on the level of another factor. If the effect of one IV is the same at every level of the other IV, no interaction occurred. Can also plot on a graph: parallel lines = no interaction
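The "parallel lines" rule can be checked numerically: compute the effect of IV1 at each level of IV2 and take the difference of differences. A minimal sketch with hypothetical cell means (a zero difference means parallel lines, i.e., no interaction):

```python
# Hypothetical 2 x 2 cell means
cell_means = {
    ("IV1_low", "IV2_low"): 10, ("IV1_high", "IV2_low"): 20,
    ("IV1_low", "IV2_high"): 15, ("IV1_high", "IV2_high"): 25,
}

# Effect of IV1 at each level of IV2
effect_at_low = cell_means[("IV1_high", "IV2_low")] - cell_means[("IV1_low", "IV2_low")]
effect_at_high = cell_means[("IV1_high", "IV2_high")] - cell_means[("IV1_low", "IV2_high")]

# Difference of differences: 0 => parallel lines => no interaction
interaction = effect_at_high - effect_at_low
print(interaction)  # 0
```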
mixed factor design
at least one between subjects factor, at least one within subjects factor
PXE design
factorials with subject and manipulated variables, P=person, E=environmental
Pearson’s R
ranges from -1 to +1, 0=no correlation
correlations with scatterplots
weak correlations produce scatterplots with points more spread out around the trend line; strong correlations cluster tightly
Pearson’s R squared
the coefficient of determination; gives the proportion of variability in one variable accounted for by the other
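Both cards above can be illustrated by computing r from its definitional formula and squaring it. A minimal sketch with made-up data:

```python
from math import sqrt

# Made-up paired scores
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Pearson's r = sum of cross-products of deviations / product of
# the square roots of the sums of squared deviations
cross = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sx = sqrt(sum((xi - mx) ** 2 for xi in x))
sy = sqrt(sum((yi - my) ** 2 for yi in y))
r = cross / (sx * sy)

# r^2 = proportion of variability in y accounted for by x
print(round(r, 3), round(r ** 2, 3))  # 0.775 0.6
```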
criterion variable
in regression analysis, variable being predicted
predictor variable
in regression analysis, variable doing predicting
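A regression analysis fits a line so the predictor can generate a predicted criterion score. A least-squares sketch with made-up data (slope = sum of cross-products / sum of squared predictor deviations):

```python
x = [1, 2, 3, 4, 5]  # predictor variable
y = [2, 4, 5, 4, 5]  # criterion variable

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Least-squares slope and intercept
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

def predict(new_x):
    """Predicted criterion score for a new predictor value."""
    return intercept + slope * new_x

print(round(predict(6), 2))  # 5.8
```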
how to solve problem of directionality
can use a cross-lagged correlation or a longitudinal design (measure both variables at multiple points in time)
third variable problem
an unmeasured third variable may produce the correlation between the two measured variables; examine correlations with other candidate variables (make a chart)
problems with correlational research
hard to establish cause
reasons to use correlational research
- make predictions
- when variables cannot be manipulated (subject variables)
- test/retest reliability and criterion validity of psychological tests
- assessing relationships between variables in personality and abnormal psych
- twin studies
bivariate analysis
correlational research, two variables
multivariate analysis
correlational research, more than two variables, only one criterion variable
factor analysis
examines all possible correlations among each of several scores, identifies clusters of intercorrelated scores
quasi-experimental research
research with nonequivalent groups; examples:
- nonequivalent group factorial design
- P X E factorial design
- correlational design
- control group pre-test and post-test design
Quasi-experimental non-equivalent control group design
Experimental group: O1 → treatment → O2
Control group: O1 → no treatment → O2
Example of a control group: in the nightmare study, the Arizona group did not experience earthquakes
baseball coach study: control group = a baseball coach from a different league who was not trained
Possible confounds of quasi-experimental design
- history
- subject selection
- knowledge of participating
- ceiling/floor effects: test too difficult/easy to detect change
- regression to the mean: extreme scores at pretest move toward the mean at posttest, which can make it look as if the treatment had no effect
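Regression to the mean can be simulated: give everyone a pretest and posttest (true ability plus noise, all numbers hypothetical), select the most extreme pretest scorers, and watch their posttest mean drift back toward the population mean with no treatment at all:

```python
import random
random.seed(1)

# Hypothetical population: true ability ~ N(100, 10); each test
# adds independent measurement noise ~ N(0, 10)
abilities = [random.gauss(100, 10) for _ in range(10_000)]
pretest = [a + random.gauss(0, 10) for a in abilities]
posttest = [a + random.gauss(0, 10) for a in abilities]

# Select the top 5% most extreme pretest scorers
idx = sorted(range(len(pretest)), key=lambda i: pretest[i], reverse=True)[:500]
pre_mean = sum(pretest[i] for i in idx) / len(idx)
post_mean = sum(posttest[i] for i in idx) / len(idx)

# The extreme group's posttest mean falls back toward 100
print(pre_mean > post_mean > 100)
```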
Best outcome in quasi-experimental pretest posttest design
experimental group below control at pretest; at the end, experimental above control. This outcome rules out regression to the mean and ceiling/floor effects
Time-series design
measure at multiple moments, say 10 (O1–O10), with the treatment at O5; e.g., pretest at O1 and posttest at O8, then keep measuring past the posttest. Helps to evaluate longer-term trends
Problems with time-series design
attrition, history (must rule out these confounds)
Time-series switching replication
Give treatment at different time to rule out effect of history
second dependent variable
measure a second dependent variable that should not be influenced by the program. If both DVs change, a general trend or extraneous variable may be influencing both.
Stages of program evaluation
- planning: research, informants/focus groups
- formative evaluation: evaluating the program while in progress; is the program being implemented as planned?
- summative evaluation: program effectiveness
Program evaluation, failure to reject null hypothesis
can be useful in this case because new programs have to prove themselves. Might find program is not cost effective.
Naturalistic vs. participant observation
In participant observation, experimenter is involved in group being observed
-cult study (Festinger)
Problems with participant observation
- ethical issues
- reactivity of group, changing their behavior
Challenges with observation research
-absence of control, cannot prove hypothesis (can falsify)
-observer bias
-how to counteract: checklists, interobserver reliability, time and event sampling
survey: types
- probability sampling: each member of the population has a known chance of being selected
- representative sampling: sample reflects attributes of target population
sampling procedures
- random
- stratified: same strata as actual population
- cluster sampling: group of people all have some feature in common like living on the same floor or taking core classes
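Stratified sampling can be sketched as drawing from each stratum in proportion to its share of the population (the strata names and sizes below are made up):

```python
import random
random.seed(0)

# Hypothetical population split into strata
population = {"freshmen": list(range(400)), "seniors": list(range(100))}
sample_size = 50

total = sum(len(members) for members in population.values())
sample = {}
for stratum, members in population.items():
    # Each stratum contributes in proportion to its population share
    k = round(sample_size * len(members) / total)
    sample[stratum] = random.sample(members, k)

print({s: len(v) for s, v in sample.items()})  # {'freshmen': 40, 'seniors': 10}
```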
interviews: benefits and costs
benefits: comprehensive, can follow up
costs: expense, logistics, interviewer bias, difficulty obtaining representative samples
phone surveys
benefits: cost, efficiency
drawbacks: brief, low response rate
electronic survey
benefits: cost, efficiency
drawbacks: sampling issues
written surveys
benefits: ease of scoring (vs. interviews)
drawbacks: cost, non-response rate, social desirability bias
creating an effective survey
- open-ended questions when first starting
- balance favorable and unfavorable statements (avoids response acquiescence)
- “most important problem” questions
- use scales (Likert scale)
- moderate use of an “I don’t know” alternative
- place demographic info at end of survey
- avoid ambiguity with a pilot study
- avoid biased and leading questions
- don’t ask for two things in one question
- use balanced wording, e.g., “do you support or oppose”
Small N-design definition
- data reported one participant at a time
- small group or one individual studied
Why use small-N design
- practical reason: rare attribute, rare species
- issues with statistical summaries: grouping data can be misleading, poor individual-subject validity
Operant conditioning, steps of experimental analysis of behavior
Behaviors result from learning history. Steps: 1. Define the behavior 2. Understand the conditions leading to the behavior 3. Understand the reinforcing consequences
Applied behavior analysis process
1) Baseline phase (A)
2) Treatment phase (B)
Withdrawal designs
withdraw the treatment to see whether behavior returns to baseline; ABA, or better, ABAB (ends with the treatment in place)
Benefits of case study
- level of detail
- can serve falsification
Weaknesses of case study
- limited control
- external validity-generalization
- faulty memory
Alternating treatment design
alternate treatments to see which is more effective example: AOC vs. no AOC in autistic girl
weaknesses of small N design
- external validity
- no statistical analysis
- interactive effects hard to test
- overreliance on rate of response
changing criterion design
Shaping behavior, criterion is changed until goal is reached, example: diet studies
Multiple baseline design
One treatment introduced at three (or more) different times; each baseline should improve only after the treatment is introduced, not before. Example: posting scores for each behavior with football players. This avoids the withdrawal problem of the ABAB design
Types of multiple baseline studies
- treatment introduced in different settings
- treatment introduced with different subjects, all have different baselines but same general behavior
- multiple behaviors, one treatment (like football player study)