Exam 2 Flashcards
Internal Validity
the extent to which we can be confident that the independent variable, and not some other factor, caused the change in the dependent variable
Selection effects
the kinds of participants at one level of the independent variable differ systematically from those at the other level (e.g., participants chose their own group or were assigned nonrandomly), so group differences may reflect who was in each group rather than the IV
Order effects
in a within-groups design, experiencing one level of the independent variable affects how participants respond to the levels that come later (e.g., through practice, fatigue, or carryover)
Design confounds
another variable varies systematically along with the independent variable and could be what actually causes the change in the dependent variable
Maturation
participants changed or matured on their own over the course of the study
potential solution: a comparison group (which would mature at the same rate)
history threat
something happened in the culture, society, or environment during the study that could have caused the change for everyone
potential solution: a comparison group measured over the same amount of time
regression threat
a group selected for its extreme (unusually high or low) scores tends to score closer to the mean the next time it is measured, simply by chance; the behavior 'regresses' to the mean
potential solution: a comparison group can show whether the treatment group changes more than regression alone would predict
if both groups start out equally extreme, regression cannot explain a difference between them
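A minimal Python sketch (not from the course materials; the numbers are made up) of why regression to the mean happens: when scores are partly luck, a group picked for its extreme pretest scores lands closer to the overall mean at posttest even with no treatment at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_ability = rng.normal(50, 10, n)            # stable individual differences
pretest = true_ability + rng.normal(0, 10, n)   # pretest = ability + random noise
posttest = true_ability + rng.normal(0, 10, n)  # posttest = ability + new random noise

extreme = pretest > np.percentile(pretest, 95)  # select the top 5% of pretest scorers
print(f"extreme scorers, pretest mean:  {pretest[extreme].mean():.1f}")
print(f"extreme scorers, posttest mean: {posttest[extreme].mean():.1f}")  # closer to the overall mean of 50
```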
attrition threat
participants dropped out of the study before it ended; a problem when the dropouts differ systematically from those who stay (e.g., the most extreme scorers leave), because the remaining sample no longer matches the pretest sample
potential solution: re-analyze the pretest data without the participants who later dropped out
testing threats
a kind of order effect: participants change as a result of experiencing the DV (the test) more than once, e.g., through practice or fatigue
instrumentation threat
the measuring instrument itself changed between pretest and posttest (e.g., observers drifted in their standards, or the two versions of the test were not equivalent)
potential solution: use a posttest-only design with a comparison group
Preventing testing threats
One way to prevent testing threats is not to use a pretest (posttest-only design). Another way is to use alternative forms of the test at pretest and posttest.
Having a comparison group is also helpful. You can rule out testing threats if both groups take the pretest and the posttest, but the treatment group exhibits a larger change than the comparison group.
observer bias
bias caused by researchers’ expectations influencing how they interpret the results (e.g., expecting participants to improve and therefore rating them as improved)
potential solutions: have a blind rater code the behavior (bring in someone who does not know the purpose of the study); use a masked design (participants know which condition they are in, but the researcher does not)
demand characteristics
participants figure out what the research is about and change their behavior accordingly
potential solution: a double-blind study
placebo effects
participants improve (or otherwise change) because they expect the treatment to work, not because of the treatment itself
solution: include a special comparison group that receives a placebo therapy or placebo medication, with neither the participants nor the people working with them knowing who is in which group (a double-blind placebo control study)
why might a study result in null effects?
sometimes the IV really didn’t affect the DV. Other times a null result occurs because the study wasn’t designed or conducted well: the IV actually did affect the DV, but some obscuring factor got in the way of the researchers detecting the difference.
There are two types of obscuring factors: (1) There might not have been enough difference between groups, or (2) there might have been too much variability within groups. Let’s look at each of these types in detail.
Null effects: not enough between-group difference
the difference between the group means may be too small to detect; common causes are weak manipulations, insensitive measures, and ceiling or floor effects
how to solve for not enough between-group differences
Weak manipulations: Use a manipulation check; If needed, rerun the study with a stronger manipulation.
Insensitive measures: Use a refined scale.
Ceiling and floor effects: Use DV measures that allow variability; Use a manipulation check for IV.
ceiling effects
scores cluster at the top of the scale (everybody gets close to the highest score), so real differences can’t show up; e.g., the questions are too easy
floor effects
scores cluster at the bottom of the scale (everybody gets close to the lowest score), so real differences can’t show up; e.g., the questions are too hard
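A minimal Python sketch (hypothetical numbers) of a ceiling effect: the treatment group is truly about 2 points better, but because the test tops out at 10, both groups pile up near the maximum and the observed difference shrinks. A floor effect is the mirror image, with scores piling up at the minimum.

```python
import numpy as np

rng = np.random.default_rng(1)
control_ability = rng.normal(9, 2, 1_000)
treatment_ability = rng.normal(11, 2, 1_000)   # true advantage of ~2 points

# the test can't score anyone above 10, so high scorers hit the ceiling
control_score = np.minimum(control_ability, 10)
treatment_score = np.minimum(treatment_ability, 10)

print(f"true difference:     {treatment_ability.mean() - control_ability.mean():.2f}")
print(f"observed difference: {treatment_score.mean() - control_score.mean():.2f}")  # roughly half as large
```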
Manipulation check
a second DV included in a study to make sure the IV manipulation worked
Why could there be too much within-groups variability?
measurement error, situation noise, and individual differences; too much of this noise can obscure (get in the way of detecting) between-group differences
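A minimal Python sketch (hypothetical numbers, assuming scipy is available): the true between-group difference is 2 points in both runs, but it is only easy to detect statistically when the within-group variability (noise) is low.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 30
for sd in (2, 12):                      # low vs. high within-group variability
    group_a = rng.normal(50, sd, n)
    group_b = rng.normal(52, sd, n)     # true difference = 2 points either way
    t, p = stats.ttest_ind(group_b, group_a)
    print(f"within-group SD = {sd:>2}: t = {t:5.2f}, p = {p:.3f}")
```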
how to solve for individual differences
- Change the design: use a within-groups design instead of an independent-groups design. Each person then receives both levels of the IV, so individual differences are controlled for, and it’s easier to see the effect of the IV when individual differences aren’t obscuring the between-groups difference. You can also use a matched-groups design: pairs of participants are matched on an individual-differences variable, which again makes the effect of the IV easier to see (a sketch contrasting the two analyses follows this list).
- Add more participants: if it isn’t feasible to change to a within-groups or matched-groups design, try adding more participants. This lessens the effect that any one participant has on the group average.
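A minimal Python sketch (hypothetical numbers, assuming scipy is available) of the design change described above: the same 3-point IV effect is buried by large individual differences when the scores are analyzed as independent groups, but stands out clearly in a within-groups (paired) analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 25
person = rng.normal(50, 15, n)              # large individual differences
level_a = person + rng.normal(0, 2, n)      # each person measured at both IV levels
level_b = person + 3 + rng.normal(0, 2, n)  # the IV adds about 3 points

t_ind, p_ind = stats.ttest_ind(level_b, level_a)  # ignores the pairing
t_rel, p_rel = stats.ttest_rel(level_b, level_a)  # within-groups (paired) analysis
print(f"analyzed as independent groups: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"analyzed within groups:         t = {t_rel:.2f}, p = {p_rel:.3f}")
```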
Power
the likelihood that a study will yield a statistically significant result when the IV really has an effect.
Studies with a lot of power (e.g., larger samples, stronger manipulations, less within-group noise) are more likely to detect true differences.
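A minimal Python sketch (assuming the statsmodels package is installed) of how power for a two-group comparison depends on effect size (Cohen’s d) and sample size: bigger effects and bigger samples give a better chance of detecting a true difference.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):                  # small, medium, large effects (Cohen's d)
    for n_per_group in (20, 50, 100):
        power = analysis.power(effect_size=d, nobs1=n_per_group, alpha=0.05)
        print(f"d = {d}, n = {n_per_group:>3} per group: power = {power:.2f}")
```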
What if there really is no difference (not enough variability between levels)?
sometimes a null result means exactly what it appears to: the IV simply does not have a causal effect on the DV