Exam 2 Flashcards
Internal Validity
the degree to which we can be confident that the independent variable, and not some other variable, is what caused the change in the dependent variable
Selection effects
the groups differed systematically at the start of the study (e.g., through nonrandom or intentional grouping of participants)
Order effects
the order in which levels of the independent variable are presented affects behavior in later conditions (e.g., through practice or fatigue)
Design confounds
another variable systematically varies and is actually what causes the change to the dependent variable
Maturation
participants changed or matured over time
comparison groups as potential solution
history threat
Maybe something happened in culture/society/environment to cause the change.
comparison groups over same amount of time as potential solution
regression threat
outlier; this one time was really different from the norm, and with time the behavior ‘regresses’ to the mean
comparison group can show how it decreases or increases in comparison to the treatment group
if both groups start in the same place, it is not regression.
attrition threat
participants left the study; a threat if the dropouts’ scores were systematically different from those who stayed (e.g., the most extreme scorers dropped out)
potential solution: re-analyze pre-test without the participants who dropped
testing threats
A type of order effect where there is a change in participants as a result of experiencing the DV (the test) more than once.
instrumentation threat
your measurement instrument changed over time (e.g., coders became stricter or more lenient, equipment drifted)
potential solution: use post-test design only, with comparison group
Preventing testing threat
One way to prevent testing threats is not to use a pretest (posttest-only design). Another way is to use alternative forms of the test at pretest and posttest.
Having a comparison group is also helpful. You can rule out testing threats if both groups take the pretest and the posttest, but the treatment group exhibits a larger change than the comparison group.
observer bias
bias caused by researchers’ expectations influencing how they interpret the results. (expecting to see participants improve and they are rated as improving)
potential solutions: blind rater codes behavior (bring someone in to code behavior who does not know the purpose of the study); masked design (participants know what condition they are in but the researcher doesn’t)
demand characteristics
participants figure out what the research is about and change their behavior accordingly
potential solutions: double-blind study
placebo effects
Participants expect a change and manifest it.
solution: a special comparison group is used that is receiving the placebo therapy or placebo medication, but neither the people working with the participants nor the participants know who is in which group (double-blind placebo control study)
why might a study result in null effects?
it might be that the IV really didn’t affect the DV. Other times when there’s a null result, it’s because the study wasn’t designed or conducted properly, so the IV actually did affect the DV, but some obscuring factor got in the way of the researchers detecting the difference.
There are two types of obscuring factors: (1) There might not have been enough difference between groups, or (2) there might have been too much variability within groups. Let’s look at each of these types in detail.
Null effects: not enough between group differences
the manipulation may not have created a detectable difference between the groups: weak manipulations, insensitive measures, ceiling and floor effects
how to solve not enough between group differences
Weak manipulations: Use a manipulation check; If needed, rerun the study with a stronger manipulation.
Insensitive measures: Use a refined scale.
Ceiling and floor effects: Use DV measures that allow variability; Use a manipulation check for IV.
ceiling effects
everybody gets the highest score; questions are too easy
floor effects
everybody gets the lowest scores; questions are too hard
Manipulation check
a second DV included in a study to make sure the IV manipulation worked
Why could there be too much within groups variability?
measurement error, situation noise, individual differences; Having too much noise can get in the way of detecting between-group differences.
how to solve for individual differences
- Change the design: use a within-groups design instead of an independent-groups design. Each person then receives both levels of the IV, so individual differences are controlled for, and it’s easier to see the effect of the IV when they aren’t obscuring between-groups differences. You can also use a matched-groups design: pairs of participants are matched on an individual-differences variable, which likewise makes the effects of the IV easier to see.
- Add more participants: if it’s not feasible to change the design to a within-groups or matched-groups design, then try adding more participants. This will lessen the effect that any one participant has on the group average.
Power
the likelihood that a study will yield a statistically significant result when the IV really has an effect.
Studies with a lot of power are more likely to detect true differences
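The idea of power can be illustrated with a small simulation (a rough sketch; the effect size, sample size, and number of simulated studies are made-up example values, and t_crit ≈ 1.984 is the two-tailed .05 cutoff for df = 98):

```python
import math
import random
import statistics

random.seed(42)
n, sims, t_crit = 50, 2000, 1.984  # per-group n; two-tailed .05 critical t for df = 98
hits = 0
for _ in range(sims):
    # Simulate a study where the IV truly has an effect (Cohen's d = 0.5).
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treatment = [random.gauss(0.5, 1.0) for _ in range(n)]
    # Independent-samples t statistic with pooled variance.
    sp2 = (statistics.variance(control) + statistics.variance(treatment)) / 2
    t = (statistics.mean(treatment) - statistics.mean(control)) / math.sqrt(sp2 * 2 / n)
    hits += abs(t) > t_crit
power = hits / sims  # proportion of simulated studies that detected the true effect
print(round(power, 2))
```

Even though the effect is real in every simulated study, only about 70% of them detect it at this sample size; that proportion is the study’s power.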
What if there really is no difference (not just too little variability between levels)?
your IV simply does not have a causal effect on your DV
How to solve for measurement error
any factor that can inflate or deflate a person’s score on the DV relative to their true score. The goal is to keep measurement error as small as possible.
Use reliable, precise measurements: measurement errors are reduced when researchers use measurement tools that are reliable (internal, interrater, and test/retest) and that are valid (i.e., have good construct validity).
Measure more instances: if researcher can’t find a measurement tool that’s reliable and valid, then the best alternative is to measure a larger sample of participants. Random errors will cancel each other out with more people in the sample.
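The cancelling-out idea can be sketched with a quick simulation (the true score, error SD, and sample sizes are hypothetical numbers):

```python
import random
import statistics

random.seed(0)
true_score = 100.0

def mean_of_n(n):
    # Each observation = true score + random measurement error (SD = 15).
    return statistics.mean(true_score + random.gauss(0.0, 15.0) for _ in range(n))

# Distribution of observed means across 500 simulated samples.
small = [mean_of_n(5) for _ in range(500)]    # few measurements per sample
large = [mean_of_n(50) for _ in range(500)]   # many measurements per sample
print(round(statistics.stdev(small), 1), round(statistics.stdev(large), 1))
```

With more measurements per sample, the random errors cancel out more completely, so the observed means cluster much more tightly around the true score.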
how to solve for situation noise
external distractions of any kind that obscure between-groups differences and cause variability within groups.
Example: smells, sights, and sounds that might distract participants and increase within-groups variability; situation noise adds unsystematic variability to each group
solve by controlling the surroundings of the experiment that might affect the DV: test participants in a quiet room with no outside odors, and so on
One-way ANOVA
used to determine whether there are any statistically significant differences between the means of three or more independent (unrelated) groups
compares the means between the groups you are interested in and determines whether any of those means are statistically significantly different from each other.
t-test versus one-way ANOVA
if there are two groups, use a t-test. if there are three or more groups, use a one-way ANOVA
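The decision rule above can be illustrated in Python with SciPy (assuming `scipy` is installed; the group scores below are made-up data):

```python
from scipy import stats

# Hypothetical scores for three independent groups.
group_a = [12, 14, 11, 13, 15]
group_b = [15, 17, 16, 14, 18]
group_c = [20, 19, 21, 22, 18]

# Two groups -> independent-samples t-test.
t, p_t = stats.ttest_ind(group_a, group_b)

# Three or more groups -> one-way ANOVA.
F, p_f = stats.f_oneway(group_a, group_b, group_c)

print(f"t-test: t = {t:.2f}, p = {p_t:.3f}")
print(f"ANOVA:  F = {F:.2f}, p = {p_f:.3f}")
```

`f_oneway` is SciPy’s equivalent of the one-way ANOVA output the cards later describe getting from SPSS Statistics.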
Within Samples Variance in one way ANOVA
the variance within one condition
Between samples variance
variance between ALL conditions
three main assumptions that must be met in order for your ANOVA test to be valid and meaningful
The dependent variable is normally distributed in each group that is being compared in the one-way ANOVA
There is homogeneity of variances
There is independence of observations
The dependent variable is normally distributed in each group that is being compared in the one-way ANOVA
So, for example, if we were comparing three groups (e.g., amateur, semi-professional and professional rugby players) on their leg strength, their leg strength values (dependent variable) would have to be normally distributed for the amateur group of players, normally distributed for the semi-professionals and normally distributed for the professional players.
There is a homogeneity of variances
This means that the population variances in each group are equal. If you use SPSS Statistics, Levene’s Test for Homogeneity of Variances is included in the output when you run a one-way ANOVA in SPSS Statistics.
there is an independence of observations
This is mostly a study design issue and, as such, you will need to determine whether you believe it is possible that your observations are not independent based on your study design (e.g., group work/families/etc).
Explain why ANOVAs are needed (why not just run multiple t-tests?)
Every time you conduct a t-test there is a chance that you will make a Type I error (false positive). This error is usually 5% (if your alpha is .05 – it is 1% if your alpha is .01).
- By running two t-tests on the same data you will have increased your chance of “making a mistake” to almost 10% (essentially doubling your chance of making a Type I error.)
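The inflation follows from basic probability: with k independent tests each run at the same alpha, the chance of at least one false positive is 1 − (1 − alpha)^k. A quick sketch (alpha and the numbers of tests are example values):

```python
# Familywise Type I error rate: probability of at least one false positive
# across k independent tests, each conducted at the same alpha level.
alpha = 0.05
for k in (1, 2, 3, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k} test(s): familywise error rate = {fwer:.4f}")
```

For k = 2 this gives .0975, the “essentially 10%” figure on the card; by k = 10 the familywise error rate already exceeds 40%, which is why an ANOVA is used instead of many pairwise t-tests.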
total variance ANOVA
How much an individual score differs from the grand mean
-Grand mean (Gm) = overall mean of all scores
Within group variance
How much an individual score differs from its own group mean
Between group variance
How much a group mean differs from the grand mean
Understand the impact of within group variance on an F- Value
the F statistic compares the variance between the groups (the levels of the IV) to the variance within the groups; the larger the within-group variance, the smaller the F value.
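A minimal sketch of this decomposition with made-up scores (three groups of three): the total sum of squares splits into the between and within pieces, and F is the ratio of their mean squares.

```python
import statistics

# Hypothetical scores for three independent groups of three participants each.
groups = [[4, 5, 6], [6, 7, 8], [8, 9, 10]]
scores = [x for g in groups for x in g]
grand_mean = statistics.mean(scores)  # Gm: overall mean of all scores

# Total: each score's deviation from the grand mean.
ss_total = sum((x - grand_mean) ** 2 for x in scores)
# Within: each score's deviation from its own group mean.
ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
# Between: each group mean's deviation from the grand mean (weighted by group size).
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)

df_between, df_within = len(groups) - 1, len(scores) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(ss_total, ss_between + ss_within, F)  # total variance = between + within
```

With these numbers, SS_total = 30 splits into SS_between = 24 and SS_within = 6, giving F = 12; if the scores within each group were more spread out, SS_within would grow and F would shrink.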
Repeated measures ANOVA
a one-way ANOVA used to determine whether there are any statistically significant differences between different levels of your independent variable across one group of participants
when to use RM Anova
(1) when participants are measured multiple times to see changes to an intervention or something over time; or
(2) when participants are subjected to more than one condition/trial and the researcher wants to compare their responses to each of these conditions.
Between Treatments/ Between Conditions Variance(RM ANOVA)
the variance between different levels of the IV
Within Treatments/ Within Groups Variance
variance within each level of the IV
advantage of a repeated measures ANOVA
whereas within-group variability (SSw) expresses the error variability (SSerror) in a between-subjects ANOVA, a repeated measures ANOVA can further partition this error term, reducing its size.
- This has the effect of increasing the value of the F-statistic due to the reduction of the denominator and leading to an increase in the power of the test to detect significant differences between means
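The partitioning above can be sketched numerically (the scores are hypothetical; three subjects each measured in three conditions). Subtracting the subject variability from the within-treatments variability shrinks the error term, so the repeated-measures F is larger than the between-subjects F computed on the same data:

```python
import statistics

# rows = subjects, columns = conditions (each subject measured in every condition)
data = [[3, 6, 6],
        [4, 7, 10],
        [8, 8, 11]]
n_subj, k = len(data), len(data[0])
scores = [x for row in data for x in row]
grand = statistics.mean(scores)
cond_means = [statistics.mean(col) for col in zip(*data)]
subj_means = [statistics.mean(row) for row in data]

ss_conditions = sum(n_subj * (m - grand) ** 2 for m in cond_means)  # between treatments
ss_within = sum((data[i][j] - cond_means[j]) ** 2
                for i in range(n_subj) for j in range(k))           # within treatments
ss_subjects = sum(k * (m - grand) ** 2 for m in subj_means)         # individual differences
ss_error = ss_within - ss_subjects  # error term with individual differences removed

F_between_subjects = (ss_conditions / (k - 1)) / (ss_within / (k * n_subj - k))
F_repeated = (ss_conditions / (k - 1)) / (ss_error / ((k - 1) * (n_subj - 1)))
print(F_between_subjects, F_repeated)
```

Here SS_within = 30, of which 24 is due to stable subject differences; removing it leaves SS_error = 6, and F jumps from 2.4 (between-subjects) to 8.0 (repeated measures) on the same scores.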
Between-Subjects Variability
variability that comes from individual participants’ scores across treatments; it reflects individual differences (the portion a repeated measures ANOVA removes from the error term)