ERMS Exam 3 Flashcards
ANOVA
Analysis of variance
Variance
a measure of statistical dispersion: how far a variable's values typically are from the expected value (mean)
why do we use ANOVA
used to evaluate mean differences between 2+ groups
Multigroup research
contains more than 2 groups
Factor
the independent (or quasi-independent) variable that indicates the groups that are being compared
Factorial design
study design that has 2+ factors (aka, more than 1 IV)
Levels
conditions or values that make up a factor
Alpha
tells us how often we should expect to mistakenly reject a null hypothesis
ANOVA Null hypothesis
There is no difference, anywhere, between ANY groups.
ANOVA Alternative Hypothesis
there is a difference, somewhere, between at least 2 groups.
F-Statistic
Divides the variance (differences) we see between our sample means by the variance we would expect if there were no effect
F-statistic formula
F = between-groups variance (treatment effect + error) / within-groups variance (error) = MS between / MS within
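The F-ratio above can be sketched in pure Python; the three groups below are hypothetical data invented for illustration:

```python
# Hand-rolled one-way ANOVA F-ratio (group data are made up for illustration)
from statistics import mean

groups = [
    [4.0, 5.0, 6.0],  # group 1
    [7.0, 8.0, 9.0],  # group 2
    [1.0, 2.0, 3.0],  # group 3
]

grand_mean = mean(x for g in groups for x in g)
k = len(groups)                        # K: number of groups
n_total = sum(len(g) for g in groups)  # N: total participants

# Between-treatments SS: spread of each group mean around the grand mean
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# Within-treatments SS: spread of each score around its own group mean
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)      # df(between) = K - 1
ms_within = ss_within / (n_total - k)  # df(within)  = N - K
f_stat = ms_between / ms_within        # F = MS between / MS within

print(f_stat)
```

With these made-up scores the group means (5, 8, 2) differ far more than the scores within each group, so F comes out large.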
Between-treatments variance
the variance between groups: systematic treatment effects plus unsystematic factors (error)
is the numerator of the F-ratio
Within treatments Variance
Random, unsystematic factors (error); the denominator of the F-ratio.
Within-groups Variance
Variability that naturally occurs within a level/condition, Comes from people having naturally different scores within a group
One-way ANOVA
Uses 1 categorical IV with 3+ levels
Factorial ANOVA
Uses 2+ categorical IVs, each with 2+ levels
K
number of groups
N
Number of participants
Post Hoc tests
Follow-up tests done to determine exactly which mean differences are significant and which are not.
Planned comparisons:
when researchers make plans ahead of time: they plan which pairs of groups/levels they intend to compare
how do Bonferroni Correction tests work
A series of t-tests for every possible pair of groups; corrects for family-wise error inflation by dividing alpha by the number of tests, giving you a new alpha.
alpha/# of tests = new alpha
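A minimal sketch of the alpha / # of tests rule, assuming three hypothetical group labels:

```python
# Bonferroni correction: divide alpha by the number of pairwise tests
from itertools import combinations

alpha = 0.05
groups = ["W", "N", "C"]               # hypothetical group labels
pairs = list(combinations(groups, 2))  # every possible pair of groups
new_alpha = alpha / len(pairs)         # alpha / # of tests = new alpha

print(pairs)      # [('W', 'N'), ('W', 'C'), ('N', 'C')]
print(new_alpha)  # 0.05 / 3
```

Each pairwise t-test is then evaluated against `new_alpha` instead of the original alpha, keeping the family-wise error rate near .05.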
Tukey’s HSD
Honestly significant difference, makes adjustment to deal with inflated family wise error.
Determines the minimum difference needed to have statistical significance at that alpha level
Confidence interval
A range of scores that extends equally in both directions from an estimate and that is considered plausible based on the data
(e.g., a 95% CI contains the values considered plausible at the 95% level).
what do we see with Interactions?
mean differences among levels of combined factors.
How multiple factors affect a DV together.
Main effects
mean differences (variance) among levels of one factor
How a factor affects a DV independently
how each independent variable affects the dependent variable.
Why is anova one-tailed
Because F-ratios are computed from two variances, they are always positive numbers; variability can't be negative.
how do you calculate F
f = MSbetween/MSWithin
what are the differences between one way and factorial anova
One-way ANOVA has a single factor (IV); factorial ANOVA has 2+ factors and also tests whether the factors interact.
hypothesis testing in factorial H0 and H1
H0: no difference (all three must be true to fail to reject the null)
- Part 1: the means of all levels in Factor 1 are ALL equal
- Part 2: the means of all levels in Factor 2 are ALL equal
- Part 3: the effect of one factor on the DV does not depend on the level of the other factor
H1: significant difference (only one must be true to reject the null)
- At least one of the means in Factor 1 is different
- At least one of the means in Factor 2 is different
- The effect of one factor on the DV/criterion depends on the level of the other factor.
Synergism interaction
the effect of one factor gets stronger depending on the level of the other
Antagonistic interaction
the effect of one factor reverses across the levels of the other; on a plot, the lines cross (a crossover interaction)
Why should you use ANOVA instead of several t tests to evaluate mean differences when an experiment consists of three or more treatment conditions?
ANOVA can test more than 2 groups at once: instead of separate W vs. N and C vs. N tests, ANOVA can do W vs. N vs. C. It also saves time and resources, and can uncover non-linear patterns across levels.
Each test we run has an alpha; if we run too many tests, we run the risk of a Type I error. This is called family-wise error. ANOVA solves this by looking at all mean differences at once.
Within-groups/treatments degrees of freedom formula
Df(within) = N- K
Between groups/treatments degrees of freedom formula
Df(between) = K - 1
two factor study
an experimental design in which data are collected for all possible combinations of the levels of the two factors of interest
alpha is always
.05 (by convention)
Means squared: what is it and how do we calculate it?
a measure of variability; MS = SS / df
why do we use post- hoc
to see which specific levels/groups differ significantly while controlling family-wise (Type I) error
when do we use a post hoc
we use a post hoc test when there is a difference somewhere between the groups, i.e., when the obtained F exceeds the critical F (p < alpha)
what questions do a factorial anova answer
Is there a main effect of just the 1st factor (IV/predictor)?
Is there a main effect of just the 2nd factor (IV/predictor)?
Do the effects of the IVs/predictors on the DV depend on each other (an interaction)?
t-statistic
compares the mean difference between two groups to the variability expected by chance; used when comparing exactly 2 groups
f-statistic denominator
MS within (the error variance)
One-way Null hypothesis
there is no difference, anywhere, between any of the groups.
moderator
a factor that changes the strength or direction of another factor's effect on the DV
what does a simple effects tests do
1. Takes one of the factors and splits it up into each of its levels
2. "Freezes" one level of that factor and looks at the "simple effect" of the other factor's levels on the DV
ANOVA sources of variance
differences in scores between groups and within groups
anova error variance
within variance (f-stat denominator)
coneceptually describe Error
error is the random, unsystematic variability in scores that the treatment does not explain (e.g., individual differences, measurement noise)
why does partitioning variance change for factorial anovas? what is the change
it changes because more variables (factors) are added: the between-groups variance is further partitioned into each main effect plus the interaction
anova advantages over t-tests
can include a control group and compare multiple groups at once
Goal of ANOVA
to determine whether there are mean differences between 2+ groups by analyzing variance
Family wise error rate
When we run multiple t-tests, the likelihood of a Type I error increases; the alpha risk compounds across tests.
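A quick way to see the inflation: across m independent tests, the chance of at least one Type I error is 1 - (1 - alpha)^m. The value of m below is a hypothetical example:

```python
# Family-wise error rate: probability of at least one Type I error
# across m independent tests (m = 3 is an illustrative example)
alpha = 0.05
m = 3  # e.g., three pairwise t-tests among three groups

fwer = 1 - (1 - alpha) ** m
print(round(fwer, 4))  # 0.1426 -- already well above .05
```

Even with only three tests the family-wise rate nearly triples the nominal alpha.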
one-way anova alternative hypothesis
there is a difference somewhere between any of the groups.
why does anova use variance instead of mean difference
inferences about means are made by analyzing variance
Why should you use ANOVA instead of several t tests to evaluate mean differences when an experiment consists of three or more treatment conditions?
With 3 or more groups, running multiple t-tests increases the risk of a Type I error. Because ANOVA looks at variance, we're able to compare all of the means at the same time: it performs all of the tests simultaneously with a single, fixed level for α.
f-statistic numerator
MS between groups
why doesnt ANOVA have directional hypotheses
ANOVA doesn't have directional hypotheses because it measures variance: you can have either no variance or some variance, so F runs from 0 upward and can't be negative.
when do we reject the null
when p < α (the p-value is less than alpha)
pairwise comparisons
in a post hoc test, comparisons of two individual means at a time (like t-tests)
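A sketch of pairwise comparisons using a pooled-variance t statistic for every pair of groups; the labels and data below are hypothetical:

```python
# Pairwise comparisons: a pooled-variance independent-samples t statistic
# for every pair of groups (labels and scores are made up for illustration)
from itertools import combinations
from statistics import mean, variance

groups = {
    "W": [4.0, 5.0, 6.0],
    "N": [7.0, 8.0, 9.0],
    "C": [1.0, 2.0, 3.0],
}

def pooled_t(a, b):
    """t = (mean_a - mean_b) / sqrt(sp2 * (1/n_a + 1/n_b))"""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# One t statistic per pair of group means
results = {(g1, g2): pooled_t(groups[g1], groups[g2])
           for g1, g2 in combinations(groups, 2)}
for pair, t in results.items():
    print(pair, round(t, 3))
```

In practice each of these comparisons would be checked against a corrected alpha (e.g., Bonferroni) rather than .05.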