One-Way ANOVA, Assumptions, Repeated Measures, Post Hoc Tests, and Effect Sizes Flashcards
What is variance?
The average squared distance of each value in the sample from the mean.
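As a quick sketch (the data below are made up for illustration), variance is the average squared deviation from the mean; the sample variance divides by n - 1 rather than n:

```python
# Made-up sample; compute the sample variance by hand
data = [4, 8, 6, 5, 7]
n = len(data)
mean = sum(data) / n  # 6.0

# Sum of squared deviations from the mean
ss = sum((x - mean) ** 2 for x in data)  # 10.0

sample_variance = ss / (n - 1)  # divide by n - 1 for a sample
print(sample_variance)  # 2.5
```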
In ANOVA, what is variance?
Evaluating the relative location of several group means.
The mean of each sample is treated as a data point, and we look at the variance of those means instead of the variance of the individual data points.
How is ANOVA similar to regression in terms of means?
ANOVA is about the means of groups; regression is not about group-mean variance.
What types of variance does ANOVA take into account?
Between and within group variance
When seeing if an ANOVA is statistically significant, what does statistical significance mean?
It tells us not just whether means from more than one group are different, BUT whether the differences in those means are GREATER than you would expect by chance.
How are ANOVAs and T tests different?
In an ANOVA we want to see whether means between MORE THAN TWO GROUPS are statistically significantly different. A t-test is limited to comparing only two groups.
What can you notice regarding research questions for ANOVAs?
The DV is continuous
There are at least three groups: three or four different groups you would put people into, and then you compare these groups, each of which has its own mean and variance.
What is ANOVA?
Like a t-test, in that it compares means between groups.
What is the null hypothesis for an ANOVA?
That the means for all groups are identical
If we reject the null hypothesis for an ANOVA, we are saying…
We know that at least one group mean IS different, but we don't know which one yet; we have to figure that out.
A statistically significant ANOVA indicates
A. Not all means are identical
B. At least one group mean differs from the rest
C. More than one group mean may be different
D. All of the Above
D
True or false: In an ANOVA we can tell which group has a different mean through reading the output?
NO. You must run a post hoc test, because otherwise we only know that at least one of the groups is not like the others.
Why can we not use multiple t-tests to compare each pair of group means?
Because we can't look at more than one IV at a time.
It would also inflate our Type 1 error rate (the chance of rejecting the null hypothesis when we should not).
When assumptions are met, ANOVA is more powerful than t-tests for two or more groups
Why is it not a good idea to run multiple tests without correcting for multiple comparisons?
Because the more tests you run without good reason, the greater the chance you'll get a Type 1 error.
True or false: if you run thousands of tests with alpha level .05, eventually you would get a significant result by chance
True
What is the family-wise alpha error rate?
The chance of having at LEAST one false positive across a series of comparisons or tests
It is dependent on the decision-wise error rate AND the number of comparisons.
What is the Decision wise error rate?
The probability of a false positive within a SINGLE comparison or test.
The family wise error rate is dependent on the decision wise error rate AND the number of comparisons
What is the difference between the decision-wise error rate and the family-wise error rate?
Family-wise refers to having at least one false positive across a series of tests, whereas decision-wise is having a false positive within a SINGLE test.
Why is ANOVA more powerful than T test for more than two groups when assumptions are met?
Because it uses pooled variance estimates across all groups, and you have a larger sample size.
True or false: ANOVA allows us to evaluate all means in a single hypothesis test, while keeping the alpha level at .05
True
If we ran three tests (three comparisons), the family-wise error rate according to the math would be .14. ANOVA evaluates the relative location of all group means at once, so we can run a single test and keep the error rate at 5%. What is the only downside of this test?
It only tells us that group means are different, NOT which ones or how many are different.
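The .14 figure above follows from the family-wise error formula 1 - (1 - alpha)^m, where m is the number of comparisons. A minimal sketch (the function name is just illustrative):

```python
# Family-wise error rate: chance of at least ONE false positive
# across m independent comparisons, each run at the given alpha
def family_wise_error(alpha, m):
    return 1 - (1 - alpha) ** m

print(round(family_wise_error(0.05, 3), 3))  # three tests -> 0.143 (~.14)
print(round(family_wise_error(0.05, 1), 3))  # one test -> 0.05, the decision-wise rate
```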
When you see:
One-way ANOVA,
or Two-way ANOVA,
what does the One or Two refer to?
The number of FACTORS (aka independent variables) in the test.
So a one-way ANOVA is a single-factor ANOVA. The number does not refer to the number of groups whose means we are comparing, because that can be any number of groups. Instead, anything above a single-factor ANOVA is looking at how more than ONE IV affects the DV.
You have this research question:
• Which treatment is most effective for decreasing depressive symptoms? Cognitive-behaviour therapy, meditation, the combination of these two treatments?
Is this one way or two way anova?
ONE way, because the only GROUP VARIABLE (IV/factor) is treatment type.
If we were also interested in how gender affected the decrease in depressive symptoms, and how gender and treatment type interacted, gender would be a second factor in a two-way ANOVA.
• Which treatment is most effective for decreasing depressive symptoms in men and women? Cognitive-behaviour therapy, meditation, the combination of these two treatments?
Two way ANOVA - two factors - treatment type/gender
You have this research question:
Which treatment is most effective for decreasing depressive symptoms in men and women? Cognitive-behaviour therapy, meditation, the combination of these two treatments?
What analysis would you run?
Two way ANOVA - two factors - treatment type/gender
In ANOVA, what is the unexplained variance due to chance called?
Residual Sum of Squares
In ANOVA, what is the explained variance - how much variability accounted for by the independent variable?
Model Sum of Squares, also known as variability between group means
Variability between group means in anova is also known as..
Model Sum of Squares
In ANOVA what is the total variability between scores called
Total Sum of Squares
What is the between groups variability also known as?
Model Sum of Squares
What does this equation tell us?
SS(T) = SS(M) + SS(R)
The TOTAL sum of squares is equal to the variability between group means PLUS the residual variability (everything we couldn’t explain)
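A minimal sketch of the SS(T) = SS(M) + SS(R) decomposition, using made-up scores for three hypothetical treatment groups:

```python
# Made-up scores for three hypothetical treatment groups
groups = [[2, 3, 4], [5, 6, 7], [8, 9, 10]]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)  # 6.0

# Total sum of squares: total variability of every score around the grand mean
ss_t = sum((x - grand_mean) ** 2 for x in all_scores)

# Model sum of squares: variability BETWEEN group means
ss_m = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# Residual sum of squares: variability WITHIN each group
ss_r = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

print(ss_t, ss_m, ss_r)  # 60.0 54.0 6.0 -- and 54 + 6 = 60
```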
To determine whether treatment group means are significantly different, we compare _____
the amount of variability explained by the model, SS(M), COMPARED TO the residual variability, SS(R).
So EXPLAINED compared to UNEXPLAINED variance.
For the effect to be significant, MORE variability has to be explained by the model than by the residuals; otherwise it's a bad model.
How do we determine whether the model explains more variability than the residuals?
The F Ratio
What does the F ratio tell us?
Whether the model explains more variability than the residuals, i.e. whether there is a significant effect.
The F Ratio equation is
F = MS(M) divided by MS(R).
So it is similar to the total sum of squares equation with SS(M) and SS(R), BUT... how are they different?
The F ratio uses the MEAN sum of squares instead of just the sum of squares.
Specifically, how do you get MS(M), the mean sum of squares for the model, and MS(R), the mean sum of squares for the residuals,
FROM SS(M), the between-group variability, and SS(R), the residual variability, as seen in the equation for the total sum of squares?
MS(M) is the sum of squares for the model divided by the model degrees of freedom (k - 1, where k is the number of groups).
MS(R) is the sum of squares for the residuals divided by the residual degrees of freedom (n - k, where n is the total sample size).
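Continuing with the same made-up numbers (SS(M) = 54, SS(R) = 6, k = 3 groups, n = 9 observations), the mean squares and F ratio follow directly:

```python
# Hypothetical values: k = 3 groups, n = 9 total observations
ss_m, ss_r = 54.0, 6.0
k, n = 3, 9

ms_m = ss_m / (k - 1)  # mean square for the model: 54 / 2 = 27.0
ms_r = ss_r / (n - k)  # mean square for the residuals: 6 / 6 = 1.0

f_ratio = ms_m / ms_r  # 27.0 -- well above 1, so the model explains far more than the residuals
print(f_ratio)
```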
If the numerator is smaller than the denominator, we can conclude that the treatment has...
no effect, because F will be smaller than 1 and therefore the test is non-significant.
An F ratio smaller than one indicates
Non-significance