Power And Effect Size Flashcards
With F-ratios that exceed F-critical we…
- reject the null hypothesis.
- conclude that the independent variable(s) influence(s) the dependent variable.
- have a statistically significant effect.
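As a quick illustration (not from the flashcards), an observed F-ratio can be compared with the critical value of the F distribution; the alpha level, degrees of freedom, and observed F below are assumed example values, computed here with scipy.

```python
from scipy import stats

# Assumed example: one-way ANOVA with 3 groups of 10 subjects each
alpha = 0.05
df_between, df_within = 2, 27     # k - 1 and N - k
f_observed = 4.21                 # hypothetical F-ratio from the ANOVA table

# Critical F for this alpha and these degrees of freedom
f_critical = stats.f.ppf(1 - alpha, df_between, df_within)

if f_observed > f_critical:
    print(f"F = {f_observed:.2f} > F-crit = {f_critical:.2f}: reject H0")
else:
    print(f"F = {f_observed:.2f} <= F-crit = {f_critical:.2f}: fail to reject H0")
```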
•When a finding does not reach the alpha level (i.e. p ≥ 0.05) we…
- fail to reject the null hypothesis:
- H0: all means are equal, implying no effect of the treatment.
- No evidence of a statistical difference.
“no statistical difference” does not…
- prove the null hypothesis.
- We simply do not have evidence to reject it.
- A failure to find a significant effect does not necessarily mean the means are equal.
So it is difficult to have confidence in the null hypothesis:
Perhaps an effect exists, but our data is too noisy to demonstrate it.
Sometimes we will incorrectly fail to reject the null hypothesis:
- a Type II error.
- There really is an effect, but we did not find it.
Statistical power is the probability of…
detecting a real effect
power is given by:
power = 1 − β
where β is the probability of making a Type II error.
•In other words, it is the probability of not making a Type II error.
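For example (values assumed purely for illustration): if β = 0.20, then power = 1 − 0.20 = 0.80, i.e. an 80% chance of detecting a real effect when one exists.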
Power is your ability to find a …
difference when a real difference exists.
The power of a study is determined by three factors:
- Alpha level.
- Sample size.
- Effect size:
  - Association between the DV and the IV.
  - Separation of means relative to error variance.
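A sketch (not part of the flashcards) of how each factor moves power, using statsmodels' TTestIndPower for a two-group comparison; the effect sizes, group sizes, and alpha levels are assumed example values.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Baseline: assumed medium effect (Cohen's d = 0.5), 30 per group, alpha = 0.05
baseline = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05)

# Vary each factor in turn, holding the others fixed
stricter_alpha = analysis.power(effect_size=0.5, nobs1=30, alpha=0.01)  # power drops
bigger_sample  = analysis.power(effect_size=0.5, nobs1=60, alpha=0.05)  # power rises
bigger_effect  = analysis.power(effect_size=0.8, nobs1=30, alpha=0.05)  # power rises

print(f"baseline={baseline:.2f}, alpha=0.01: {stricter_alpha:.2f}, "
      f"n=60: {bigger_sample:.2f}, d=0.8: {bigger_effect:.2f}")
```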
Power and alpha
By making alpha less strict, we can…
•increase power.
(e.g. α = 0.05 instead of α = 0.01)
However, we increase the chance of a Type I error.
Low N’s have very little…
Power
Power saturates with many…
Subjects
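The saturation can be seen by tabulating power against group size for a fixed, assumed effect (d = 0.5) and alpha (0.05); this sketch reuses statsmodels and the numbers are illustrative only.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power rises steeply at small N and flattens (saturates) towards 1.0 at large N
for n in (5, 10, 20, 40, 80, 160, 320):
    p = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n per group = {n:3d}   power = {p:.2f}")
```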
Power and sample size
One of the most useful aspects of power analysis is the estimation of the sample size required for a particular study.
•Too small a sample and an effect may be missed.
•Too large a sample and the study becomes unnecessarily expensive.
Different formulae/tables for calculating sample size are required according to the experimental design.
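As a sketch of such an estimate (the effect size, target power, and alpha are assumed values), statsmodels' solve_power can return the required sample size, and a different design uses a different power class:

```python
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

# Two-group t-test design: per-group n for an assumed d = 0.5, 80% power, alpha = 0.05
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"t-test: required n per group ≈ {n_per_group:.0f}")   # about 64 per group

# One-way ANOVA design with 3 groups and an assumed Cohen's f = 0.25
n_total = FTestAnovaPower().solve_power(effect_size=0.25, power=0.80, alpha=0.05, k_groups=3)
print(f"ANOVA: required total N ≈ {n_total:.0f}")
```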
Power and effect size
•As the separation between two means increases, the power…
- also increases.
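A final sketch (assumed means, SD, and group size) showing that a wider separation between the means, relative to the error variance, gives a larger standardised effect size and therefore more power:

```python
from statsmodels.stats.power import TTestIndPower

def cohens_d(mean1, mean2, pooled_sd):
    """Standardised separation between two means (Cohen's d)."""
    return abs(mean1 - mean2) / pooled_sd

analysis = TTestIndPower()

# Assumed common SD of 10 and 30 subjects per group: widening the gap between
# the group means increases d and, with it, the power of the test.
for m2 in (52, 55, 60):
    d = cohens_d(50, m2, pooled_sd=10)
    power = analysis.power(effect_size=d, nobs1=30, alpha=0.05)
    print(f"means 50 vs {m2}: d = {d:.1f}, power = {power:.2f}")
```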