Hypothesis Testing and Type I Errors Flashcards

1
Q

Type I Error

A

A Type I error occurs when we reject a true null hypothesis, concluding that an effect or difference exists when in reality it does not.

Also known as a false positive, Type I errors can lead to incorrect conclusions and false discoveries in statistical hypothesis testing.

2
Q

Type II Error

A

A Type II error occurs when we fail to reject a false null hypothesis, meaning we missed an effect or difference that actually exists.
Also known as a false negative, Type II errors can result in the failure to identify true relationships or effects, leading to missed opportunities for discovery or intervention.

3
Q

Multiple Pairwise Comparisons

A

Statistical analyses conducted after an ANOVA or other omnibus test to identify which specific groups differ significantly from each other.
Helps to pinpoint where significant differences lie among multiple groups when the omnibus test indicates a significant overall effect.
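As a quick illustration (using hypothetical group labels), the number of pairwise comparisons among k groups is k(k-1)/2, which grows quickly as groups are added:

```python
from itertools import combinations

# Hypothetical group labels from a four-group ANOVA
groups = ["A", "B", "C", "D"]

# Every unordered pair of groups is one post-hoc comparison
pairs = list(combinations(groups, 2))
k = len(groups)
print(pairs)                              # 6 pairs for k = 4
print(len(pairs) == k * (k - 1) // 2)     # True
```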

4
Q

Risk of Type I Errors with Multiple Pairwise Comparisons

A

Conducting multiple comparisons increases the chance of making at least one Type I error (false positive).
Correcting for multiple comparisons helps control the overall familywise error rate.

5
Q

Bonferroni Correction

A

Adjusts the significance level (alpha) to maintain an overall alpha level across all comparisons.
Divide the original alpha level by the number of comparisons.

If the original alpha level is 0.05 and there are 10 pairwise comparisons, the Bonferroni-corrected alpha level would be 0.05 / 10 = 0.005 for each individual comparison.
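The card's example can be sketched directly (the p-values below are hypothetical, just to show how the corrected threshold is applied):

```python
# Bonferroni correction: divide the overall alpha by the
# number of comparisons to get the per-test threshold.
def bonferroni_alpha(alpha: float, n_comparisons: int) -> float:
    return alpha / n_comparisons

# Example from the card: alpha = 0.05 with 10 comparisons
corrected = bonferroni_alpha(0.05, 10)
print(round(corrected, 6))  # 0.005

# A p-value counts as significant only below the corrected level
p_values = [0.003, 0.02, 0.0049]  # hypothetical results
print([p < corrected for p in p_values])  # [True, False, True]
```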

6
Q

Why do multiple tests increase the Type I error rate, and what can we do about it?

A

Performing multiple tests inflates the Type I error rate, since each test can yield its own false positive. The Bonferroni correction mitigates this at the cost of power, potentially causing more Type II errors.

7
Q

What is Power?

A

Power is the probability of correctly rejecting the null hypothesis when it is false. In other words, power is the probability that a test of significance will detect an effect that is actually present.

8
Q

What does 80% power mean in statistics?

A

Power is usually set at 80%. This means that if a true effect exists in each of 100 different studies run with 80% power, the statistical test will detect it in only about 80 of those 100 studies. If you don’t ensure sufficient power, your study may fail to detect a true effect at all.
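This interpretation can be checked by simulation. The sketch below assumes a simple setup: two groups of n = 63, a true effect of d = 0.5 standard deviations, and a two-sided z-test at alpha = 0.05 with known unit variance. The rejection rate across simulated studies should come out near 0.80.

```python
import random
from statistics import NormalDist, mean

random.seed(42)
n, d, alpha, sims = 63, 0.5, 0.05, 2000
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96

rejections = 0
for _ in range(sims):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]  # control group
    b = [random.gauss(d, 1.0) for _ in range(n)]    # true effect d
    # Two-sample z statistic with known unit variance
    z = (mean(b) - mean(a)) / (2 / n) ** 0.5
    if abs(z) > z_crit:
        rejections += 1

power = rejections / sims
print(round(power, 2))  # roughly 0.8
```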

9
Q

What is a good amount of power statistics?

A

Ideally, the minimum power required for a study is 80%.

10
Q

Is p-value the same as power?

A

No. The significance level (alpha), against which the p-value is compared, is the probability of rejecting the null hypothesis when it is actually true. Power is the probability of rejecting the null hypothesis when it is false.

11
Q

What sample size do we need for 80% power

A

To have 80% power to detect an effect size of d = 0.5 at alpha = 0.05 (two-sided), it would be sufficient to have a total sample size of n = (5.6/0.5)² ≈ 126, or n/2 = 63 in each group. (The 5.6 is roughly 2 × (z₀.₉₇₅ + z₀.₈₀) = 2 × (1.96 + 0.84).)
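The underlying normal-approximation formula, n per group ≈ 2 (z₁₋α/₂ + z₁₋β)² / d², can be sketched with the standard library:

```python
from math import ceil
from statistics import NormalDist

# Normal-approximation sample size for a two-sample comparison:
# n per group ~= 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
def n_per_group(d: float, alpha: float = 0.05, power: float = 0.8) -> int:
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96
    z_b = z.inv_cdf(power)           # ~0.84
    return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

print(n_per_group(0.5))  # 63 per group, 126 total
```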
