Hypothesis Testing - ERRORS Flashcards
If p ≤ 0.05, would our results be likely or unlikely if the null hypothesis were true?
Unlikely
If p ≤ 0.05 (& our results unlikely if the null hypothesis were true), would we reject or fail to reject the null hypothesis?
We would reject the null hypothesis
If p > 0.05, would our results be likely or unlikely if the null hypothesis were true?
Likely
If p > 0.05 (& our results likely if the null hypothesis were true), would we reject or fail to reject the null hypothesis?
We would fail to reject the null hypothesis
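A minimal sketch of this decision rule in Python (the function name `decide`, the default threshold argument, and the example p-values are illustrative, not from the cards):

```python
def decide(p_value, alpha=0.05):
    """Apply the significance threshold from these cards.

    If p <= alpha the observed results would be unlikely under the
    null hypothesis, so we reject H0; otherwise we fail to reject it.
    """
    if p_value <= alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"


print(decide(0.03))  # unlikely under H0 -> reject
print(decide(0.20))  # likely under H0 -> fail to reject
```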
Is decision-making based on p-values fallible or infallible?
Fallible
What is the consequence of decision-making based on p-values being fallible?
That there are 2 types of errors we can make when deciding to reject or fail to reject the null hypothesis.
If the reality is that our null hypothesis (based on a population) is true, & our decision (inference based on a sample) is to fail to reject our null hypothesis, would there be an error or would we be correct?
We would be correct
If the reality is that our null hypothesis (based on a population) is false, & our decision (inference based on a sample) is to fail to reject our null hypothesis, would there be an error or would we be correct?
There would be an error
If the reality is that our null hypothesis (based on a population) is true, & our decision (inference based on a sample) is to reject our null hypothesis, would there be an error or would we be correct?
There would be an error
If the reality is that our null hypothesis (based on a population) is false, & our decision (inference based on a sample) is to reject our null hypothesis, would there be an error or would we be correct?
We would be correct
What is 1 way in which we can express the null hypothesis?
As H0.
What is a type I error?
When the reality is that our null hypothesis (based on a population) is true, but our decision (inference based on a sample) is to reject the null hypothesis (we find an effect that doesn't exist).
What is a type II error?
When the reality is that our null hypothesis (based on a population) is false, but our decision (inference based on a sample) is to fail to reject the null hypothesis (we miss an effect that does exist).
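Putting the reality/decision cards together, here is a small illustrative sketch of the 2x2 outcome grid (the dictionary and its labels are my own wording, not a standard API):

```python
# (reality about H0, decision from the sample) -> outcome
OUTCOMES = {
    ("H0 true",  "reject H0"):         "type I error (false positive)",
    ("H0 true",  "fail to reject H0"): "correct",
    ("H0 false", "reject H0"):         "correct",
    ("H0 false", "fail to reject H0"): "type II error (missed effect)",
}

for (reality, decision), outcome in OUTCOMES.items():
    print(f"{reality:8s} | {decision:17s} -> {outcome}")
```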
How is it possible to never confuse type I & II errors again?
Simply by remembering the “Boy Who Cried Wolf” analogy (& replacing the word “wolf” with the word “effect”)
In which order did the boy who cried wolf cause type I & type II errors?
He caused type I errors first, then a type II error.
What happened in the “Boy Who Cried Wolf” analogy?
First, everyone believed that there was a wolf when there wasn’t, and then everyone believed there was no wolf when there was.
What is another name for a type I error?
A false positive
How could the “Boy Who Cried Wolf” analogy be applied to type I errors?
The boy cries wolf when there is no wolf, causing the villagers to incorrectly reject the null hypothesis that there is no wolf.
How could the “Boy Who Cried Wolf” analogy be applied to type II errors?
A wolf really is there, but the villagers incorrectly fail to reject the null hypothesis that there is no wolf.
In what way is guarding against errors a balancing act?
If we set our statistical significance threshold very low to reduce the chance of type I errors (e.g. p ≤ 5 × 10⁻⁸), this increases our chance of making a type II error and missing real effects.
What is the 5% significance threshold a tradeoff between in relation to errors?
Type I & type II errors.
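A rough simulation of this tradeoff, assuming normally distributed data and a two-sample t-test from scipy (the sample size, number of trials, and effect size are illustrative): lowering the threshold from 0.05 to 5 × 10⁻⁸ pushes the type I error rate towards zero but pushes the type II error rate towards one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 30, 2000

def error_rates(alpha, effect=0.5):
    type_i = type_ii = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        # Null is true: same mean, so any rejection is a type I error.
        b_null = rng.normal(0.0, 1.0, n)
        if stats.ttest_ind(a, b_null).pvalue <= alpha:
            type_i += 1
        # Null is false: shifted mean, so failing to reject is a type II error.
        b_alt = rng.normal(effect, 1.0, n)
        if stats.ttest_ind(a, b_alt).pvalue > alpha:
            type_ii += 1
    return type_i / trials, type_ii / trials

for alpha in (0.05, 5e-8):
    t1, t2 = error_rates(alpha)
    print(f"alpha={alpha:<7g} type I rate={t1:.3f}  type II rate={t2:.3f}")
```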
When can we reject the null hypothesis & say we have a significant effect?
If the probability (p-value) of the effect occurring by chance (if the null hypothesis were true) is less than or equal to 0.05.
What is the chance of finding a significant effect at the 0.05 threshold if the null hypothesis is true?
At most 1 in 20 (5%).
When do we fail to reject the null hypothesis?
If the probability (p-value) of the effect occurring by chance (if the null hypothesis were true) is greater than 0.05.
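A worked sketch tying the last few cards together, again assuming scipy's two-sample t-test on made-up data (the group means, sizes, and variable names are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(10.0, 2.0, 40)    # hypothetical control group
treatment = rng.normal(11.5, 2.0, 40)  # hypothetical treated group

result = stats.ttest_ind(control, treatment)
print(f"p-value = {result.pvalue:.4g}")

if result.pvalue <= 0.05:
    # A result like this would occur by chance at most about 1 time in 20
    # if H0 were true, so we call the effect significant.
    print("Reject H0: significant effect")
else:
    print("Fail to reject H0: no significant effect")
```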