Type 1 And 2 Errors Flashcards
What is a type one error (two marks)?
When the researcher has used a lenient P value. The researcher thinks the results are significant when they are actually due to chance/error. So they wrongly accept the alternative/experimental hypothesis and wrongly reject the null.
What is meant by a Type II error (two marks)?
When the researcher has used a stringent P value. They think that their results are not significant (due to chance/error) when they could be significant. So they wrongly accept the null hypothesis and wrongly reject the experimental/alternative.
What is the difference between type one and two errors?
In a type one error the null hypothesis is rejected when it is true whereas in a Type II error the null hypothesis is accepted when it is false.
Why do psychologists use the 5% significance level?
It strikes a balance between the risk of making a type one and two error. It is a conventional significance level.
What does P < 0.10 mean?
The probability that something is due to chance/error is less than 10%
Is P < 0.10 lenient or stringent?
Lenient
Is P < 0.10 likely to be a type one or Type II error?
Type one error; wrongly accepting the experimental hypothesis and rejecting the null.
What does P < 0.05 mean?
The probability that something is due to chance/error is less than 5%
What is the conventional P value and why?
5% significance (P < 0.05) is a universally accepted P value because it strikes a balance between making a type one error and a Type II error. It is the conventional significance level.
What is P < 0.01?
The probability that something is due to chance/error is less than 1%
Is P < 0.01 too lenient or stringent?
Stringent.
Is P < 0.01 More likely to be a type one or Type II error?
Type II error. Wrongly accepting the null hypothesis and rejecting the experimental.
What do you need to compare the calculated/observed value to to check for a type one error?
A critical value from a more stringent P value. If the results are still significant then the researcher has not made a type one error. If the results are now not significant then there is a chance of a type one error.
To check for Type II error what do you need to compare the calculated/observed value to?
The critical value from a more lenient P value. If the results are still not significant then the researcher has not made a Type II error. If the results are now significant, then there is a chance of a Type II error.
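The checking procedure on these two cards can be sketched as a small program. This is a minimal illustration, not part of the cards: it assumes a test (such as chi-square) where results are significant when the observed value is greater than or equal to the critical value, and all numbers in the example are hypothetical.

```python
def is_significant(observed, critical):
    """Significant if the observed value meets or exceeds the critical value
    (assumed direction, as in a chi-square test)."""
    return observed >= critical

def check_type_one(observed, stricter_critical):
    """Compare the observed value to a critical value from a MORE STRINGENT
    P value (e.g. P < 0.01 instead of P < 0.05)."""
    if is_significant(observed, stricter_critical):
        return "still significant: no type one error"
    return "now not significant: chance of a type one error"

def check_type_two(observed, lenient_critical):
    """Compare the observed value to a critical value from a MORE LENIENT
    P value (e.g. P < 0.10 instead of P < 0.05)."""
    if is_significant(observed, lenient_critical):
        return "now significant: chance of a Type II error"
    return "still not significant: no Type II error"

# Hypothetical values: observed 7.2 vs a stricter critical value of 6.63,
# and observed 2.1 vs a more lenient critical value of 2.71.
print(check_type_one(7.2, 6.63))  # still significant: no type one error
print(check_type_two(2.1, 2.71))  # still not significant: no Type II error
```

The key point the sketch mirrors is the direction of the re-check: a more stringent critical value tests for a type one error, a more lenient one for a Type II error.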
Once you have checked for a type one or Type II error and the results are significant, what must you comment on?
The percentage of confidence you now have in your results.
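As a quick sketch of that comment, assuming "percentage of confidence" means 100% minus the significance level reached (e.g. the 5% level gives 95% confidence):

```python
def confidence(significance_percent):
    """Percentage confidence at a given significance level,
    e.g. the 5% level gives 95, the 1% level gives 99."""
    return 100 - significance_percent

print(confidence(5))   # 95
print(confidence(1))   # 99
print(confidence(10))  # 90
```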