Introduction Flashcards
Scientific method, hypotheses, types of errors, power, effect size
Define scientific method broadly
The principles of research and experimentation, and the philosophical basis of those principles.
Diagram of the scientific method from Bowley
Insert drawing here.
Central process of the scientific method
Starts with a question or problem. A hypothesis is constructed, a series of experiments is performed to test (not prove) the hypothesis, the data are examined, and a conclusion is drawn (the data either did or did not support the hypothesis).
Null vs. Alternative Hypothesis
Null (H0): Thing 1 = Thing 2
Alternative (HA): Thing 1 ≠ Thing 2
The null hypothesis is either nullified or not nullified by the statistical test.
Two possible types of error from accepting or rejecting the null hypothesis
If the null is rejected → Type I error may be made
If the null is accepted → Type II error may be made.
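A quick simulation can make the Type I error concrete. The sketch below (hypothetical data, standard library only) repeatedly samples two groups from the *same* population, so H0 is true by construction, and counts how often a two-sample t statistic exceeds the two-sided critical value for α = 0.05; the false-positive rate should land near α.

```python
import random
import statistics

def t_stat(a, b):
    """Two-sample t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(42)
n, reps, crit = 50, 2000, 1.984  # crit: two-sided t critical value, df = 98, alpha = 0.05
false_positives = 0
for _ in range(reps):
    # H0 is true: both groups are drawn from the same N(0, 1) population
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(t_stat(a, b)) > crit:
        false_positives += 1

type_1_rate = false_positives / reps
print(type_1_rate)  # should be close to alpha = 0.05
```

Every rejection here is, by definition, a Type I error, since the null hypothesis is true in every replicate.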
Type I error (α)
The Type I error rate, denoted by the symbol α, is the error rate of the test. It is the probability of rejecting H0 when it is true.
Define p-value
The probability of observing, by random sampling, a test statistic as large as or larger than the one obtained from the study. The p-value is the Type I error rate incurred if the null hypothesis is rejected based on that test statistic.
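The definition above translates directly into a permutation test: under H0 the group labels are exchangeable, so the p-value is the fraction of label shufflings that produce a difference at least as large as the one observed. A minimal sketch with made-up measurements (standard library only):

```python
import random
import statistics

random.seed(1)
# Hypothetical measurements for two groups of 8
control = [5.1, 4.8, 5.4, 5.0, 4.9, 5.2, 5.3, 4.7]
treated = [5.6, 5.9, 5.4, 6.0, 5.7, 5.5, 5.8, 6.1]

observed = abs(statistics.mean(treated) - statistics.mean(control))

# Shuffle the pooled data and see how often random relabeling yields a
# mean difference as large as or larger than the observed one.
pooled = control + treated
count, reps = 0, 10000
for _ in range(reps):
    random.shuffle(pooled)
    diff = abs(statistics.mean(pooled[:8]) - statistics.mean(pooled[8:]))
    if diff >= observed:
        count += 1

p_value = count / reps
print(p_value)  # small: a difference this large rarely arises by chance
```

A small p-value means a test statistic this extreme would rarely arise from random sampling alone under H0.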
Type II error (β)
The probability of accepting H0 when it is false. It is denoted by the symbol β. It relates to the power of the test (1-β) which is the probability of rejecting H0 when it is false.
What does Type II error rate depend on?
The characteristics of the population, the precision of the experiment, and the specific hypothesis under test.
General Factors that affect the level of β:
- Affected by the magnitude of the difference between the two estimates; the closer they are, the higher the Type II error.
- Dependent upon the standard error of the estimate. The standard error is calculated by dividing the sample standard deviation by √n, where n is the sample size. Error variance can be decreased through technical enhancements, improved experimental design, and appropriate statistical analyses; decreasing experimental error decreases the probability of making a Type II error.
- The greater the sample size, the larger the divisor (√n) and the smaller the standard error. A greater sample size therefore reduces the Type II error rate.
Define power of the test
The power of the test (1-β) is the probability of rejecting H0 when it is false. As the power of a statistical test increases, the probability of a Type II error decreases.
What four values are required to determine the power of the test?
- the magnitude of the difference
- the standard error of the estimates
- the sample size
- the Type I error rate of the test
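The four ingredients above can be combined into an approximate power calculation. The sketch below uses a normal approximation for a two-sample test with α fixed at 0.05 (two-sided); the function name and inputs are illustrative, not from the source.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(delta, sigma, n):
    """Approximate power of a two-sided, two-sample test at alpha = 0.05.
    delta: true difference in means; sigma: common SD; n: per-group size."""
    se = sigma * math.sqrt(2 / n)  # standard error of the difference
    z_crit = 1.959964              # two-sided critical value for alpha = 0.05
    # Probability the test statistic lands beyond a critical value when H0 is false
    return 1 - normal_cdf(z_crit - delta / se) + normal_cdf(-z_crit - delta / se)

# A difference of 0.5 SD with 64 subjects per group gives power of roughly 0.8
print(round(power_two_sample(delta=0.5, sigma=1.0, n=64), 3))
```

Note how the function exposes exactly the listed inputs: the magnitude of the difference (delta), the standard error (built from sigma and n), the sample size, and the Type I error rate (fixed here at 0.05).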
What two factors can alter power?
Variance and sample size (researchers cannot control the true means, etc.).
Diagram of the relationship between decisions and errors associated with the acceptance/rejection of the null hypothesis
Insert drawing here.
Define the effect size
An index that quantifies the magnitude of a treatment difference.
p-value and effect size relationship
The p-value conveys statistical significance; the effect size conveys the strength of the difference or association.
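One common effect-size index (an illustrative choice, not specified in the source) is Cohen's d, the mean difference standardized by the pooled standard deviation. A minimal sketch with hypothetical scores:

```python
import statistics

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    sp = (((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b))
          / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / sp

a = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # hypothetical control scores
b = [11.0, 12.0, 10.5, 11.5, 11.2, 10.8] # hypothetical treated scores (shifted by +1)
print(round(cohens_d(b, a), 2))
```

Unlike the p-value, d does not shrink toward "significance" as the sample grows; it keeps describing how big the difference is.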
Define STUDENT’S t-test
A test of the difference between two estimates.
If they are the same (under H0), the expected difference is zero.
If they differ, then their absolute difference will be greater than zero.
Two-tailed t-test vs. one-tailed t-test
A two-tailed t-test asks whether two estimates differ in either direction.
A one-tailed t-test asks whether one estimate is larger than the other (the direction is specified in advance).
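The two cases use different critical values for the same statistic. The sketch below (hypothetical yields, standard library only) computes a pooled two-sample t statistic and compares it against table critical values for df = 18, α = 0.05, for both the two-tailed and the one-tailed decision rule.

```python
import statistics

# Hypothetical yields under two treatments, 10 plots each (df = 18)
t1 = [2.1, 2.4, 1.9, 2.3, 2.2, 2.0, 2.5, 2.2, 2.1, 2.3]
t2 = [2.6, 2.4, 2.7, 2.5, 2.8, 2.6, 2.3, 2.7, 2.5, 2.9]

n1, n2 = len(t1), len(t2)
sp2 = ((n1 - 1) * statistics.variance(t1)
       + (n2 - 1) * statistics.variance(t2)) / (n1 + n2 - 2)
t = (statistics.mean(t1) - statistics.mean(t2)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# Critical values from a t table, df = 18, alpha = 0.05
two_tailed_crit = 2.101  # reject if |t| > 2.101 (estimates differ, either direction)
one_tailed_crit = 1.734  # reject if t < -1.734 (treatment 2 specifically larger)

print(round(t, 2), abs(t) > two_tailed_crit, t < -one_tailed_crit)
```

The one-tailed cutoff is smaller in magnitude, so a directional test has more power, but only for the pre-specified direction.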
When is t-test useful?
When there are only 2 treatments (groups) to compare.