Chapter 2 Flashcards
α-level
the probability of making a Type I error (usually this value is .05).
Alternative hypothesis
the prediction that there will be an effect (i.e., that your experimental manipulation will have some effect or that certain variables will relate to each other).
β-level
the probability of making a Type II error (Cohen, 1992, suggests a maximum value of .2).
Bonferroni correction
a correction applied to the α-level to control the overall Type I error rate when multiple significance tests are carried out. Each test conducted should use a criterion of significance of the α-level (normally .05) divided by the number of tests conducted. The correction is simple and effective, but tends to be too strict when lots of tests are performed.
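A minimal sketch of the correction, using three hypothetical p-values (the values are made up for illustration):

```python
# Bonferroni correction: each test is judged against alpha divided by
# the number of tests, keeping the overall Type I error rate at alpha.
alpha = 0.05
p_values = [0.001, 0.020, 0.040]  # hypothetical p-values from three tests

corrected_alpha = alpha / len(p_values)  # .05 / 3 ≈ .0167
significant = [p < corrected_alpha for p in p_values]

print(round(corrected_alpha, 4))  # 0.0167
print(significant)                # [True, False, False]
```

Note that a p of .02, significant at the usual .05 criterion, no longer passes once the correction is applied.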
Central limit theorem
this theorem states that when samples are large (above about 30) the sampling distribution will take the shape of a normal distribution regardless of the shape of the population from which the sample was drawn. For small samples the t-distribution better approximates the shape of the sampling distribution. We also know from this theorem that the standard deviation of the sampling distribution (i.e., the standard error of the sample mean) will be equal to the standard deviation of the sample(s) divided by the square root of the sample size (N).
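The theorem can be checked by simulation: draw repeated samples from a deliberately skewed population and compare the standard deviation of the resulting sample means with the value SD/√N predicted above (population, sample size and number of resamples here are arbitrary choices for illustration):

```python
import random
import statistics

# Simulate the sampling distribution of the mean from a skewed
# (exponential) population and compare its spread with SD / sqrt(N).
random.seed(1)
population = [random.expovariate(1.0) for _ in range(100_000)]  # skewed

N = 50  # sample size (above ~30, so the theorem should apply)
sample_means = [statistics.mean(random.sample(population, N))
                for _ in range(2_000)]

predicted_se = statistics.stdev(population) / N ** 0.5  # SD / sqrt(N)
observed_se = statistics.stdev(sample_means)            # SD of the means

print(round(predicted_se, 3), round(observed_se, 3))  # values should be close
```

The two numbers agree closely even though the population itself is far from normal, which is the point of the theorem.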
Cohen’s d
an effect size that expresses the difference between two means in standard deviation units. In general it can be estimated by dividing the difference between the two means by a standard deviation (e.g., d = (mean₁ − mean₂)/s, where s is often the pooled standard deviation of the two groups).
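A short sketch of the estimate, using made-up scores for two groups and the pooled standard deviation (one common choice for s):

```python
import statistics

# Cohen's d: difference between two group means divided by a pooled
# standard deviation, expressing the effect in SD units.
group1 = [5.0, 6.0, 7.0, 8.0, 9.0]  # hypothetical scores
group2 = [3.0, 4.0, 5.0, 6.0, 7.0]

def cohens_d(a, b):
    n1, n2 = len(a), len(b)
    s1, s2 = statistics.variance(a), statistics.variance(b)
    # Pooled SD: weighted average of the two sample variances
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

print(round(cohens_d(group1, group2), 2))  # 1.26: a mean difference of
                                           # about 1.26 standard deviations
```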
Confidence interval
for a given statistic calculated for a sample of observations (e.g., the mean), the confidence interval is a range of values around that statistic that is believed to contain, with a certain probability (e.g., 95%), the true value of that statistic (i.e., the population value).
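A sketch of a 95% confidence interval for a mean, using a made-up sample and the normal approximation (mean ± 1.96 standard errors); with small samples a critical value from the t-distribution would be used instead of 1.96:

```python
import statistics

# 95% confidence interval for a sample mean (normal approximation).
data = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0]  # hypothetical sample

mean = statistics.mean(data)
se = statistics.stdev(data) / len(data) ** 0.5  # standard error of the mean
lower, upper = mean - 1.96 * se, mean + 1.96 * se

print(round(lower, 2), round(upper, 2))  # interval believed to contain
                                         # the population mean
```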
Degrees of freedom
an impossible thing to define in a few pages, let alone a few lines. Essentially it is the number of ‘entities’ that are free to vary when estimating some kind of statistical parameter. In a more practical sense, it has a bearing on significance tests for many commonly used test statistics (such as the F-ratio, t-test, chi-square statistic) and determines the exact form of the probability distribution for these test statistics. The explanation involving soccer players in Chapter 2 is far more interesting…
Deviance
the difference between the observed value of a variable and the value of that variable predicted by a statistical model.
Effect size
an objective and (usually) standardized measure of the magnitude of an observed effect. Measures include Cohen's d, Glass's g and Pearson's correlation coefficient, r.
Experimental hypothesis
synonym for alternative hypothesis.
Experimentwise error rate
the probability of making a Type I error in an experiment involving one or more statistical comparisons when the null hypothesis is true in each case.
Familywise error rate
the probability of making a Type I error in any family of tests when the null hypothesis is true in each case. The ‘family of tests’ can be loosely defined as a set of tests conducted on the same data set and addressing the same empirical question.
Fit
how sexually attractive you find a statistical test. Alternatively, it’s the degree to which a statistical model is an accurate representation of some observed data. (Incidentally, it’s just plain wrong to find statistical tests sexually attractive.)
Linear model
a model that is based upon a straight line.