Assumptions (Fields Ch. 5) Flashcards
Bootstrap
a technique in which the sampling distribution of a statistic is estimated by taking repeated samples (with replacement) from the data set (in effect, treating the data as a population from which smaller samples are taken). The statistic of interest (e.g., the mean or a b coefficient) is calculated for each sample, and the sampling distribution of the statistic is estimated from these values. The standard error of the statistic is estimated as the standard deviation of the sampling distribution created from the bootstrap samples. From this, confidence intervals and significance tests can be computed.
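The procedure described above can be sketched in a few lines of standard-library Python; the data and number of resamples are invented for illustration:

```python
# A minimal sketch of bootstrapping the standard error of the mean.
# The data and number of resamples are made up for illustration.
import random
import statistics

random.seed(42)  # for reproducibility

data = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 3.9, 5.2]
n_boot = 2000

# Resample with replacement; compute the mean of each bootstrap sample
boot_means = [
    statistics.mean(random.choices(data, k=len(data)))
    for _ in range(n_boot)
]

# The bootstrap SE is the SD of the bootstrap sampling distribution
se = statistics.stdev(boot_means)

# A 95% percentile confidence interval from the bootstrap distribution
boot_means.sort()
ci_lower = boot_means[int(0.025 * n_boot)]
ci_upper = boot_means[int(0.975 * n_boot)]
print(round(se, 3), round(ci_lower, 2), round(ci_upper, 2))
```

The percentile interval shown here is the simplest of several bootstrap confidence intervals; software typically offers refined versions (e.g., bias-corrected and accelerated).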
Heterogeneity of variance
the opposite of homogeneity of variance. This term means that the variance of one variable varies (i.e., is different) across levels of another variable.
Heteroscedasticity
the opposite of homoscedasticity. This occurs when the residuals at each level of the predictor variable(s) have unequal variances. Put another way, at each point along any predictor variable, the spread of residuals is different.
Homogeneity of variance
the assumption that the variance of one variable is stable (i.e., relatively similar) at all levels of another variable.
Homoscedasticity
an assumption in regression analysis that the residuals at each level of the predictor variable(s) have similar variances. Put another way, at each point along any predictor variable, the spread of residuals should be fairly constant.
Independence
the assumption that one data point does not influence another. When data come from people, it basically means that the behaviour of one person does not influence the behaviour of another.
Kolmogorov-Smirnov test
one way to test whether data are normal: a test of whether a distribution of scores differs significantly from a normal distribution. A significant value indicates a deviation from normality, but this test is notoriously affected by large samples, in which even small deviations from normality yield significant results.
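The statistic behind the test, D, is simply the largest gap between the empirical cumulative distribution of the sample and the cumulative distribution of a fitted normal. A standard-library sketch (the data are invented; real software also computes a p-value):

```python
# Kolmogorov-Smirnov statistic D against a fitted normal distribution.
# The data are invented; statistical software would also supply a p-value.
import statistics

data = sorted([2.1, 2.5, 2.8, 3.0, 3.1, 3.3, 3.6, 4.0, 4.2, 9.5])

# Fit a normal distribution using the sample mean and SD
dist = statistics.NormalDist(statistics.mean(data), statistics.stdev(data))

n = len(data)
d = 0.0
for i, x in enumerate(data):
    cdf = dist.cdf(x)
    # The empirical CDF steps from i/n to (i+1)/n at x;
    # check the gap on both sides of the step
    d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))

print(round(d, 3))  # larger D = bigger departure from normality
```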
Levene’s test
this tests the hypothesis that the variances in different groups are equal (i.e., the difference between the variances is zero). It is, in essence, a one-way ANOVA on the deviations (i.e., the absolute value of the difference between each score and the mean of its group). A significant result indicates that the variances are significantly different; therefore, the assumption of homogeneity of variance has been violated. When sample sizes are large, even small differences in group variances can produce a significant Levene's test.
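The "ANOVA on absolute deviations" idea can be shown directly; the two groups below are invented (in practice you would use a library routine, e.g. scipy.stats.levene, to get a p-value):

```python
# A sketch of Levene's test "by hand": a one-way ANOVA F ratio computed
# on the absolute deviations of each score from its group mean.
# Groups and scores are invented for illustration.
import statistics

groups = [
    [12, 14, 15, 13, 16],   # group 1: small spread
    [8, 20, 11, 25, 6],     # group 2: large spread
]

# Step 1: absolute deviation of each score from its group mean
devs = [[abs(x - statistics.mean(g)) for x in g] for g in groups]

# Step 2: one-way ANOVA F ratio on those deviations
all_devs = [d for g in devs for d in g]
grand = statistics.mean(all_devs)
k = len(devs)
n = len(all_devs)

ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in devs)
ss_within = sum((d - statistics.mean(g)) ** 2 for g in devs for d in g)

f = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f, 2))  # a large F suggests the group variances differ
```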
Mixed normal distribution
a normal-looking distribution that is contaminated by a small proportion of scores from a different distribution. These distributions are not normal and have too many scores in the tails (i.e., at the extremes). The effect of these heavy tails is to inflate the estimate of the population variance. This, in turn, makes significance tests lack power.
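The variance inflation is easy to demonstrate with a simulated contaminated distribution (the mixing proportion and SDs below are invented for illustration):

```python
# An illustrative sketch: contaminate a normal distribution with 10% of
# scores from a much wider distribution and watch the variance estimate
# inflate. All parameters are invented.
import random
import statistics

random.seed(1)

pure = [random.gauss(0, 1) for _ in range(5000)]
# Mixed normal: 90% from N(0, 1) plus 10% "contaminants" from N(0, 5)
mixed = [random.gauss(0, 1) if random.random() < 0.9 else random.gauss(0, 5)
         for _ in range(5000)]

print(round(statistics.variance(pure), 2))
print(round(statistics.variance(mixed), 2))
# The contaminated sample's variance is far larger, even though 90% of
# its scores come from the same N(0, 1) distribution.
```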
Outlier
an observation (or set of observations) that is very different from most of the others. Outliers bias statistics (e.g., the mean) and their standard errors and confidence intervals.
P-P plot
short for a probability-probability plot. A graph plotting the cumulative probability of a variable against the cumulative probability of a particular distribution (often a normal distribution). Like a Q-Q plot, if values fall on the diagonal of the plot then the variable shares the same distribution as the one specified. Deviations from the diagonal show deviations from the distribution of interest.
Parametric test
a test that requires data from one of the large catalogue of distributions that statisticians have described. Normally this term is used for parametric tests based on the normal distribution, which require four basic assumptions that must be met for the test to be accurate: a normally distributed sampling distribution (see normal distribution), homogeneity of variance, interval or ratio data, and independence.
Q-Q plot
short for a quantile-quantile plot. A graph plotting the quantiles of a variable against the quantiles of a particular distribution (often a normal distribution). Like a P-P plot, if values fall on the diagonal of the plot then the variable shares the same distribution as the one specified. Deviations from the diagonal show deviations from the distribution of interest.
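The numbers behind a normal Q-Q plot can be computed with the standard library: each sample quantile is paired with the corresponding quantile of a fitted normal, and the pairs should lie near a straight line if the data are normal. The data and plotting position are illustrative assumptions:

```python
# A sketch of the coordinates of a normal Q-Q plot. Data are invented;
# (i + 0.5) / n is one common choice of plotting position.
import statistics

data = sorted([4.1, 4.8, 5.0, 5.2, 5.5, 5.9, 6.1, 6.4, 6.8, 7.3])
n = len(data)

dist = statistics.NormalDist(statistics.mean(data), statistics.stdev(data))

# Pair each observed order statistic with the matching normal quantile
pairs = [(dist.inv_cdf((i + 0.5) / n), x) for i, x in enumerate(data)]

for theo, obs in pairs:
    print(round(theo, 2), obs)  # near-equal columns = near-normal data
```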
Robust test
a term applied to a family of procedures to estimate statistics that are reliable even when the normal assumptions of the statistic are not met.
Transformation
the process of applying a mathematical function to all observations in a data set, usually to correct some distributional abnormality such as skew or kurtosis.
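A common example is a log transformation applied to positively skewed scores. A standard-library sketch (the data are invented; the skewness formula is the usual adjusted sample skewness):

```python
# A sketch of a log transformation correcting positive skew.
# The data are invented for illustration.
import math
import statistics

def skewness(xs):
    """Adjusted sample skewness: mean cubed standardized deviation."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    n = len(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in xs)

data = [1, 1, 2, 2, 2, 3, 3, 4, 5, 8, 15, 40]  # positively skewed
logged = [math.log(x) for x in data]            # log transformation

print(round(skewness(data), 2))
print(round(skewness(logged), 2))  # much closer to 0 after transforming
```

Note that a log transformation requires all scores to be positive; a constant is usually added first when zeros or negative values are present.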