Dr. Mac's Finals List Flashcards
ANOVA- Analysis of Variance- CH. 11
A statistical test that checks if the means for several groups are equal. Used as a way to avoid the increasing probability of a type I error that comes with running multiple t-tests.
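A minimal sketch of a one-way ANOVA in Python, assuming SciPy is available; the three groups are invented sample data:

```python
# One-way ANOVA: test whether several group means are equal.
from scipy import stats

group_a = [4, 5, 6, 5, 4]
group_b = [7, 8, 6, 9, 8]
group_c = [5, 6, 5, 4, 6]

# f_oneway returns the F-statistic and its p-value.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)
```

A small p-value suggests at least one group mean differs; follow-up comparisons are needed to find which one.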
Bonferroni correction- CH. 11
A method used to correct for the inflated type I error rate that can arise from multiple comparisons; the significance level (alpha) is divided by the number of comparisons.
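The correction itself is simple arithmetic: divide the overall significance level by the number of comparisons. A sketch with invented p-values:

```python
# Bonferroni correction: alpha is split across all comparisons.
alpha = 0.05
p_values = [0.003, 0.020, 0.041]         # hypothetical results of 3 t-tests
corrected_alpha = alpha / len(p_values)  # 0.05 / 3
significant = [p < corrected_alpha for p in p_values]
print(corrected_alpha, significant)
```

Only the first comparison survives the stricter threshold here.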
F-statistic- CH. 11
The test statistic used with normally distributed populations to determine whether the population means are equal; computed as the ratio of between-group variance to within-group variance.
Omega squared- CH. 11
An effect size measure for one-way ANOVA results; estimates the proportion of variance in the dependent variable accounted for by the independent variable.
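One common formula is omega squared = (SS_between − (k − 1)·MS_within) / (SS_total + MS_within); a sketch with invented sums of squares:

```python
# Omega squared from one-way ANOVA sums of squares (values invented).
ss_between = 22.93
ss_within = 10.8
k = 3                    # number of groups
n_total = 15             # total sample size

ms_within = ss_within / (n_total - k)
ss_total = ss_between + ss_within
omega_sq = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)
print(round(omega_sq, 3))
```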
Orthogonal planned contrasts- CH. 11
A type of a priori test; comparisons that are planned before analysis of data has begun because certain results are expected. Orthogonal planned contrasts help reduce type I error inflation that comes from multiple comparisons.
Post hoc tests- CH. 11
Comparisons made after the data have been analyzed to determine which group means differ significantly from one another.
T-test- CH. 11
A method for comparing two means.
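A sketch of an independent-samples t-test, assuming SciPy is available; the two samples are invented:

```python
# Independent-samples t-test: compare two group means.
from scipy import stats

sample_a = [4, 5, 6, 5, 4]
sample_b = [7, 8, 6, 9, 8]

t_stat, p_value = stats.ttest_ind(sample_a, sample_b)
print(t_stat, p_value)
```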
Categorical data- CH. 14
Data made up of categorical variables, which are variables measured at the nominal or ordinal level.
Chi-square test- CH. 14
A nonparametric test used to determine whether an actual distribution of categorical data values differs from the expected distribution.
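A sketch using SciPy's chi-square test of independence on an invented 2×2 table of observed counts:

```python
# Chi-square test of independence on a 2x2 contingency table.
from scipy.stats import chi2_contingency

observed = [[30, 10],
            [20, 40]]
chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)
```

`expected` holds the counts implied by the null hypothesis of no association.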
Contingency tables- CH. 14
A table used to display the frequency distributions of variables, often used to study the relationship between two or more categorical variables.
Crosstab analysis- CH. 14
Using a contingency table to study the relationship(s) between variables and focus on the most significant relationships.
Fisher’s exact test- CH. 14
A method used to test the relationship between categorical variables in instances where the sample size is too small to use the chi-square test.
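A sketch with SciPy, using a deliberately small invented table:

```python
# Fisher's exact test for a small 2x2 table.
from scipy.stats import fisher_exact

table = [[8, 2],
         [1, 5]]
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)
```

The function also reports the sample odds ratio for the table alongside the exact p-value.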
Odds ratio- CH. 14
A descriptive statistic used in categorical data analysis that measures effect size (the strength of the association between two binary data values).
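The odds ratio is plain arithmetic on a 2×2 table; the counts below are invented:

```python
# Odds ratio from a hypothetical 2x2 table:
#             event  no event
# exposed      a=20     b=80
# unexposed    c=10     d=90
a, b, c, d = 20, 80, 10, 90
odds_ratio = (a * d) / (b * c)   # equivalently (a/b) / (c/d)
print(odds_ratio)                # 2.25
```

A value above 1 means the odds of the event are higher in the exposed group.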
Phi and Cramer’s V- CH. 14
Statistics that report the strength of an association between two categorical variables.
Relative risk- CH. 14
A descriptive statistic that compares the probability of an event occurring in a group exposed to a specific risk factor with the probability in an unexposed group (the ratio of the two probabilities).
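Relative risk, sketched on the same style of hypothetical 2×2 table as above (counts invented):

```python
# Relative risk: ratio of event probability in exposed vs. unexposed.
a, b, c, d = 20, 80, 10, 90
risk_exposed = a / (a + b)       # 20 / 100 = 0.20
risk_unexposed = c / (c + d)     # 10 / 100 = 0.10
relative_risk = risk_exposed / risk_unexposed
print(relative_risk)             # 2.0: the event is twice as likely when exposed
```

Note the contrast with the odds ratio, which compares odds (a/b vs. c/d) rather than probabilities.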
Analysis of covariance- CH. 12
Also known as ANCOVA; a combination of analysis of variance (ANOVA) and regression analysis that checks if the population means for a dependent variable are equal across an independent variable, while controlling for the presence of covariates.
Assumption of sphericity- CH. 12
An assumption of repeated-measures ANOVA that the difference scores of paired levels of the repeated measures factor have equal variance.
Box’s M test- CH. 12
A method used to test the homogeneity of covariance matrices.
Covariate- CH. 12
A variable that influences the dependent variable, but is not the independent variable (i.e., not the variable of interest). Also known as a covariable.
Factorial analysis of variance- CH. 12
Data analysis that studies the effects of two or more independent variables (factors) on the dependent variable.
Multivariate- CH. 12
When a design examines two or more dependent variables.
Multivariate analysis of covariance- MANCOVA- CH. 12
An extension of analysis of covariance that is used in cases where there are two or more dependent variables and one or more covariates that need to be controlled.
Multivariate analysis of variance- MANOVA- CH. 12
An extension of analysis of variance that examines group differences on a combination of multiple dependent variables.
Simple effect analysis- CH. 12
Statistical analysis that examines the effect of one variable at each level of another variable, to determine whether the effect is significant at each level.
Sphericity-assumed statistics- CH. 12
Repeated-measures design statistics provided if the assumption of sphericity is not violated.
Univariate- CH. 12
When a design examines a single dependent variable.
Multiple linear regression- CH. 10
An extension of simple linear regression, where two or more independent variables, either continuous or categorical, are jointly used to predict a single dependent variable.
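A sketch using NumPy's least-squares solver; the data are constructed so the true coefficients are known in advance:

```python
# Multiple linear regression: fit y = b0 + b1*x1 + b2*x2.
import numpy as np

x1 = np.array([1, 2, 3, 4, 5], dtype=float)
x2 = np.array([2, 1, 4, 3, 5], dtype=float)
y = 3 + 2 * x1 + 1 * x2                          # noiseless, so the fit is exact

X = np.column_stack([np.ones_like(x1), x1, x2])  # column of 1s = intercept
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)                                      # ~[3., 2., 1.]
```

With real data the recovered coefficients would only approximate the true values.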
Enter method- CH. 10
The default method in regression analysis, in which all of the independent variables are entered into the regression model at the same time.
Goodness of fit- CH. 10
A measure of how well a model fits a set of observations.
Hierarchical method- CH. 10
A method in regression analysis that utilizes blocks of independent variables (chosen based on importance), added one block at a time, to see whether each block improves the model's predictive power.
Linear- CH. 10
Generally referring to the relationship of one variable to another, which resembles a line.
Linearity- CH. 10
A statistical term that is used to represent a mathematical relationship and graphically shown as a straight line.
Logistic regression- CH. 10
A type of regression analysis that predicts group membership in a categorical dependent variable from independent variables, which are usually continuous but can be categorical as well; called binary logistic regression when the dependent variable has two categories and multinomial logistic regression when it has more than two.
Method of least squares- CH. 10
An approach used in regression analysis to find the line that best fits the data by minimizing the sum of squared residuals.
Multicollinearity- CH. 10
A high correlation (often taken as above .85) between independent variables, such that one can be nearly linearly predicted from the others (i.e., it adds no new explanatory information).
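A quick way to spot multicollinearity is the pairwise correlation between predictors; a sketch with an invented, nearly collinear pair:

```python
# Multicollinearity check: correlation between two predictors.
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = 2 * x1 + np.array([0.1, -0.1, 0.05, -0.05, 0.0])  # almost a multiple of x1
r = np.corrcoef(x1, x2)[0, 1]
print(round(r, 4))   # well above the ~.85 rule of thumb
```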
Percentage of variance- CH. 10
Proportion of variation in a given data set explained by independent, mediating and/or moderating variables.
Regression model- CH. 10
The equation produced by regression analysis that relates the dependent variable to the independent variable(s).