ANOVA, MANOVA, MANCOVA Flashcards
T-test
This test compares the means of two groups to determine whether they are statistically different from one another. It is appropriate whenever you want to compare the means of two groups, and especially as the analysis for the posttest-only two-group randomized experimental design.
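As a sketch, an independent-samples t-test of this kind can be run in Python with SciPy (the group scores below are invented for illustration):

```python
from scipy import stats

# Hypothetical scores for two groups (e.g., treatment vs. control)
treatment = [23, 25, 28, 30, 27, 26]
control = [18, 20, 19, 22, 21, 17]

# Independent-samples t-test comparing the two group means
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value (conventionally below .05) indicates the two means are statistically different.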
MANOVA
Compares the multivariate means of multiple groups. If this test is significant, a follow-up ANOVA can be run for each DV.
Characteristics of MANOVA
Has more than one DV
Uses an omnibus F-test: a significant omnibus test justifies running a separate ANOVA for each DV; after identifying significant DVs, post hoc tests such as Scheffé and Bonferroni can be run to understand which groups differ
The IV is categorical
Identifies significant DVs
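One way to run such an omnibus multivariate test in Python is with statsmodels' MANOVA class (the dataset here is made up for illustration):

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: one categorical IV (group) and two continuous DVs
df = pd.DataFrame({
    "group": ["a"] * 5 + ["b"] * 5 + ["c"] * 5,
    "dv1": [2, 3, 2, 4, 3, 6, 7, 6, 8, 7, 10, 11, 9, 12, 11],
    "dv2": [1, 2, 1, 2, 2, 4, 5, 4, 5, 5, 8, 9, 8, 9, 9],
})

# Omnibus test; the output reports Wilks' lambda, Pillai's trace, etc.
fit = MANOVA.from_formula("dv1 + dv2 ~ group", data=df)
print(fit.mv_test())
```

If the omnibus statistics are significant, each DV would then be followed up with its own ANOVA, as described above.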
Assumptions of MANOVA
Normality
Linearity
Homogeneity of regression (relevant when covariates are included, i.e., MANCOVA)
Reliability of covariates
ANOVA
Can be used for each DV after the MANOVA F-test is significant.
IV is discrete (categorical), DV is continuous
Compares means between more than 2 groups
Can be used with nested data.
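A minimal sketch of a one-way ANOVA comparing more than two group means, using SciPy (scores invented for illustration):

```python
from scipy import stats

# Hypothetical scores for three levels of a categorical IV
group_a = [4, 5, 6, 5, 4]
group_b = [8, 9, 7, 8, 9]
group_c = [12, 13, 11, 12, 13]

# One-way ANOVA: H0 is that all three group means are equal
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.6f}")
```

A significant F-test says at least one group mean differs; post hoc comparisons are needed to say which.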
Assumptions of ANOVA
Normality
Homogeneity of variance
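The homogeneity-of-variance assumption can be checked before running the ANOVA, for example with Levene's test in SciPy (made-up data; a large p-value means no evidence of unequal variances):

```python
from scipy import stats

# Hypothetical groups with similar spreads
group_a = [4, 5, 6, 5, 4]
group_b = [8, 9, 7, 8, 9]
group_c = [12, 13, 11, 12, 13]

# Levene's test: H0 is that all group variances are equal
stat, p_value = stats.levene(group_a, group_b, group_c)
print(f"W = {stat:.2f}, p = {p_value:.3f}")
```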
ANCOVA
Extension of ANOVA
Main effects and interactions of IVs are assessed after DV scores are adjusted for differences associated with one or more covariates
Used in experimental studies to remove predictable noise, by entering known nuisance variables as covariates
Can be used in non-experimental situations when subjects cannot be randomly assigned to treatments: it removes differences between subjects on the covariates, then analyzes the differences that remain as effects of the treatment
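A common way to fit an ANCOVA in Python is as a linear model with the covariate entered alongside the categorical IV, e.g. via statsmodels' formula interface (the data here are invented; this is one of several equivalent ways to specify the model):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: continuous DV, categorical treatment, one covariate
df = pd.DataFrame({
    "dv":    [10, 12, 11, 14, 13, 18, 20, 19, 22, 21],
    "group": ["ctrl"] * 5 + ["treat"] * 5,
    "cv":    [3, 4, 3, 5, 4, 3, 4, 4, 5, 5],
})

# ANCOVA: treatment effect on dv after adjusting for the covariate
model = smf.ols("dv ~ C(group) + cv", data=df).fit()
table = anova_lm(model, typ=2)
print(table)
```

The row for `C(group)` in the resulting table tests the treatment effect after the covariate adjustment.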
Assumptions of ANCOVA
CVs correlated with the DV but not highly with each other
CVs independent of treatment
Normality
Homogeneity of variance
Reliability of CVs
Linearity between each CV and the DV
Chi-square
Both variables are categorical.
A chi-square test is any statistical hypothesis test in which the sampling distribution of the test statistic follows a chi-square distribution when the null hypothesis is true.
Chi-square test statistics are often constructed from a sum of squared errors or from the sample variance. Test statistics that follow a chi-square distribution arise from an assumption of independent, normally distributed data, which is valid in many cases due to the central limit theorem.
A chi-square test can then be used to reject the hypothesis that the data are independent.
The chi-square goodness-of-fit test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories.
The test for independence, on the other hand, examines whether two categorical variables in a single population are associated; the null hypothesis is again that the variables are independent.
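As an illustration, the test of independence can be run on a contingency table of observed counts with SciPy (the counts below are fabricated):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = exposure yes/no, columns = outcome yes/no
observed = [[30, 10],
            [10, 30]]

# Compares observed counts with the counts expected under independence
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.5f}")
```

A small p-value rejects the null of independence, i.e., the two variables appear associated.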