T-Test (Comparing 2 means) & ANOVA (ch. 9, 10, 11, 12) Flashcards
Dummy Variable
a way of recoding a categorical variable with more than two categories into a series of variables, all of which are dichotomous and can take on values of only 0 or 1. There are seven basic steps to create such variables:
(1) count the number of groups you want to recode and subtract 1;
(2) create as many new variables as the value you calculated in step 1 (these are your dummy variables);
(3) choose one of your groups as a baseline (i.e., a group against which all other groups should be compared, such as a control group);
(4) assign that baseline group values of 0 for all of your dummy variables;
(5) for your first dummy variable, assign the value 1 to the first group that you want to compare against the baseline group (assign all other groups 0 for this variable);
(6) for the second dummy variable, assign the value 1 to the second group that you want to compare against the baseline group (assign all other groups 0 for this variable);
(7) repeat this process until you run out of dummy variables.
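A minimal pandas sketch of these steps, assuming a hypothetical grouping column named group with "control" as the baseline (the data and column names are made up for illustration):
```python
# Minimal sketch: dummy-coding a three-group variable with "control" as the baseline (hypothetical data).
import pandas as pd

df = pd.DataFrame({"group": ["control", "drug_a", "drug_b", "drug_a", "control"]})

# Steps 1-2: three groups -> 3 - 1 = 2 dummy variables are needed.
dummies = pd.get_dummies(df["group"], prefix="group", dtype=int)

# Steps 3-4: choose "control" as the baseline; dropping its column leaves it coded 0 on every dummy.
dummies = dummies.drop(columns=["group_control"])

# Steps 5-7: each remaining dummy is 1 for its own group and 0 for every other group.
df = pd.concat([df, dummies], axis=1)
print(df)
```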
Grand mean
the mean of an entire set of observations.
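For example (made-up numbers), the grand mean pools every observation rather than averaging the group means, which matters when group sizes differ:
```python
# Grand mean: the mean of all observations pooled together (illustrative numbers).
import numpy as np

group_a = [4, 5, 6]
group_b = [9, 11]

grand_mean = np.mean(group_a + group_b)                        # (4+5+6+9+11) / 5 = 7.0
mean_of_means = np.mean([np.mean(group_a), np.mean(group_b)])  # (5 + 10) / 2 = 7.5
print(grand_mean, mean_of_means)
```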
independent t-test
- The independent t-test compares two means, when those means have come from different groups of entities.
o SPSS: Look at the column labelled Levene’s Test for Equality of Variance. If the Sig. value is less than .05 then the assumption of homogeneity of variance has been broken and you should look at the row in the table labelled Equal variances not assumed. If the Sig. value of Levene’s test is bigger than .05 then you should look at the row in the table labelled Equal variances assumed.
o SPSS: Look at the table labelled Bootstrap for Independent Samples Test to get a robust confidence interval for the difference between means.
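A rough SciPy analogue of this decision rule with made-up data (stats.levene and stats.ttest_ind stand in for the SPSS output described above; the bootstrap confidence interval is not sketched here):
```python
# Minimal sketch: Levene's test, then an independent t-test with or without equal variances assumed.
import numpy as np
from scipy import stats

group_1 = np.array([5.2, 6.1, 5.8, 7.0, 6.4, 5.9])
group_2 = np.array([4.1, 4.8, 5.0, 4.4, 5.3, 4.6])

lev_stat, lev_p = stats.levene(group_1, group_2)  # p < .05 suggests homogeneity of variance is broken

# equal_var=False corresponds to the "Equal variances not assumed" row (Welch's t-test).
t_stat, p_value = stats.ttest_ind(group_1, group_2, equal_var=bool(lev_p > .05))
print(f"Levene p = {lev_p:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```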
paired-samples t-test
- The paired-samples t-test compares two means, when those means have come from the same entities (two different conditions experienced by the same participants).
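A minimal SciPy sketch with made-up before/after scores from the same participants:
```python
# Minimal sketch: paired-samples t-test on two conditions measured on the same participants.
from scipy import stats

before = [12.1, 14.3, 11.8, 13.5, 12.9]
after = [13.0, 15.1, 12.2, 14.4, 13.6]

t_stat, p_value = stats.ttest_rel(before, after)  # pairs the scores by position
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```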
Eta squared (η²)
an effect size measure that is the ratio of the model sum of squares to the total sum of squares. So, in essence, it is the coefficient of determination by another name. It doesn’t have an awful lot going for it: not only is it biased, but it typically measures the overall effect of an ANOVA, and effect sizes are more easily interpreted when they reflect specific comparisons (e.g., the difference between two means).
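As a worked example (the sums of squares are illustrative, not from any real data set):
```python
# Eta squared = model (between-groups) sum of squares / total sum of squares.
ss_model = 42.0   # illustrative value
ss_total = 120.0  # illustrative value

eta_squared = ss_model / ss_total
print(eta_squared)  # 0.35, i.e., the model accounts for 35% of the total variance
```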
Analysis of variance
a statistical procedure that uses the F-ratio to test the overall fit of a linear model. In experimental research this linear model tends to be defined in terms of group means, and the resulting ANOVA is therefore an overall test of whether group means differ.
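A minimal one-way example using SciPy's f_oneway with made-up data for three groups:
```python
# Minimal sketch: F-ratio testing whether three group means differ.
from scipy import stats

group_a = [4, 5, 6, 5, 4]
group_b = [7, 8, 6, 7, 9]
group_c = [10, 9, 11, 10, 12]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```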
Analysis of covariance (ANCOVA)
· Analysis of covariance (ANCOVA) compares several means adjusted for the effect of one or more other variables (called covariates); for example, if you have several experimental conditions and want to adjust for the age of the participants.
· Before the analysis check that the independent variable(s) and covariate(s) are independent. You can do this using ANOVA or a t-test to check that levels of the covariate do not differ significantly across groups.
· As with ANOVA, if you have generated specific hypotheses before the experiment use planned comparisons, but if you don’t have specific hypotheses use post hoc tests.
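A minimal statsmodels sketch of an ANCOVA fitted as a linear model, with hypothetical columns score, group, and age (age is the covariate):
```python
# Minimal sketch: group means on "score" adjusted for the covariate "age" (hypothetical data).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score": [10, 12, 11, 15, 17, 16, 20, 22, 21],
    "group": ["ctrl", "ctrl", "ctrl", "low", "low", "low", "high", "high", "high"],
    "age":   [23, 30, 27, 25, 31, 28, 24, 29, 26],
})

model = smf.ols("score ~ C(group) + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests for the group effect adjusted for age
```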
MANOVA
This can be thought of as ANOVA when there are several DVs.
MANOVA is used to test the difference between groups across several dependent variables/outcomes simultaneously.
Box’s test looks at the assumption of equal covariance matrices. This test can be ignored when sample sizes are equal because, in that case, some MANOVA test statistics are robust to violations of this assumption. If group sizes differ, this test should be inspected: if the value of Sig. is less than .001, then the results of the analysis should not be trusted.
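A minimal statsmodels sketch of a MANOVA on two hypothetical outcomes (this covers the omnibus multivariate test only, not Box's test):
```python
# Minimal sketch: testing group differences across two outcomes simultaneously (hypothetical data).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "group":     ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
    "outcome_1": [5.0, 6.1, 5.5, 7.2, 7.8, 7.0, 9.1, 8.8, 9.5],
    "outcome_2": [2.1, 2.4, 2.0, 3.5, 3.8, 3.2, 5.0, 4.7, 5.3],
})

manova = MANOVA.from_formula("outcome_1 + outcome_2 ~ group", data=df)
print(manova.mv_test())  # Pillai's trace, Wilks' lambda, Hotelling's trace, Roy's largest root
```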