T test Flashcards
What does the T test assess?
Differences in the means of 2 data sets
Assumptions of the T test
- Sample drawn from normal population
- randomly selected
- homogeneity of variance
Why T test instead of Z test?
-Z test inaccurate with small sample sizes
-T distributions are similar in shape to normal distributions but have thicker (heavier) tails
What does the Student's T test account for?
Bias in the estimate of the SEm (standard error of the mean)
Types of T tests
- Single Sample
- Independent
- Dependent
what does a single sample T test compare?
the mean of a sample against a known population mean
(actual mean difference between the sample and the population)
what does an independent t test compare?
samples independent from one another
Usually compares two different groups of people
What does a dependent t test compare?
Repeated measures
correlated samples (same subject tested twice)
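A minimal sketch of the three variants using SciPy; the sample arrays and the population mean of 50 are invented for illustration:

```python
# Hypothetical sketch of the three t test variants (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(52, 5, 20)             # sample for one condition
group_b = rng.normal(49, 5, 20)             # a second, independent group
posttest = group_a + rng.normal(1, 2, 20)   # same subjects measured again

# Single sample: sample mean vs. a known population mean (assumed to be 50 here)
print(stats.ttest_1samp(group_a, popmean=50))

# Independent: two different groups of people
print(stats.ttest_ind(group_a, group_b))

# Dependent (repeated measures / correlated samples): same subjects tested twice
print(stats.ttest_rel(group_a, posttest))
```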
What is the t-ratio?
-Signal to noise ratio
-signal = difference between means (numerator)
-Noise = standard error of the mean difference (denominator)
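As a formula sketch in standard textbook notation (independent-samples case shown; the symbols are assumed, not taken from the cards):

$$t = \frac{\text{signal}}{\text{noise}} = \frac{\bar{X}_1 - \bar{X}_2}{SE_{\bar{X}_1 - \bar{X}_2}}$$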
Important points about the T test
T test does not identify the causative factor
Purpose of a repeated measures t test
When subjects are tested more than once
e.g., pretest-posttest
What is the Size of Effect, why is it relevant?
The magnitude of difference
Just because an effect is statistically significant does not necessarily mean that the effect is meaningful.
SSW
Sum of Squares within: how much variation there is within each sample (group)
SSB
Sum of Squares between: difference between each group mean and the grand mean (mean of means)
(group mean - grand mean) squared for each group
how much variation there is between the samples
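A sketch of both formulas in standard notation, with k groups and n_j scores in group j (the weighting of SSB by group size is standard but not stated on the cards):

$$SS_W = \sum_{j=1}^{k}\sum_{i=1}^{n_j}\left(X_{ij} - \bar{X}_j\right)^2 \qquad SS_B = \sum_{j=1}^{k} n_j\left(\bar{X}_j - \bar{X}_{grand}\right)^2$$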
What is the F statistic
Test value for an ANOVA
What is the result if F stat is higher than F critical
Reject Ho accept Ha
What is alpha level
Probability of rejecting the null hypothesis when it's true (type I error)
What is the result if p is less than alpha?
Reject Ho
What does an ANOVA test?
the likelihood that the samples came from the same population
compares the means of two or more groups
What is the F - Ratio?
The ratio of the between group and within group variance
Why do an ANOVA instead of multiple T tests?
Running multiple T tests carries an increased risk of type I error
What is the F value if the null hypothesis is true?
F = 1
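A minimal one-way ANOVA sketch with SciPy; the three group arrays are invented example data:

```python
# One-way ANOVA sketch (invented data): compute the F statistic and p value.
from scipy import stats

group1 = [23, 25, 27, 22, 26]
group2 = [30, 31, 29, 33, 28]
group3 = [24, 26, 25, 27, 23]

f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f_stat, p_value)  # reject Ho if p < alpha (e.g., 0.05) or if F > F critical
```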
What is Scheffé's CI?
Most conservative Post Hoc Test
All possible comparisons, including combination comparisons (more than just pairwise)
Example of Scheffé's analysis
comparing the average of several different treatments against a control
Symbol for Scheffe’s CI?
I
what is “k”
number of groups
what is Falpha
F critical value from tables
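Putting k, F alpha, and MSe together, one common textbook form of Scheffé's comparison interval is shown below; the contrast coefficients c_j and group sizes n_j are standard notation assumed here, not taken from the cards:

$$I = \sqrt{(k-1)\,F_{\alpha}}\;\sqrt{MS_e \sum_{j=1}^{k}\frac{c_j^2}{n_j}}$$

A contrast such as the average of several treatments minus the control, $\hat{\psi} = \sum_j c_j\bar{X}_j$, is declared significant if $|\hat{\psi}| > I$.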
HSD
Tukey’s Honestly Significant Difference
-calculates the minimum raw score mean difference that must be attained to declare statistical significance between any two groups.
Difference Between HSD and I
Tukey’s does not make all possible comparisons, only makes pairwise
n (lowercase)
size of groups
*this assumes they are equal
q (Tukey's)
value from studentized range distribution
MSe
Mean square error value from the ANOVA analysis
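Combining q, MSe, and n, the usual form of Tukey's HSD (assuming equal group sizes n):

$$HSD = q_{\alpha,\,k,\,df_e}\sqrt{\frac{MS_e}{n}}$$

Any two group means whose raw difference meets or exceeds HSD are declared significantly different.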
Eta Squared
(η²) Same as R squared
The magnitude of effects of treatment
Eta Squared value of .52
52% of the total variance can be explained by the treatment effect
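As a formula sketch for the one-way case (the .52 figures below are just illustrative numbers):

$$\eta^2 = \frac{SS_{between}}{SS_{total}} \qquad \text{e.g. } \frac{52}{100} = .52 \;\Rightarrow\; 52\%\ \text{of total variance explained by the treatment}$$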
RM ANOVA
Repeated Measures ANOVA
within subjects design
same subjects measured two or more times
What is the T test analog of the RM ANOVA?
dependent t test
Assumptions of an RM ANOVA
Normality and Homogeneity
Sphericity
What does sphericity require?
the variances of the difference scores between all pairs of conditions are equal
What happens if sphericity is violated?
the type I error rate is inflated
Within Subjects Design
Repeated Measures ANOVA
Between Subjects Design
Single Factor ANOVA (one way)
Interindividual Variability
Variability between people in different groups
Intraindividual Variability
Variability in a person's scores
Sources of Variability
- Interindividual Variability
- Intraindividual Variability
- Variability between groups due to treatment effects
- Variability due to error (inter, intra, unexplained)
Variability due to error
unexplained variability
Result of eliminating interindividual variability
reduce mean square error in the denominator of the F ratio (like dependent t test)
SStotal
total sum of squares
SStime
variance due to differences between time periods
SSsubjects
Variance due to differences between subjects (its formula is weighted by t, the number of time periods)
F ratio for RM ANOVA
MStime / MSerror
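A sketch of the one-way RM ANOVA partition and F ratio in standard notation, with t time periods and n subjects:

$$SS_{total} = SS_{time} + SS_{subjects} + SS_{error}$$

$$F = \frac{MS_{time}}{MS_{error}} = \frac{SS_{time}/(t-1)}{SS_{error}/\big[(t-1)(n-1)\big]}$$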
Corrections to RM ANOVA
Greenhouse-Geisser (GG) adjustment
Huynh-Feldt (HF) adjustment
Greenhouse-Geisser Adjustment
WHEN VIOLATION IS SEVERE!
Adjusts the degrees of freedom for the RM ANOVA
- estimate of epsilon (sphericity)
correction for lack of sphericity
*assumes maximum violation
Huynh-Feldt Adjustment
WHEN VIOLATION OF SPHERICITY IS LESS SEVERE
Adjusts the degrees of freedom for the RM ANOVA
correction for violations of sphericity
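Both corrections work the same way: the RM ANOVA degrees of freedom are multiplied by an estimate of epsilon before the F critical value is looked up (standard notation, assumed rather than quoted from the cards):

$$df_{time} = \hat{\varepsilon}\,(t-1) \qquad df_{error} = \hat{\varepsilon}\,(t-1)(n-1), \qquad \tfrac{1}{t-1} \le \hat{\varepsilon} \le 1$$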
Sphericity
assumes that the variances of the differences between all combinations of related groups (levels) are equal. In simpler terms, it assumes that the spread or dispersion in one condition is the same in all other conditions.
When Can Post Hoc Tests be used in RM ANOVA
Tukey’s can be used when sphericity is not violated
What can be used in place of post hoc analysis of RM ANOVA if sphericity is violated?
dependent t test with bonferroni correction
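A rough sketch of that fallback using SciPy; the score matrix (subjects by time points) is invented example data:

```python
# Pairwise dependent t tests with a Bonferroni correction (invented data),
# used in place of a post hoc test when sphericity is violated.
from itertools import combinations
import numpy as np
from scipy import stats

scores = np.array([[10, 12, 15],
                   [ 9, 11, 14],
                   [11, 13, 13],
                   [ 8, 12, 16]])  # 4 subjects x 3 time points

alpha = 0.05
pairs = list(combinations(range(scores.shape[1]), 2))
adjusted_alpha = alpha / len(pairs)  # Bonferroni: divide alpha by the number of comparisons

for i, j in pairs:
    t_stat, p = stats.ttest_rel(scores[:, i], scores[:, j])
    print(f"time {i} vs time {j}: p = {p:.4f}, significant = {p < adjusted_alpha}")
```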
Factorial ANOVA
Analyzes the effects of multiple factors on the DV simultaneously (FACTORS AND LEVELS)
Main Effects
Individual factor F values
Interactions
F values for combinations of factors (whether the effect of one factor depends on the level of another)
A graph of an ANOVA with no interaction should show lines that are
Parallel (the effect of one factor is the same at every level of the other)
A graph of an ANOVA with an interaction should show lines that
Are not parallel; they converge, diverge, or cross (the effect of one factor depends on the level of the other)
Types of Factorial ANOVA
- Between- Between
- Between- Within (Mixed)
- Within – Within
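A rough between-between sketch with statsmodels (assumed available); the factor names drug and dose and all values are invented:

```python
# Between-between (2 x 2) factorial ANOVA sketch with invented data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "drug":  ["A", "A", "A", "A", "B", "B", "B", "B"] * 3,
    "dose":  ["low", "low", "high", "high"] * 6,
    "score": [5, 6, 9, 10, 4, 5, 6, 7,
              6, 5, 10, 9, 5, 4, 7, 6,
              5, 7, 9, 11, 4, 6, 6, 8],
})

# C(drug) * C(dose) expands to both main effects plus the drug:dose interaction.
model = ols("score ~ C(drug) * C(dose)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F values for main effects and the interaction
```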
What is an ANCOVA
Analysis of Covariance: adjusts the DV for the covariates, allowing you to assess the effect of the IV on the DV while controlling for the effects of the covariates
combo of regression and ANOVA
Covariate
A variable that might affect the DV but is not the variable of interest
Axis of DV
Y
Axis of IV
x
When do you use an ANCOVA?
When analyzing the effects of multiple IVs on the DV in the same model
- 1 or more interval/ratio IV or covariate (e.g., time to finish, a 1-10 performance rating)
- 1 or more nominal IV (e.g., treatment group)
ANCOVA assumptions
homogeneity of regression
- the slope between the covariate and the DV is similar across groups
If the slopes of a regression are not parallel, what does that mean for ANCOVA?
The homogeneity of regression assumption is violated and ANCOVA is not safe to use
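A rough ANCOVA sketch with statsmodels; group, pretest, and posttest are invented column names and data. It checks the homogeneity of regression assumption first:

```python
# ANCOVA sketch (invented data): effect of group on posttest, adjusting for pretest.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "group":    ["ctrl"] * 5 + ["tx"] * 5,
    "pretest":  [10, 12, 11, 13, 9, 10, 12, 11, 13, 9],
    "posttest": [12, 15, 13, 14, 11, 16, 18, 15, 20, 14],
})

# Homogeneity of regression check: the C(group):pretest interaction
# should be non-significant for ANCOVA to be appropriate.
check = ols("posttest ~ C(group) * pretest", data=df).fit()
print(sm.stats.anova_lm(check, typ=2))

# ANCOVA proper: group effect on the DV while controlling for the covariate.
ancova = ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```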
Reliability
does the test produce consistent, reproducible results
Inter rater reliability
are the outcomes consistent from researcher to researcher for the same subject
intrarater reliability
are the outcomes consistent if the same rater administers the test to a given subject
Test retest reliability
are test scores from the same subjects similar across multiple occasions of taking the test
ICC
reliability measurement
Intraclass Correlation Coefficient: reliability coefficient
true score variance / total variance
how consistent ratings are
what is required for ICC?
variance terms from RM ANOVA
SEM
Standard Error of Measurement
reliability measurement
measure of the precision of individual test scores
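One common textbook form of both quantities (notation assumed; SD is the standard deviation of the test scores):

$$ICC = \frac{\sigma^2_{true}}{\sigma^2_{true} + \sigma^2_{error}} \qquad SEM = SD\sqrt{1 - ICC}$$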
Nonparametric Tests
Distribution free
generally less powerful
do not make assumptions about the distributions of the population
used when data does not fit the criteria for parametric tests
Examples of nonparametric tests
Mann-Whitney U Test
Kruskal-Wallis H test
Spearman's Rank Correlation Coefficient
Chi-Square Test of Independence
What does a Chi Square Test compare?
two or more sets of NOMINAL data that have been arranged by frequency
tests for a significant association between two categorical variables
Spearman Rho
(ρ) ORDINAL
nonparametric equivalent to Pearson r
Mann-Whitney U test
quantifies the relationship between two sets of ordinal data
Nonparametric equivalent of the independent T test
Kruskal-Wallis ANOVA
Quantifies difference between more than two groups of RANKED data
Nonparametric equivalent to the one way ANOVA
Friedman’s two way ANOVA
Quantifes the difference betwen RANKED data when measured on subjects three or more times
Nonparametric equivalent to RM ANOVA
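A quick SciPy tour of the tests named above; every array and the 2x2 frequency table are invented example data:

```python
# Nonparametric test sketch with invented data.
from scipy import stats

a = [3, 5, 4, 6, 2]
b = [7, 8, 6, 9, 7]
c = [5, 6, 5, 7, 4]

print(stats.mannwhitneyu(a, b))          # analog of the independent t test
print(stats.kruskal(a, b, c))            # analog of the one-way ANOVA
print(stats.spearmanr(a, b))             # analog of Pearson r, for ranked/ordinal data
print(stats.friedmanchisquare(a, b, c))  # analog of the RM ANOVA

# Chi-square test of independence on a 2x2 table of observed frequencies.
table = [[20, 15],
         [10, 25]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p)
```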
IF THE P VALUE IS LESS THAN THE ALPHA LEVEL!!!!
REJECT HO!!!!!
Meta Analysis
Procedure that allows an investigator to statistically combine the results of multiple studies