Extras Flashcards
What is the square of the standard deviation?
The variance
Levene's test for equality of variances is an assumption for what?
Independent samples T -test
One- way ANOVA
It should not be significant. If it is significant, report the second row of the output (equal variances not assumed).
How do you get a one tailed probability from your p value?
Divide it by 2 (provided the effect is in the predicted direction)
What is the error term used as the standard deviation of the sampling distribution of the mean difference in a t-test called?
The standard error of the mean difference (SEDMest)
SEDMest = square root of (variance1/n1 + variance2/n2)
The sample variances are used, which makes this error term an estimate (population variances are rarely known)
This is used to check how likely our observed mean difference is under the null hypothesis
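As a quick illustration (the data are made up, not from the course materials), the unpooled SEDMest and the resulting t statistic can be computed directly from the formula above:

```python
# Sketch of SEDMest = sqrt(s1^2/n1 + s2^2/n2), using two hypothetical samples.
import math
import statistics

group1 = [4.0, 5.0, 6.0, 5.5, 4.5]
group2 = [6.0, 7.0, 8.0, 7.5, 6.5, 7.0]

var1 = statistics.variance(group1)  # sample variance (n - 1 denominator)
var2 = statistics.variance(group2)

sed = math.sqrt(var1 / len(group1) + var2 / len(group2))
t = (statistics.mean(group1) - statistics.mean(group2)) / sed
print(round(sed, 3), round(t, 3))  # 0.456 -4.382
```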
What is the pooled error term?
This is used when the sample sizes are unequal in an independent-samples t-test.
This is because each sample contributes differently to the estimate of the variance of the sampling distribution, so one sample may give you more information than the other.
The pooled term weights the variances by the degrees of freedom
How to calculate the DF for the error term for a simple t-test
Two parameters have been estimated (variance1 and variance2)
So you subtract 2 from the total sample size
n1 + n2 - 2
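A minimal sketch of the pooled term, assuming the standard df-weighted formula s_p^2 = ((n1 - 1)s1^2 + (n2 - 1)s2^2) / (n1 + n2 - 2), with made-up data:

```python
# Pooled variance: each sample variance is weighted by its degrees of freedom.
import math
import statistics

group1 = [4.0, 5.0, 6.0, 5.5, 4.5]          # n1 = 5
group2 = [6.0, 7.0, 8.0, 7.5, 6.5, 7.0]     # n2 = 6

n1, n2 = len(group1), len(group2)
s1, s2 = statistics.variance(group1), statistics.variance(group2)

pooled_var = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
sed_pooled = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
print(round(pooled_var, 4), round(sed_pooled, 4))  # 0.5556 0.4513
```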
What are the assumptions of independent samples t-test?
Population
- normally distributed
- have the same variance
Sample
- independent, no two measures are drawn from the same participant
- independent random sampling (no choosing of respondents in any kind of systematic basis)
Data (DV scores)
- at least 2 observations per sample (factor level)
- measured using a continuous scale (interval or ratio)
- homogeneity of variance: Levene's test (with equal sample sizes, heterogeneity of variance and mild non-normality are not a problem, e.g. the dice example)
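The idea behind Levene's test can be sketched in a few lines. This is a simplified illustration of the mean-centred variant (a one-way ANOVA run on each score's absolute deviation from its own group mean); the function name is hypothetical, and in practice you would use a statistics package rather than hand-rolling it:

```python
# W statistic: between-groups MS over within-groups MS of |x - group mean|.
import statistics

def levene_w(*groups):
    """Levene's W for two or more groups (mean-centred variant)."""
    z = [[abs(x - statistics.mean(g)) for x in g] for g in groups]
    k = len(z)                                  # number of groups
    n_total = sum(len(g) for g in z)
    grand = sum(sum(g) for g in z) / n_total
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in z)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in z)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Identical spreads give W = 0; unequal spreads give W > 0.
print(levene_w([1, 2, 3], [1, 2, 3]))  # 0.0
```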
If the groups are skewed in opposite directions, you may be forced to use a…
Non-parametric alternative
Mann-Whitney U test, also known as the Wilcoxon rank-sum test
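For intuition, the Mann-Whitney U statistic can be computed by the pair-counting definition: U for one sample is the number of pairs in which that sample's value beats the other sample's value, with ties counting one half. This is a sketch with made-up data, not a full test with p-values:

```python
# Pair-counting form of the Mann-Whitney U statistic.
def mann_whitney_u(a, b):
    """U for sample a: (a, b) pairs where a wins; ties count 1/2."""
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

g1 = [4.0, 5.0, 6.0, 5.5, 4.5]
g2 = [6.0, 7.0, 8.0, 7.5, 6.5, 7.0]

u1 = mann_whitney_u(g1, g2)
u2 = mann_whitney_u(g2, g1)
# The two U values always sum to n1 * n2.
print(u1, u2, u1 + u2 == len(g1) * len(g2))  # 0.5 29.5 True
```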
Parametric tests are…
Calculated using an estimate of the population parameters from the samples
More restrictive, because a range of assumptions must be met. However, they are generally robust to violations.
In addition they are more powerful, so we generally use a parametric test unless the assumptions are not met.
What is the F ratio?
F = MS effect / MS error
MS = mean square
The F ratio is a ratio of the systematic variance (i.e. your experimental manipulation) to the unsystematic variance
If you square a t statistic what do you get?
F with (1, df) degrees of freedom, i.e. t² = F(1, df)
Conventionally ANOVA is never one-tailed, so choose a t-test if you want a directional test
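The t² = F identity can be checked numerically (made-up data; the pooled-variance t is assumed, since that is the form the identity holds for):

```python
# Squaring the pooled two-sample t gives the one-way ANOVA F for two groups.
import statistics

g1 = [4.0, 5.0, 6.0, 5.5, 4.5]
g2 = [6.0, 7.0, 8.0, 7.5, 6.5, 7.0]
n1, n2 = len(g1), len(g2)

# Pooled two-sample t statistic
sp2 = ((n1 - 1) * statistics.variance(g1) +
       (n2 - 1) * statistics.variance(g2)) / (n1 + n2 - 2)
t = (statistics.mean(g1) - statistics.mean(g2)) / (sp2 * (1/n1 + 1/n2)) ** 0.5

# One-way ANOVA on the same two groups
grand = statistics.mean(g1 + g2)
ms_effect = (n1 * (statistics.mean(g1) - grand) ** 2 +
             n2 * (statistics.mean(g2) - grand) ** 2) / (2 - 1)  # df effect = k - 1
ss_error = (sum((x - statistics.mean(g1)) ** 2 for x in g1) +
            sum((x - statistics.mean(g2)) ** 2 for x in g2))
ms_error = ss_error / (n1 + n2 - 2)                              # df error = N - k
F = ms_effect / ms_error

print(round(t ** 2, 6) == round(F, 6))  # True
```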
When you run a one way ANOVA and you get the output table the mean square box relating to the between groups = what?
MS effect
Mean square
When you run a one way ANOVA and you get the output table the mean square box relating to the within groups = what?
MS error
Mean square
How to calculate the DF for the MS effect?
Number of groups - 1
What is the MSerror term?
It is a pooled variance term, it’s a weighted average of the k sample variances
Aka MS within as it estimates variance within groups
An estimate of the error variance in the population, whether the null is true or false
What is MS effect?
Variance of the k sample means multiplied by n (the number of participants per group, with equal group sizes)
Estimated variance among, or between means
An estimate of the population variance when the null is true
So if the null is true, expected MS effect = expected MS error, and F ≈ 1
If the null is false, expected MS effect > expected MS error, and F > 1
How do you calculate the DF total for the whole ANOVA?
N - 1
N = total number of participants
How do you calculate DF effect?
K - 1
K = number of groups
How do you calculate DF error?
K (n-1)
k = number of groups, n = participants per group (when group sizes are equal)
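The three df formulas fit together: df total always splits into df effect plus df error. A quick sketch with hypothetical numbers (k = 3 groups of n = 10):

```python
# ANOVA degrees-of-freedom bookkeeping for k equal groups of size n.
k, n = 3, 10
N = k * n                 # total participants

df_total = N - 1          # 29
df_effect = k - 1         # 2
df_error = k * (n - 1)    # 27, equivalently N - k

print(df_total, df_effect, df_error, df_total == df_effect + df_error)  # 29 2 27 True
```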
What are the extra assumptions for ANOVA?
Homogeneity of population variances (the variance in each of the k populations sampled is the same). Tested with Levene's test (equality of error variances; we want this to be non-significant). Box's test can also be used, but it is conservative, so you could instead transform or trim the raw data.
Robustness of the above: with equal sample sizes and reasonably large groups (e.g. n > 30), ANOVA is fairly robust to violations of these assumptions
Independence of observations (each observation is independent of every other, we randomly sample/assign, error terms are independent)
What is an omnibus test?
Any test resulting from the preliminary partitioning of variance in ANOVA, but doesn’t tell us where the effect lies.
Why is the probability of making a type 1 error generally higher for post hoc comparisons than for a priori comparisons?
As you are usually making more comparisons
What is a type 1 error?
False positive: finding an effect that isn't real
There are two types of type 1 error rate. What are they?
Per-comparison rate (alpha PC, or simply alpha) = probability of a type 1 error on any single comparison (e.g. .05)
Familywise error rate (alpha FW) = probability of making at least 1 type 1 error in a family (or set) of comparisons.
Alpha FW = 1 - (1 - alpha PC)^c
Where c is the number of comparisons made and the comparisons are assumed to be independent
Explain the familywise error rate through the example of a dice
Think of each comparison as being like rolling a fair die and imagine a type 1 error is a 6
Each comparison is one roll
What are the odds of getting a 6 on the first roll?
(1/6, and 5/6 of not)
Make another roll (NB each roll of the die is independent of any other)
What are the odds now? (1/6 and 5/6, the same)
So what are the odds of not getting a 6 at all over the 2 rolls
(5/6) * (5/6) = .69 or 69%
So the chance of getting at least one 6 = 31%
(Over 20 throws there is a very high chance of getting at least one 6, about 97%, which demonstrates how the type 1 error rate grows with the number of comparisons)
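The die analogy maps directly onto the familywise formula above. A small sketch (the function name is just for illustration):

```python
# alpha_FW = 1 - (1 - alpha_PC)^c, for c independent comparisons.
def familywise_rate(alpha_pc: float, c: int) -> float:
    """Probability of at least one type 1 error over c independent comparisons."""
    return 1 - (1 - alpha_pc) ** c

print(round(familywise_rate(1 / 6, 2), 2))   # 0.31 (at least one 6 in 2 rolls)
print(round(familywise_rate(1 / 6, 20), 2))  # 0.97 (at least one 6 in 20 rolls)
print(round(familywise_rate(0.05, 10), 2))   # 0.4  (10 comparisons at alpha = .05)
```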
Most methods use a correction which maintains the familywise alpha at the nominal level (e.g. .05)
The Bonferroni correction evaluates each t at a more conservative level:
Alpha PC = alpha / c (where c is the number of comparisons)
A related version is the Šidák (or Dunn-Šidák) correction (especially for t-tests):
Alpha PC = 1 - (1 - alpha)^(1/c)
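The two corrections give very similar per-comparison alphas; Bonferroni is slightly more conservative. A quick comparison for 5 comparisons at alpha = .05:

```python
# Bonferroni vs Šidák per-comparison alphas for c comparisons.
alpha, c = 0.05, 5

bonferroni = alpha / c                  # 0.01
sidak = 1 - (1 - alpha) ** (1 / c)      # ~0.0102, slightly less strict

print(round(bonferroni, 4), round(sidak, 4))  # 0.01 0.0102
```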