Contrasts Flashcards
What is the equation for familywise error rate?
If the error rate per comparison = alpha_PC
and the number of comparisons = c, then
alpha_FW = 1 - (1 - alpha_PC)^c
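A minimal Python sketch of this formula, assuming the comparisons are independent:

```python
# Familywise error rate as a function of the number of comparisons c,
# assuming independent tests at alpha_PC = .05.
alpha_pc = 0.05

for c in [1, 3, 5, 10]:
    alpha_fw = 1 - (1 - alpha_pc) ** c
    print(f"c = {c:2d}: alpha_FW = {alpha_fw:.3f}")
# c = 10 gives alpha_FW of about .40: a 40% chance of at least one type 1 error.
```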
What are a priori comparisons?
Comparisons planned before looking at the data; sometimes called planned contrasts
Can one conduct multiple comparisons between individual group means if the omnibus F is non-significant?
Yes
- the logic behind most multiple comparison procedures does not require overall significance first
- requiring overall significance will change the familywise alpha, making the multiple comparison tests conservative
- multiple comparisons often address the actual hypothesis more directly
- some have argued that there seems little reason for applying the overall F test when planned multiple comparisons are being carried out
When carrying out planned multiple comparisons, do you need to protect against type 1 error rate inflation?
Yes. Being planned makes no difference to the problem of type 1 error rate inflation in multiple comparisons. PROTECTION IS ESSENTIAL!
If planned comparisons are a subset of all possible comparisons (i.e. you only plan to look where you think there will be an effect), then type 1 error inflation will be less than for post hoc comparisons, hence any correction will reduce power less; thus planned comparisons are better than post hoc ones.
What methods are there for a priori comparisons?
- multiple t-tests
- linear contrasts (orthogonal and non-orthogonal)
- Bonferroni t-test (Dunn's test and its variations)
- Dunn-Šidák test
- multistage Bonferroni procedures: Holm's test, and Larzelere and Mulaik's test
What are the benefits of running multiple t-tests as comparisons?
The simplest method of running planned comparisons between pairs of means
Only useful if the number of comparisons is limited and planned in advance
When running multiple t-test comparisons, if homogeneity of variance holds, what do you use to evaluate t?
Use MS_error from the overall ANOVA
Evaluate t with df_error degrees of freedom
When running multiple t-test comparisons, if heterogeneity of variance is found but you have equal group sizes, what do you use to evaluate t?
Use the sum of the individual sample variances instead of MS_error
Evaluate t with df = 2(n - 1)
When running multiple t-test comparisons, if heterogeneity of variance is found and you have unequal group sizes, what do you use to evaluate t?
Use the individual sample variances
Evaluate t with df given by the Welch-Satterthwaite solution
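As a sketch of the last case, scipy's Welch t-test (equal_var=False) applies the Welch-Satterthwaite df correction automatically; the data below are made up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10, 1, size=12)  # smaller variance, smaller n
group_b = rng.normal(12, 4, size=20)  # larger variance, larger n

# Welch's t-test: individual sample variances, Welch-Satterthwaite df
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```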
Explain the difference between t-test comparisons and linear contrasts
t-tests: compare one mean with another mean
Linear contrasts: compare one mean, or a set of means combined, with another mean or set of means
If you have high, medium and low doses of a treatment plus a placebo, and you wanted to run the comparison of treatment vs placebo, what numbers (coefficients) could you use?
High = 1/3
Medium = 1/3
Low = 1/3
Placebo = -1
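A minimal sketch of what these coefficients do (the group means are made-up numbers):

```python
import numpy as np

coeffs = np.array([1/3, 1/3, 1/3, -1.0])    # high, medium, low, placebo
means  = np.array([14.0, 12.0, 11.0, 9.0])  # hypothetical group means

# The contrast estimate is the mean of the treatment groups minus the
# placebo mean: psi = sum(c_i * mean_i)
psi = np.sum(coeffs * means)
print(psi)  # (14 + 12 + 11)/3 - 9 = 3.33
```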
What is the F test equal to?
t squared (for a 1-df contrast, F = t²)
Explain the choice of coefficients used in a linear contrast
- form the two sets of treatments which are the two sides of the contrast
- assign as weights to each group on one side a fraction corresponding to the number of groups in that set: say you have 3 treatment groups, then assign them all 1/3; if you have a placebo and a control on the other side, assign them both 1/2
- then add a minus sign to one side, e.g. -1/2 and -1/2
What are orthogonal contrasts?
They are a set of contrasts that are mutually independent of one another
The sums of squares of a complete set of orthogonal contrasts add up to the treatment sum of squares (this additive property does not hold for non-orthogonal contrasts)
There are three criteria for orthogonal contrasts. What are they?
- the sum of each set of coefficients = 0
- the sum of the products of the coefficients (Σ a*b) = 0
- the number of comparisons = df_treatment
What are the simple rules for orthogonal contrasts?
- if a group is singled out in one comparison, it should not reappear in another comparison
- one fewer contrast than the number of groups (i.e. k-1 contrasts for k groups)
- each contrast must compare only two "chunks" of variance
- first comparison: compare all of the experimental groups with the control group or groups
- successive comparisons: within the experimental or control groups
What numbers could you assign for a five-group (E1 E2 E3 C1 C2) set of orthogonal contrasts?
Contrast 1: E1 E2 E3 = 2 2 2 vs C1 C2 = -3 -3
Contrast 2: E1 E2 = 1 1 vs E3 = -2
Contrast 3: E1 = 1 vs E2 = -1
Contrast 4: C1 = 1 vs C2 = -1
How do you check the orthogonality of a set of contrasts?
Sum the cross-products of the coefficients for every pair of contrasts; if this equals 0, the contrasts are uncorrelated
So you would draw a table and fill it in with all of your contrasts: if you have 3 contrasts (e.g. from 4 groups), you work out contrasts 1 & 2, contrasts 1 & 3, and contrasts 2 & 3
If contrast 1 = 3 -1 -1 -1
and contrast 2 = 0 2 -1 -1,
to work out 1 and 2:
(3)(0) + (-1)(2) + (-1)(-1) + (-1)(-1) = 0 - 2 + 1 + 1 = 0
You would then do the same for contrasts 1 and 3, and contrasts 2 and 3
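A small helper that automates this check for a whole set of contrasts (here the five-group set from the earlier card, assuming equal group sizes):

```python
import itertools
import numpy as np

contrasts = np.array([
    [ 2,  2,  2, -3, -3],  # E1 E2 E3 vs C1 C2
    [ 1,  1, -2,  0,  0],  # E1 E2 vs E3
    [ 1, -1,  0,  0,  0],  # E1 vs E2
    [ 0,  0,  0,  1, -1],  # C1 vs C2
])

assert np.all(contrasts.sum(axis=1) == 0)           # each contrast sums to 0
for i, j in itertools.combinations(range(len(contrasts)), 2):
    assert np.dot(contrasts[i], contrasts[j]) == 0  # cross-products sum to 0
print("set is orthogonal")
```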
What is Boole's inequality?
The probability of occurrence of at least one of a set of events can never exceed the sum of their individual probabilities => the Bonferroni correction sets bounds using this inequality
So, for example:
three comparisons, each with probability alpha = 0.05; the probability of at least one type 1 error can never exceed 0.05 + 0.05 + 0.05 = 0.15
If c = the number of comparisons,
alpha_PC = the probability of a type 1 error per comparison,
and the adjusted alpha_PC = alpha_PC_adj,
then the familywise rate satisfies alpha_FW <= c × alpha_PC, so setting alpha_PC_adj = alpha/c keeps alpha_FW at or below alpha
What test is very similar to the Bonferroni test?
The Dunn-Šidák test
Bonferroni test: adjusted alpha = alpha level / number of tests,
e.g. 0.05/4 = 0.0125
Dunn-Šidák test: adjusted alpha = 1 - (1 - alpha)^(1/c),
e.g. 0.0127 for the same four tests
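A quick check of the two adjusted alphas for c = 4 tests at alpha = .05:

```python
alpha, c = 0.05, 4

bonferroni = alpha / c                   # 0.0125
dunn_sidak = 1 - (1 - alpha) ** (1 / c)  # 0.0127, slightly less conservative
print(f"Bonferroni: {bonferroni:.4f}  Dunn-Sidak: {dunn_sidak:.4f}")
```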
What is the multistage Bonferroni (Holm) procedure?
A procedure for multiple hypothesis tests that controls the FW error rate:
- calculate t for all contrasts of interest
- arrange the t values in increasing order
- check the largest t against the critical value in Dunn's table corresponding to c contrasts (= alpha/c)
- if significant, the next largest statistic has a correction based on (c-1) comparisons (= alpha/(c-1))
- stop when a non-significant result is found
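A minimal sketch of the same step-down logic on p-values rather than t values (the p-values are made up; the smallest p corresponds to the largest t):

```python
alpha = 0.05
p_values = sorted([0.001, 0.030, 0.040, 0.045])  # most significant first

c = len(p_values)
for k, p in enumerate(p_values):
    threshold = alpha / (c - k)  # alpha/c, then alpha/(c-1), ...
    if p > threshold:
        print(f"stop: p = {p} > {threshold:.4f}")
        break
    print(f"reject: p = {p} <= {threshold:.4f}")
```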
What has been a proposed alternative for controlling familywise error rate?
The false discovery rate (Benjamini and Hochberg, 1995)
It controls the expected proportion of falsely rejected hypotheses (type 1 errors) among the list of rejected null hypotheses
V = the number of true null hypotheses rejected; R = the total number of hypotheses rejected
False discovery rate = V/R (defined as 0 when R = 0)
One wants to keep this value below a certain threshold
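A minimal sketch of the Benjamini-Hochberg step-up rule (made-up p-values): reject the hypotheses with the i smallest p-values, where i is the largest rank with p_(i) <= (i/c) * q:

```python
q = 0.05  # target false discovery rate
p_values = sorted([0.001, 0.008, 0.039, 0.041, 0.20])

c = len(p_values)
cutoff = 0
for i, p in enumerate(p_values, start=1):
    if p <= (i / c) * q:
        cutoff = i  # largest rank satisfying the bound
print(f"reject the {cutoff} smallest p-values")  # here: 2
```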
What is the intuitive logic that underlies the false discovery rate?
If all tested null hypotheses are true, controlling the false discovery rate controls the traditional familywise error rate
When many of the tested null hypotheses are rejected, it is preferable to control the proportion of errors rather than the probability of making even one error
We can bear more errors when many null hypotheses are rejected, but can tolerate fewer errors when fewer nulls are rejected
When many hypotheses are being tested, what is the best form of comparison?
Post hoc tests are usually more powerful than Bonferroni t-tests
But there is a trade-off between controlling the familywise error rate and loss of statistical power:
- a stricter criterion (very low alpha) reduces type 1 errors
- too conservative an alpha leads to type 2 errors
What are three relevant questions to ask when considering post hoc comparisons?
- does the test control the type 1 error rate?
- does the test control the type 2 error rate?
- is the test reliable when the assumptions of (M)ANOVA have been violated?
There are several post hoc tests. List them.
Fisher's least significant difference (LSD) test
Studentised Newman-Keuls test
Tukey's honestly significant difference (HSD) test
Scheffé test
Ryan procedure (REGWQ)
Describe Fisher's least significant difference (LSD) post hoc test
One of the oldest post hoc tests, also known as Fisher's protected t
Similar to multiple t-tests (the type 1 error rate is less well controlled)
Requires the overall ANOVA to be significant
Describe the Studentised Newman-Keuls test
Less control over the type 1 error rate, but performs well with a limited number of comparisons
Good statistical power
Does not provide confidence intervals
Describe Tukey's honestly significant difference (HSD) test
Possibly the safest test for multiple pairwise comparisons while keeping the familywise error rate down
Conservative
More powerful than Bonferroni for a larger number of comparisons, but less powerful for a smaller number
Describe the Scheffé test
Unlike many post hoc tests, it is not restricted to pairwise comparisons
Valid for any (unplanned) comparison as long as it is expressible in contrast form (the most flexible)
Very low statistical power
Describe the Ryan procedure (REGWQ)
Stronger statistical power
Tighter control over the type 1 error rate
More suitable for equal sample sizes
List three practical issues that surround post hoc tests
Relatively robust against non-normality
Sensitive to unequal group sizes
Sensitive when population variances differ
Which post hoc test do you choose for equal sample sizes and equal population variances?
Tukey's HSD test
Bonferroni to guarantee control over type 1 error
Which post hoc test do you choose for unequal sample sizes?
Hochberg's GT2
Which post hoc test do you choose for unequal population variances?
The Games-Howell procedure
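In practice, Tukey's HSD (the equal-n, equal-variance choice above) can be run directly; a hedged sketch using statsmodels with made-up data:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(mu, 2, 15) for mu in (10, 12, 15)])
groups = np.repeat(["low", "medium", "high"], 15)

# All pairwise comparisons with the familywise error rate held at .05
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```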
What types of contrasts for factorial designs are there?
Trends
Simple main effects
Interaction contrasts
Simple interaction effects
Simple simple main effects
In SPSS, what can you use to request post hoc tests or contrasts?
GLM univariate, multivariate and repeated measures each have buttons for "Post hoc" and "Contrasts"
You can also use the LMATRIX (between-subjects contrasts) and MMATRIX (repeated-measures contrasts) subcommands within GLM syntax
SPSS has predefined between-subjects contrasts. What is a deviation contrast?
Each level except one is compared to the overall effect
SPSS has predefined between-subjects contrasts. What is a simple contrast?
Each level is compared to the reference level
SPSS has predefined between-subjects contrasts. What is a difference contrast?
Each level is compared to the mean of the previous levels ("reverse Helmert")
SPSS has predefined between-subjects contrasts. What is a Helmert contrast?
Each level is compared to the mean of the subsequent levels
SPSS has predefined between-subjects contrasts. What is a repeated contrast?
Each level is compared to the previous level
SPSS has predefined between-subjects contrasts. What is a polynomial contrast?
Used for trends; n-1 available for a factor with n levels
In a simple contrast output, SPSS also produces output for all of the contrasts combined. What is this the same as?
The overall group effect
What are 2 kinds of type 1 error rate?
- per-comparison error rate (PC) = the probability of making a type 1 error on any given comparison
- familywise (or experimentwise) error rate (FW) = the probability of making at least one type 1 error in a family of comparisons
What is the nonparametric equivalent of the independent-groups t-test and the one-way between-subjects ANOVA (2+ levels)?
Mann-Whitney U test / Wilcoxon rank-sum test
Kruskal-Wallis test (2+ levels)
What is the nonparametric equivalent of the matched-samples (paired) t-test and the one-way repeated measures ANOVA (2+ levels)?
Wilcoxon matched-pairs signed-ranks test
Friedman analysis of variance by ranks
How do you work out the t value from a contrasts output in SPSS?
t = contrast estimate / standard error
SPSS already gives you the significance level
There are some strict rules for writing out higher-order effect contrasts. What are these?
- The order in which we write the factors must be the same order as they are specified in the overall model
- We must then write out the coefficients for each combination of factors; their order is defined by the order of the factors, with the first factor being the one that changes most slowly.
E.g. if we write extrgp*reinforcement in the overall model (at the top of the syntax)
(extrgp = 2 levels & reinforcement = 3 levels)
the order would be
e1r1 e1r2 e1r3 e2r1 e2r2 e2r3
- Coefficients in higher-order effects must add up to the coefficients in the lower-order effects
- To conform with the previous point, we cannot use recurring decimals such as .3333; we must use 1/3
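A minimal sketch of building a coefficient row in this order, using the card's extrgp (2 levels) and reinforcement (3 levels) example; itertools.product cycles the first factor most slowly, and Fractions avoid recurring decimals:

```python
from fractions import Fraction
import itertools

extrgp        = [1, -1]               # e1 vs e2
reinforcement = [Fraction(1, 3)] * 3  # average over r1 r2 r3

# Order: e1r1 e1r2 e1r3 e2r1 e2r2 e2r3 (first factor changes most slowly)
coeffs = [e * r for e, r in itertools.product(extrgp, reinforcement)]
print([str(c) for c in coeffs])  # ['1/3', '1/3', '1/3', '-1/3', '-1/3', '-1/3']
```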
What are the higher and lower order effects in contrast?
The lower-order effects refer to just comparing simple groups
Higher-order refers to interactions
When looking at the coefficients specified in syntax, how can you tell whether interactions or main effects are being calculated?
If the coefficients add up to zero, then only interactions are being calculated, not main effects
If you are comparing high extroversion and low extroversion on reward in a task, and you predict that high extroverts in the reward group should increase in accuracy, you give high extroverts a coefficient of 1 and low extroverts a coefficient of -1; therefore high extroverts (as they were given the positive coefficient) should score higher than low extroverts (introverts). Where do you need to look in the SPSS output to check if your results are as predicted?
The contrast estimate should be positive to reflect the bigger score in extroverts
When testing a group main effect, how can you work out how many contrasts you can have?
There are n-1 df for the group factor, which can be broken down into n-1 single contrasts; so 4 groups = 3 df and 3 contrasts
What is a trend contrast? Give an example.
A trend contrast is meaningful when the levels of the factor are meaningful, i.e. they have a meaningful (ordered) relationship to each other
The group factor might be age:
children, teenagers, adults and middle-aged adults
Trends can tell the researcher if there is a significant linear increase or decrease in the DV as a function of group (age); this is known as a linear trend
It can also show curvilinear trends (quadratic, cubic), which could reveal non-linear changes in the DV across the ordered age groups
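For example, the standard orthogonal polynomial coefficients for 4 ordered groups (children, teenagers, adults, middle-aged adults) are:

```python
import numpy as np

linear    = np.array([-3, -1,  1,  3])  # steady increase or decrease
quadratic = np.array([ 1, -1, -1,  1])  # U-shaped / inverted-U trend
cubic     = np.array([-1,  3, -3,  1])  # S-shaped trend

# The three trend contrasts are mutually orthogonal:
print(linear @ quadratic, linear @ cubic, quadratic @ cubic)  # 0 0 0
```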
What are simple main effects? Give an example.
A simple main effect is the effect of one factor at a particular level of another factor in a factorial design.
A 2x2 design (factors A and B) has cells Ab, AB, ab and aB
The SME of factor A would compare Ab with ab (or AB with aB)
The SME of factor B would compare ab with aB (or Ab with AB)
SMEs are particularly useful in trying to establish the nature/location of a complex effect in a factorial design, in particular interactions between factors