Contrasts Flashcards

1
Q

What is the equation for familywise error rate?

A

If the error rate per comparison = alpha_pc
and the number of comparisons = c, then

alpha_FW = 1 - (1 - alpha_pc)^c
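As a quick numeric check, the formula can be computed directly (plain Python; the alpha and c values are chosen for illustration):

```python
def familywise_alpha(alpha_pc: float, c: int) -> float:
    # probability of at least one type 1 error across c independent comparisons
    return 1 - (1 - alpha_pc) ** c

# e.g. three comparisons at alpha_pc = 0.05 inflate the familywise rate to ~0.14
fw = familywise_alpha(0.05, 3)
```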

2
Q

A priori comparisons are what?

A

Comparisons that are planned before looking at the data; sometimes called planned contrasts

3
Q

Can one conduct multiple comparisons between individual group means if the omnibus F is non-significant?

A

Yes

  • the logic behind most of the multiple comparison procedures does not require overall significance first
  • requiring overall significance will change the familywise alpha, making the multiple comparison tests conservative
  • multiple comparisons often address the actual hypothesis more directly
  • some have argued that there seems little reason for applying the overall F test when planned multiple comparisons are being carried out
4
Q

When carrying out planned multiple comparisons, do you need to protect against type 1 error rate inflation?

A

Just because they are planned makes no difference to the problem of type 1 error rate inflation in multiple comparisons: PROTECTION IS ESSENTIAL!

If planned comparisons are a subset of all possible comparisons (i.e. you only plan to look where you think there will be an effect), then type 1 error inflation will be less than for post hoc comparisons, hence any correction will reduce power less. Thus planned comparisons are better than post hoc.

5
Q

What methods are there for a priori comparisons?

A
  • multiple t-tests
  • linear contrasts (orthogonal and non-orthogonal)
  • Bonferroni t-test (Dunn's test and its variations)
  • Dunn-Šidák test
  • multistage Bonferroni procedures:
    Holm's test
    Larzelere and Mulaik's test
6
Q

What are the benefits of running multiple t-tests as comparisons?

A

It is the simplest method of running planned comparisons between pairs of means

Only useful if the number of comparisons is limited and they are planned in advance

7
Q

When running multiple t-test comparisons, if homogeneity of variance holds, what do you use to evaluate t?

A

Use MSerror from the overall ANOVA

Evaluate t with DFerror degrees of freedom
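The card implies the usual pooled form of the pairwise test statistic; a minimal sketch with hypothetical means, MSerror, and group sizes:

```python
import math

def protected_t(mean_i, mean_j, ms_error, n_i, n_j):
    # pairwise t using the pooled error term (MSerror) from the overall ANOVA;
    # evaluate against the t distribution with DFerror degrees of freedom
    return (mean_i - mean_j) / math.sqrt(ms_error * (1 / n_i + 1 / n_j))

# hypothetical values: group means of 14 and 10, MSerror = 20, n = 10 per group
t = protected_t(14.0, 10.0, ms_error=20.0, n_i=10, n_j=10)  # t ≈ 2.0
```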

8
Q

When running multiple t-test comparisons, if heterogeneity of variance is found but you have equal group sizes, what do you use to evaluate t?

A

Use the sum of the individual sample variances instead of MSerror

Evaluate t with DF = 2(n-1)

9
Q

When running multiple t-test comparisons, if heterogeneity of variance is found and you have unequal group sizes, what do you use to evaluate t?

A

Use the individual sample variances

Evaluate t with DF given by the Welch-Satterthwaite solution

10
Q

Explain the difference between t-test comparisons and linear contrasts

A

t-tests: compare one mean with another mean

Linear contrasts: compare one mean, or a set of means combined, with another mean or set of means

11
Q

If you have high, medium and low doses of a treatment plus a placebo, and you want to run the comparison of treatment vs placebo, what coefficients could you use?

A

High = 1/3
Medium = 1/3
Low = 1/3
Placebo = -1
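With these weights, the contrast estimate is just the weighted sum of the group means; a sketch using the coefficients above and hypothetical group means:

```python
# hypothetical group means; the coefficients are from the card above
means   = {"high": 12.0, "medium": 10.0, "low": 8.0, "placebo": 6.0}
weights = {"high": 1/3, "medium": 1/3, "low": 1/3, "placebo": -1.0}

# psi = average of the three treatment means minus the placebo mean
psi = sum(weights[g] * means[g] for g in means)
```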

12
Q

What is the F test equal to?

A

t squared

13
Q

Explain the choice of coefficients used in a linear contrast

A
  • form the two sets of treatments which make up the two sides of the contrast
  • assign as weights to each group a fraction corresponding to the number of groups on its side: say you have 3 treatment groups, assign them all 1/3; if you have a placebo and a control on the other side, assign them both 1/2
  • then add a minus sign to one side, e.g. -1/2 and -1/2

14
Q

What are orthogonal contrasts?

A

They are a set of contrasts that are mutually independent of one another

The sums of squares of a complete set of orthogonal contrasts add up to the treatment sum of squares (this additive property does not hold for non-orthogonal contrasts)

15
Q

There are three criteria for orthogonal contrasts; what are they?

A
  1. The sum of each set of coefficients = 0
  2. The sum of the products of corresponding coefficients (a*b) across any two sets = 0
  3. The number of comparisons = DFtreat
16
Q

What are the simple rules for orthogonal contrasts?

A
  • if a group is singled out in one comparison, it should not reappear in another comparison
  • there is one fewer contrast than the number of groups (i.e. k-1 contrasts for k groups)
  • each contrast must compare only two “chunks” of variance
  • first comparison: compare all of the experimental groups with the control group or groups
  • successive comparisons: within the experimental or control groups

17
Q

What numbers could you assign to a five group (E1 E2 E3 C1 C2) orthogonal contrast ?

A

Contrast 1:
E1 E2 E3 = 2 2 2 ~ C1 C2 = -3 -3

Contrast 2:
E1 E2 = 1 1, E3 = -2

Contrast 3:
E1 = 1, E2 = -1

Contrast 4:
C1 = 1, C2 = -1

18
Q

How do you check the orthogonality of a set of contrasts?

A

Sum the cross-products of the coefficients for every pair of contrasts; if this equals 0, the contrasts are uncorrelated.

So you would draw a table and fill it in with all of your contrasts. Say you have 3 contrasts over 4 groups: you would work out contrasts 1 & 2, contrasts 1 & 3 and contrasts 2 & 3.

If contrast 1 = 3 -1 -1 -1
and contrast 2 = 0 2 -1 -1

To work out 1 & 2:
(3)(0) + (-1)(2) + (-1)(-1) + (-1)(-1) = 0 - 2 + 1 + 1 = 0

Then you would do the same for contrasts 1 & 3 and contrasts 2 & 3.
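The pairwise check can be automated. A minimal sketch using the two contrasts from the card plus a hypothetical third (0 0 1 -1) that completes an orthogonal set for four groups:

```python
from itertools import combinations

# the two contrasts from the card, plus a hypothetical third (0 0 1 -1)
# that completes an orthogonal set for four groups
contrasts = [
    [3, -1, -1, -1],
    [0, 2, -1, -1],
    [0, 0, 1, -1],
]

def cross_product_sum(a, b):
    # sum of cross-products of corresponding coefficients
    return sum(x * y for x, y in zip(a, b))

# every pair must sum to zero for the set to be orthogonal
all_orthogonal = all(cross_product_sum(a, b) == 0
                     for a, b in combinations(contrasts, 2))
```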

19
Q

What is Boole's inequality?

A

The probability of occurrence of at least one of a set of events can never exceed the sum of their individual probabilities; the Bonferroni correction sets bounds using this inequality.

For example: three comparisons, each with probability alpha = 0.05; the probability of at least one type 1 error can never exceed (0.05 + 0.05 + 0.05) = 0.15.

20
Q

If c = number of comparisons,
alpha_pc = the probability of a type 1 error per comparison,
and the adjusted alpha_pc = alpha_pc_adj,

the familywise alpha rate should be equal to or less than what?

A

alpha_FW ≤ c × alpha_pc

(Boole's inequality; hence setting alpha_pc_adj = alpha/c guarantees alpha_FW ≤ alpha)
21
Q

What test is very similar to the Bonferroni test?

A

The Dunn-Šidák test

Bonferroni test: alpha level / number of tests
e.g. 0.05/4 = 0.0125

Dunn-Šidák test: 1 - (1 - alpha)^(1/c)
e.g. 1 - (1 - 0.05)^(1/4) ≈ 0.0127
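The two corrections can be compared directly; the Šidák form follows from the familywise-error equation on card 1:

```python
alpha, c = 0.05, 4

bonferroni_alpha = alpha / c              # 0.05 / 4 = 0.0125
sidak_alpha = 1 - (1 - alpha) ** (1 / c)  # slightly less conservative, ~0.0127
```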

22
Q

What is the multistage Bonferroni (Holm) procedure?

A

A multistage procedure for multiple hypothesis tests that controls the FW error rate:

Calculate t for all contrasts of interest.

Arrange the t values in order of magnitude.

Check the largest t against the critical value in Dunn's table corresponding to c contrasts (= alpha/c).

If significant, the next largest statistic gets a correction based on (c-1) comparisons (= alpha/(c-1)).

Stop when a non-significant result is found.
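The same step-down logic is usually stated in terms of p-values (smallest first); a minimal sketch with illustrative p-values:

```python
def holm(p_values, alpha=0.05):
    # step down from the most significant result: the smallest p is tested
    # at alpha/c, the next at alpha/(c-1), ...; stop at the first failure
    c = len(p_values)
    order = sorted(range(c), key=lambda i: p_values[i])
    rejected = [False] * c
    for step, i in enumerate(order):
        if p_values[i] <= alpha / (c - step):
            rejected[i] = True
        else:
            break
    return rejected

# 0.001 passes 0.05/3; 0.03 then fails 0.05/2, so testing stops there
decisions = holm([0.001, 0.04, 0.03])
```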

23
Q

What has been a proposed alternative for controlling familywise error rate?

A

The false discovery rate (Benjamini and Hochberg, 1995)

It controls the expected proportion of falsely rejected hypotheses (type 1 errors) among the list of rejected null hypotheses.

V = the number of true null hypotheses rejected
R = the total number of hypotheses rejected

False discovery rate = V/R (defined as 0 if R = 0)

One wants to keep this value below a certain threshold.
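Benjamini and Hochberg's step-up procedure can be sketched as follows (the p-values are illustrative):

```python
def benjamini_hochberg(p_values, q=0.05):
    # find the largest rank k with p_(k) <= (k/m) * q, then reject the
    # k hypotheses with the smallest p-values
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k = rank
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected

decisions = benjamini_hochberg([0.01, 0.02, 0.30, 0.04])
```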

24
Q

What is the intuitive logic that underlies the false discovery rate?

A

If all tested null hypotheses are true, controlling the false discovery rate also controls the traditional familywise error rate.

When many of the tested null hypotheses are rejected, it is preferable to control the proportion of errors rather than the probability of making even one error.

We can bear more errors when many null hypotheses are rejected, but can tolerate fewer errors when fewer nulls are rejected.

25
Q

When many hypotheses are being tested, what is the best form of comparison?

A

Post hoc tests are usually more powerful than Bonferroni t-tests.

But there is a trade-off between controlling the familywise error rate and loss of statistical power:

  • a stricter criterion (very low alpha) reduces type 1 error
  • too conservative an alpha leads to type 2 error
26
Q

What are three relevant questions to ask when considering post hoc comparisons?

A
  • does the test control the type 1 error rate?
  • does the test control the type 2 error rate?
  • is the test reliable when the test assumptions of (M)ANOVA have been violated?

27
Q

There are several post hoc tests; list them

A

Fisher's least significant difference (LSD)

Studentised Newman-Keuls test

Tukey's honestly significant difference (HSD) test

Scheffé test

Ryan procedure (REGWQ)

28
Q

Describe Fisher's least significant difference (LSD) post hoc test

A

One of the oldest post hoc tests, also known as Fisher's protected t

Similar to multiple t-tests (the type 1 error rate is less well controlled)

Requires the overall ANOVA to be significant

29
Q

Describe the studentised Newman-Keuls test

A

Less control over the type 1 error rate, but performs well with a limited number of comparisons

Good statistical power

Lacks confidence intervals

30
Q

Describe Tukey's honestly significant difference (HSD) test

A

Possibly the safest test for multiple pairwise comparisons while keeping the familywise error rate down
Conservative
More powerful than Bonferroni for a larger number of comparisons, but less powerful for a smaller number

31
Q

Describe the Scheffé test

A

Unlike many post hoc tests, it is not restricted to pairwise comparisons

Valid for any (unplanned) comparison as long as it is expressible in contrast form (most flexible)

Very low statistical power

32
Q

Describe the Ryan procedure (REGWQ)

A

Stronger statistical power

Tighter control over type 1 error rate

More suitable for equal sample sizes

33
Q

List three practical issues that surround post hoc tests

A

Relatively robust against non-normality

Sensitive to unequal group sizes

Sensitive when population variances are different

34
Q

Which post hoc test do you choose for equal sample sizes and equal population variances?

A

Tukey's HSD test

Bonferroni to guarantee control over type 1 error

35
Q

Which post hoc test do you choose for unequal sample sizes?

A

Hochberg's GT2

36
Q

Which post hoc test do you choose for unequal population variances?

A

Games-Howell procedure

37
Q

What types of contrasts for factorial designs are there?

A

Trends

Simple main effects

Interaction contrasts

Simple interaction effects

Simple simple main effects

38
Q

In SPSS, what can you use to request post hoc tests or contrasts?

A

GLM univariate, multivariate and repeated measures each have buttons for
“Post hoc”
“Contrasts”

You can also use the LMATRIX (between-subjects contrasts) and MMATRIX (repeated measures) subcommands within GLM syntax

39
Q

SPSS has predefined between-subjects contrasts; what is a deviation contrast?

A

Each level except one compared to the overall effect

40
Q

SPSS has predefined between-subjects contrasts; what is a simple contrast?

A

Each level compared to the reference level

41
Q

SPSS has predefined between-subjects contrasts; what is a difference contrast?

A

Each level compared to the mean of the previous levels (“reverse Helmert”)

42
Q

SPSS has predefined between-subjects contrasts; what is a Helmert contrast?

A

Each level compared to the mean of the subsequent levels

43
Q

SPSS has predefined between-subjects contrasts; what is a repeated contrast?

A

Each level compared to the previous level

44
Q

SPSS has predefined between-subjects contrasts; what is a polynomial contrast?

A

Used for trends; n-1 available for a factor with n levels

45
Q

In a simple contrast output, SPSS also produces output for all of the contrasts combined; what is this the same as?

A

The same as the overall group effect

46
Q

What are 2 kinds of type 1 error rate?

A
  • per comparison error rate (PC) = probability of making a type 1 error on any comparison
  • familywise (or experimentwise) error rate (FW) = probability of making at least one type 1 error in a family of comparisons
47
Q

What are the nonparametric equivalents of the independent-groups t-test and one-way between-subjects ANOVA (2+ levels)?

A

Mann-Whitney U test / Wilcoxon rank-sum test

Kruskal-Wallis test (2+ levels)

48
Q

What are the nonparametric equivalents of the matched-samples (paired) t-test and one-way repeated-measures ANOVA (2+ levels)?

A

Wilcoxon matched-pairs signed-ranks test

Friedman analysis of variance by ranks

49
Q

How do you work out the t value from a contrast output in SPSS?

A

t = contrast estimate / standard error

SPSS already gives you the significance level

50
Q

There are some strict rules for writing out higher-order effect contrasts; what are these?

A
  1. The order in which we write the factors must be the same order as they are specified in the overall model.
  2. Then we must write out the coefficients for each combination of factors; their order is defined by the order of the factors, with the first factor changing most slowly.

E.g. if we write extrgp*reinforcement in the overall model (at the top of the syntax)

(extrgp = 2 levels & reinforcement = 3 levels)

the order would be
e1r1 e1r2 e1r3 e2r1 e2r2 e2r3

  3. Coefficients in higher-order effects must add up to the coefficients in the lower-order effects.
  4. To conform with point 3, we cannot use recurring decimals such as .3333; we must use 1/3.
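One way to generate cell-level coefficients in this order is the Kronecker (outer) product of per-factor coefficients, with the slower-changing factor first; a sketch assuming a hypothetical r1-vs-r3 reinforcement contrast:

```python
extr  = [1, -1]      # extrgp: e1 vs e2
reinf = [1, 0, -1]   # reinforcement: hypothetical r1 vs r3 contrast

# reinforcement varies fastest, matching the order e1r1 e1r2 e1r3 e2r1 e2r2 e2r3
interaction = [a * b for a in extr for b in reinf]
```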
51
Q

What are the higher-order and lower-order effects in contrasts?

A

Lower-order effects refer to just comparing simple groups

Higher-order effects refer to interactions

52
Q

When looking at the coefficients specified in syntax, how can you tell whether interactions or main effects are being calculated?

A

If the coefficients add up to zero, then only interactions are being calculated, not main effects

53
Q

If you are comparing high and low extroversion on reward in a task, and you predict that high extroverts in the reward group should increase in accuracy, you give high extroversion a coefficient of 1 and low extroversion a coefficient of -1; therefore high extroverts (as they were given a positive coefficient) should score higher than low extroverts (introverts). Where do you need to look in the SPSS output to check whether your results are as predicted?

A

The contrast estimate should be positive to reflect this bigger score in extroverts

54
Q

When testing a group main effect how can you work out how many contrasts you can have?

A

There are n-1 DF for the group factor, which can be broken down into n-1 single contrasts; so if you have 4 groups, that gives 3 DF and 3 contrasts

55
Q

What is a trend contrast? Give an example

A

A trend contrast is meaningful when the levels of the factor are ordered, i.e. have a meaningful relationship to each other.

The group factor might be age:
children, teenagers, adults and middle-aged adults.

Trends can tell the researcher if there is a significant linear increase or decrease in the DV as a function of group (age); this is known as a linear trend.

They can also show curvilinear trends (quadratic, cubic), which could reveal non-linear changes in the DV across the ordered age groups.

56
Q

What are simple main effects? Give an example

A

A simple main effect is the effect of one factor at a particular level of another factor in a factorial design.

In a 2x2 design (factors A and B) the four cells might be Ab, AB, ab, aB

The SME of factor A would compare Ab with ab (or AB with aB)
The SME of factor B would compare ab with aB (or Ab with AB)

SMEs are particularly useful in trying to establish the nature/location of a complex effect in a factorial design, in particular interactions between factors