Group difference Flashcards

1
Q

What are the two types of group when measuring group difference?

A
  1. Independent (mutually exclusive)

2. Dependent (mutually paired)

2
Q

What are examples of mutually paired (dependent) groups?

A
  1. Same person being measured twice

2. Two people bound in some way (husband-wife)

3
Q

Dependent groups can be different sizes, T/F

A

FALSE!

Think about it - same person getting measured twice etc

4
Q

Independent groups can be different sizes, T/F

A

TRUE!

But if they are different sizes it means the design is imbalanced.

5
Q

In independent group design, can a participant belong to more than one group?

A

No!

6
Q

What are the relevant assumptions when investigating mean differences between two INDEPENDENT groups (3)

A
  1. Observations are independent
  2. Observed scores are normally distributed
  3. Variances in the two groups are the same (homogeneity of variance assumption)
7
Q

With INDEPENDENT groups, there is a circumstance in which it doesn’t matter so much if assumption #3 (homogeneity of variance) is violated… what is that circumstance?

A

When the design is BALANCED

8
Q

How do you test homogeneity of variance assumption?

There are two ways, you know em

A
  1. Levene’s test

2. Fligner-Killeen’s test
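For flashcard purposes: both tests also exist outside R. A minimal sketch in Python, assuming SciPy is installed, with made-up data:

```python
# Sketch: testing homogeneity of variance with SciPy's versions of
# Levene's and the Fligner-Killeen test (illustrative data, not course data).
from scipy import stats

group1 = [4.1, 5.0, 3.8, 4.4, 4.9, 5.2, 4.0, 4.6]
group2 = [3.9, 4.8, 4.2, 5.1, 4.5, 4.3, 4.7, 5.0]

lev_stat, lev_p = stats.levene(group1, group2)   # Levene's test
fk_stat, fk_p = stats.fligner(group1, group2)    # Fligner-Killeen test

# Big p values = no evidence against homogeneity of variance.
print(f"Levene p = {lev_p:.3f}, Fligner-Killeen p = {fk_p:.3f}")
```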

9
Q

You’re doing independent or dependent group difference…

your observed scores are not normally distributed…

do you use standardised or unstandardised CIs…?

A

Unstandardised!

Ironically

These are robust against mild-to-moderate non-normality

10
Q

So you want to standardise your group differences…

there are two ways to do this, what are they?

A

Bonett’s delta (the ‘squiggle’)

Hedges’ g

11
Q

Of the two ways to standardise group difference - Hedges’ g and Bonett’s delta - both require the observations to be normally distributed.

But one of them also needs the variance to be homogeneous. Which one needs the homogeneity… of the variance…? WHICH ONE?!

A

Hedges’ g!

12
Q

When testing for homogeneity of variance using Fligner-Killeen and Levene’s, what are you actually looking for, actually…?

A

p values

And you want em to be big

13
Q

You’re looking at the Fligner-Killeen test result and it says p = .25… what does it mean?!

A

It means you have no evidence against homogeneity of variance, and you can finally relax

14
Q

You’re looking at the following results:

Levene’s: p = .04
Fligner-Killeen: p = .18

What should you do?

A

It’s inconclusive, so you should be conservative and assume HETEROgeneity of variance

15
Q

When doing DEPENDENT groups, what is the name of the score you care about

A

The Difference Score

16
Q

What are the relevant assumptions when investigating mean differences between two DEPENDENT groups (2)

A
  1. Observations are independent (across participants)

2. The difference scores are normally distributed

17
Q

Why don’t you need to worry about homogeneity of variance when doing DEPENDENT group comparisons?

A

Because the analysis is done on a single set of difference scores, so there is only one variance: there are no two group variances to be (un)equal

18
Q

When applying contrast weights, is it important to consider which weight to make positive and which to make negative, or is this decision entirely arbitrary?

A

It’s arbitrary…

19
Q

When using contrast weights and comparing group difference, what can you claim when your design is balanced and your contrast weights are orthogonal?

A

You can claim that…

  1. the mean differences do not overlap and
  2. do not contain redundancies
20
Q

When looking at OBSERVED mean difference scores for two independent groups, what’s the ACTUAL rule for when to look at the EQUAL vs UNEQUAL variance output from R?

A

Counterintuitively, the thing you actually have to look at is the normality of the distribution.

When the distribution is normal (or at least moderately so), you should read the ‘EQUAL’ output…

EXCEPT when the design is UNBALANCED and the variances are UNEQUAL (then read the ‘UNEQUAL’ output)
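Statistically, the ‘EQUAL variance’ output is Student’s t (pooled variance) and the ‘UNEQUAL variance’ output is Welch’s t. A sketch of the same pair in Python (SciPy assumed, made-up data, deliberately unbalanced groups):

```python
# Sketch: Student's t (pooled / 'EQUAL') vs Welch's t ('UNEQUAL'),
# selected in SciPy via the equal_var flag. Data are illustrative only.
from scipy import stats

group1 = [12.1, 13.4, 11.8, 14.0, 12.6, 13.1]                  # n = 6
group2 = [10.2, 11.5, 9.8, 12.8, 10.9, 11.1, 10.5, 11.9]       # n = 8 (unbalanced)

equal = stats.ttest_ind(group1, group2, equal_var=True)    # Student's t
unequal = stats.ttest_ind(group1, group2, equal_var=False)  # Welch's t

print(f"EQUAL:   t = {equal.statistic:.2f}, p = {equal.pvalue:.4f}")
print(f"UNEQUAL: t = {unequal.statistic:.2f}, p = {unequal.pvalue:.4f}")
```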

21
Q

When looking at STANDARDISED mean difference scores for two independent groups, what are the ACTUAL rules for when to look at Hedges’ g and Bonett’s delta, and when to just give up entirely?

A
  1. If the distribution is non-normal then just give up (your CIs won’t be robust)

Assuming your distribution is normal… then

  1. If variances are equal, go for Hedges’ g
  2. If variances are unequal, go for Bonett’s delta
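Hedges’ g itself is easy to compute by hand: Cohen’s d using the pooled SD, times the small-sample correction J = 1 - 3/(4*df - 1). A sketch in Python with made-up data (this mirrors the standard formula, not the course’s eff.ci() output):

```python
# Sketch: Hedges' g for two independent groups (pooled-SD standardiser,
# with the small-sample bias correction J = 1 - 3/(4*df - 1)).
import math

g1 = [22.0, 25.0, 27.0, 30.0, 24.0, 26.0]   # illustrative scores, group 1
g2 = [20.0, 21.0, 23.0, 22.0, 25.0, 19.0]   # illustrative scores, group 2

def mean(x): return sum(x) / len(x)
def var(x):  # sample variance (n - 1 denominator)
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

n1, n2 = len(g1), len(g2)
df = n1 + n2 - 2
pooled_sd = math.sqrt(((n1 - 1) * var(g1) + (n2 - 1) * var(g2)) / df)
d = (mean(g1) - mean(g2)) / pooled_sd   # Cohen's d
g = d * (1 - 3 / (4 * df - 1))          # Hedges' g (bias-corrected)
print(round(g, 3))
```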
22
Q

In R, what does the eff.ci() function give you?

A

Standardised mean differences for a two group (independent?) one way design… using contrast weights

23
Q

In R, when looking at the output of an eff.ci() function (for a two group one way design), what is the ‘observed mean contrast’

A

This is the OBSERVED difference between the two group means, combined using the contrast weights you made up

24
Q

What are two other terms for describing a ‘dependent groups’ design?

A
  1. a ‘within subjects’ design

2. a ‘repeated measures’ design

25
Q

What are two other terms for describing a ‘within subjects’ design?

A
  1. a ‘dependent groups’ design

2. a ‘repeated measures’ design

26
Q

What are two other terms for describing a ‘repeated measures’ design?

A
  1. a ‘within subjects’ design

2. a ‘dependent groups’ design

27
Q

Name two common applications of dependent groups / within subjects / repeated measure designs?

A
  1. multiple measures across time

2. a single group being measured after multiple stimuli (ie reactions to three different images)

28
Q

Name a circumstance in which a qqplot may not be helpful in ascertaining normality?

A

When the sample size is teeny weeny

29
Q

Sphericity means…

A

The variances of all possible difference scores between pairs of three or more within-subject conditions (or levels) being homogeneous at a population level.

30
Q

WTF is ‘compound symmetry’

A

A covariance matrix that has the same variance in each diagonal element and the same covariance in every off-diagonal element of the matrix.

PICTURE THE MATRIX
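You can literally picture the matrix. A sketch in Python (NumPy assumed; the variance and covariance values are made up):

```python
# Sketch: a compound-symmetry covariance matrix for 4 conditions:
# the same variance (here 2.0) on the diagonal, and the same
# covariance (here 0.8) in every off-diagonal cell.
import numpy as np

k, variance, covariance = 4, 2.0, 0.8
sigma = np.full((k, k), covariance)   # fill everything with the covariance
np.fill_diagonal(sigma, variance)     # then overwrite the diagonal

print(sigma)
```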

31
Q

How does ‘compound symmetry’ relate to any of this?

A

If you have compound symmetry, by definition you have sphericity

32
Q

Sphericity can be calculated from a covariance matrix, T/F

A

TRUE

33
Q

Sphericity as a population parameter is notated as…

A

Epsilon

34
Q

Epsilon’s lower bound value is what?

A

1 / (k - 1), where k is the number of levels in the within subjects factor
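To make the formula concrete (the standard lower bound is 1 divided by one fewer than the number of levels), a tiny sketch with a hypothetical helper:

```python
# Sketch: epsilon's lower bound is 1 / (k - 1) for k within-subject levels.
def epsilon_lower_bound(k: int) -> float:
    return 1 / (k - 1)

print(epsilon_lower_bound(3))  # 0.5 for three levels
print(epsilon_lower_bound(4))  # ~0.333 for four levels
```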

35
Q

In the context of dependent groups…

Perfect Sphericity equals…

A

1

that is, the smaller the number the lower the sphericity

36
Q

In the context of dependent groups…

What are the names of the two sphericity estimators?

A
  1. Greenhouse-Geisser (epsilon hat)

2. Huynh-Feldt (epsilon with tilde)

37
Q

In the context of dependent groups…

Greenhouse-Geisser and Huynh-Feldt are examples of what?

A

Sphericity estimators

38
Q

In the context of dependent groups…

Of the two sphericity estimators - Greenhouse-Geisser and Huynh-Feldt - which is the more conservative?

A

Greenhouse-Geisser (epsilon hat)

39
Q

In the context of dependent groups…

What are orthogonal polynomial contrasts, and what are they used for?

A

They are a default set of inbuilt contrasts in R

They are used when looking at changes over time to separate out the

  • linear
  • quadratic
  • cubic and
  • quartic trends
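R builds these via contr.poly() on an ordered factor; outside R you can construct the same kind of coefficients by orthogonalising powers of time. A sketch in Python (NumPy assumed; an illustration, not the course’s code):

```python
# Sketch: orthogonal polynomial contrast coefficients for 4 equally
# spaced time points, built by orthogonalising the columns 1, t, t^2, t^3.
import numpy as np

k = 4
t = np.arange(k, dtype=float)
V = np.vander(t, k, increasing=True)   # columns: 1, t, t^2, t^3
Q, _ = np.linalg.qr(V)                 # orthonormalise the columns

linear, quadratic, cubic = Q[:, 1], Q[:, 2], Q[:, 3]

# Each contrast sums to ~0 and is orthogonal to the others.
print(np.round(linear, 3))
print(np.round(quadratic, 3))
```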
40
Q

In the context of dependent groups…

Orthogonal polynomials provide a complete explanation of all the possible ways in which change is occurring

A

Okay

41
Q

In the context of dependent groups…

When we define the key variable as ‘ordered’, R will automatically generate polynomial contrast coefficients when undertaking an ANOVA

A

How interesting

42
Q

Precise meaning of ‘p value’

A

p = Pr(T ≥ T_obs | H0 is true)

ie the probability of a test statistic at least as extreme as the one observed, given that the null hypothesis is true

43
Q

Precise meaning of Type 1 error

A

Type 1 error = Pr(H0 is rejected | H0 = TRUE)

ie Falsely rejecting a true null hypothesis

44
Q

Using an Alpha criterion in NHST is done to control for which type of error?

A

Type 1 error (over the long run)
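The long-run idea can be simulated: if H0 really is true, then over many replications roughly alpha (5%) of NHSTs will falsely reject. A sketch in Python (SciPy assumed, data simulated):

```python
# Sketch: under a true H0, the long-run false rejection rate is ~alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n_sims, n = 0.05, 4000, 20

rejections = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)  # H0 is true: mean = 0
    p = stats.ttest_1samp(sample, popmean=0.0).pvalue
    rejections += (p < alpha)

print(f"False rejection rate: {rejections / n_sims:.3f}")  # close to 0.05
```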

45
Q

If the null hypothesis is false, is it possible to make a type 1 error?

A

No!

46
Q

A p value can be thought of as a measure of the consistency or compatibility of our sample data with the null-hypothesised population parameter.

A

Okay

47
Q

Does the p value tell us…

the observed effect is due to chance alone?

A

No!

The p value is calculated on the assumption that chance alone is operating, but does not indicate the probability of chance being the explanation.

48
Q

Tell me about how CIs relate to p values

A

A confidence interval calculated in a single sample defines the complete set of null hypothesised values that would not be rejected if used in a NHST test on the sample statistic.

49
Q

Tell me more about confidence intervals

A

It is in this sense that we can regard a confidence interval as containing a set of plausible values for the unknown population parameter value.

50
Q

And more and more about CIs

A

A confidence interval contains the same fundamental statistical information as a single NHST…but it just contains a lot more of the same type of information.

51
Q

But what else about CIs

A

A confidence interval provides us with a range of values for the unknown population parameter with which our data are compatible or consistent in the NHST sense.

52
Q

Does a confidence interval mean…

a 95% chance of capturing the true effect size

A
  • Any single confidence interval either captures the unknown population parameter value (i.e., the population size of effect), or it does not.
  • Over the long run, 95% of all confidence intervals calculated from independently replicated samples will contain the true effect size.
53
Q

Does a confidence interval mean…

Any values outside the interval are population effects that are ruled out

A

No!

  • Values not captured by the interval correspond to null-hypothesised values that would be rejected by a NHST.
  • But this does not mean these values can be definitely ruled out as population effect sizes.
  • Nor can it be said that values outside the interval have only a 5% chance of being true.
54
Q

Paul showed that when doing multiple NHSTs on multiple contrasts (ie NHSTs that are dependently related - don’t ask what that means), the chance of a false rejection OVERALL was higher than the alpha criterion. It was not as bad as when the NHSTs were independent, but it was still pretty high.

What was the characteristic of the thingo that impacted how high that false rejection rate would be?

A

The mean squared error (MSE)

55
Q

What is the term for the possible inflation of the false rejection error rate in the case of multiple NHST?

A

‘Curse of multiplicity’

56
Q

What curse is the Bonferroni correction a response to?

A

‘Curse of multiplicity’

57
Q

What are the names of the two categories of alpha value in the Bonferroni correction?

A
  • ‘per comparison’ alpha value

  • ‘family wise’ alpha value

58
Q

What is the ‘per comparison’ alpha value

A

The alpha value assigned to EACH NHST

59
Q

What is the ‘family wise’ alpha value

A

The general, overall alpha for a clutch of NHSTs

60
Q

How would you calculate the ‘per comparison’ alpha value from the ‘family wise’ alpha value?

A

Divide the latter by the number of tests you’re going to make
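The arithmetic is just division. A tiny sketch (hypothetical helper name):

```python
# Sketch: the Bonferroni correction splits the family-wise alpha
# evenly across the m tests in the family.
def per_comparison_alpha(family_wise_alpha: float, m: int) -> float:
    return family_wise_alpha / m

print(per_comparison_alpha(0.05, 5))  # 0.01 for a family of five NHSTs
```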

61
Q

When doing group difference analysis, which form requires sphericity, multi-variate or univariate?

A

Univariate

62
Q

In the context of INDEPENDENT group difference, if the design is unbalanced should you check extra hard for homogeneity of variance?

And how would you do that?

A

Yup totes

Using your two main guys Fligner-Killeen and Levene

63
Q

When does sphericity even matter tho?

A

When

  1. doing within subjects designs, and
  2. there are 3 or more groups
  3. and the approach is univariate
64
Q

In a within subjects design, if you are taking a multivariate approach do you need to worry about sphericity?

A

Nope!

65
Q

If the assumption of sphericity is met, which is better, univariate or multivariate?

A

UNIvariate, comrades

66
Q

When you are looking at lots of confusing things, and ‘Pillai’ is among them, what should you look at?

A

Pillai!

But why? Because Pillai’s trace is the most robust of the multivariate test statistics when assumptions are shaky.