Group difference Flashcards
What are the two types of group when measuring group difference?
1. Independent (mutually exclusive)
2. Dependent (mutually paired)
What are examples of mutually paired (dependent) groups?
1. Same person being measured twice
2. Two people bound in some way (e.g. husband-wife)
Dependent groups can be different sizes, T/F
FALSE!
Think about it - same person getting measured twice etc
Independent groups can be different sizes, T/F
TRUE!
But if they are different sizes it means the design is unbalanced.
In independent group design, can a participant belong to more than one group?
No!
What are the relevant assumptions when investigating mean differences between two INDEPENDENT groups (3)
- Observations are independent
- Observed scores are normally distributed
- Variances in the two groups are the same (homogeneity of variance assumption)
With INDEPENDENT groups, there is a circumstance in which it doesn’t matter so much if assumption #3 (homogeneity of variance) is violated… what is that circumstance?
When the design is BALANCED
How do you test homogeneity of variance assumption?
There are two ways, you know em
1. Levene’s test
2. The Fligner-Killeen test
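A minimal R sketch (made-up data and column names); leveneTest() comes from the car package, fligner.test() lives in base R’s stats:

```r
# Hypothetical two independent groups
set.seed(1)
dat <- data.frame(
  score = c(rnorm(30, mean = 100, sd = 15), rnorm(30, mean = 105, sd = 15)),
  group = factor(rep(c("control", "treatment"), each = 30))
)

# Levene's test (car package): a big p means no evidence against equal variances
library(car)
leveneTest(score ~ group, data = dat)

# Fligner-Killeen test (base stats): same interpretation
fligner.test(score ~ group, data = dat)
```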
You’re doing independent or dependent group difference…
your observed scores are not normally distributed…
do you use standardised or unstandardised CIs…?
Unstandardised!
Ironically
These are robust against mild-to-moderate non-normality
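A small sketch of what that looks like in R with made-up scores; the CI that t.test() reports is on the raw (unstandardised) scale:

```r
set.seed(1)
g1 <- rnorm(30, 100, 15)   # made-up group 1 scores
g2 <- rnorm(30, 105, 15)   # made-up group 2 scores

# Unstandardised mean difference with its 95% CI
# (t.test() defaults to the Welch, unequal-variance version)
t.test(g1, g2, conf.level = 0.95)$conf.int
```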
So you want to standardise your group differences…
there are two ways to do this, what are they?
Bonett’s δ (aka the ‘squiggle’)
Hedges’ g
Of the two ways to standardise a group difference, Hedges’ g and Bonett’s δ, both require the observations to be normally distributed.
But one of them also needs the variances to be homogeneous. Which one needs the homogeneity… of the variance…? WHICH ONE?!
Hedges’ g!
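A rough sketch of the two point estimates (no CIs, made-up data and function name), just to show why Hedges’ g is the fussy one: it pools the two variances into a single standardiser, whereas Bonett’s δ averages them without assuming they’re equal:

```r
# Point estimates only: Hedges' g pools the variances, Bonett's delta averages them
std_mean_diff <- function(x1, x2) {
  n1 <- length(x1); n2 <- length(x2)
  m1 <- mean(x1);   m2 <- mean(x2)
  v1 <- var(x1);    v2 <- var(x2)

  s_pooled <- sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
  J <- 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample bias correction
  g <- J * (m1 - m2) / s_pooled             # Hedges' g

  delta <- (m1 - m2) / sqrt((v1 + v2) / 2)  # Bonett's 'squiggle'

  c(hedges_g = g, bonett_delta = delta)
}

set.seed(1)
std_mean_diff(rnorm(30, 100, 15), rnorm(30, 105, 25))
```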
When testing for homogeneity of variance using Fligner-Killeen and Levene’s, what are you actually looking for, actually…?
p values
And you want em to be big (non-significant, i.e. p > .05, so you don’t reject equal variances)
You’re looking at the Fligner-Killeen test result and it says p = .25… what does it mean?!
It means there’s no evidence against homogeneity of variance (p > .05), so you can finally relax
You’re looking at the following results:
Levene’s: p = .04
Fligner-Killeen: p = .18
What should you do?
It’s inconclusive, so you should be conservative and assume HETEROgeneity of variance
When doing DEPENDENT groups, what is the name of the score you care about
The Difference Score
What are the relevant assumptions when investigating mean differences between two DEPENDENT groups (2)
1. Observations are independent
2. Observed scores are normally distributed (in practice, the difference scores)
Why don’t you need to worry about homogeneity of variance when doing DEPENDENT group comparisons?
Because you analyse a single set of difference scores, so there’s only one variance in play; there’s no pair of group variances that could be unequal
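A quick R sketch with made-up paired scores, showing the paired analysis collapsing to a one-sample analysis of the difference scores:

```r
set.seed(1)
pre  <- rnorm(25, 100, 15)        # made-up time-1 scores
post <- pre + rnorm(25, 3, 5)     # made-up time-2 scores (same people)

d <- post - pre                   # THE difference score

# The normality assumption applies to these difference scores
qqnorm(d); qqline(d)

# Paired t-test CI == one-sample CI on the difference scores
t.test(post, pre, paired = TRUE)$conf.int
t.test(d)$conf.int                # identical interval
```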
When applying contrast weights, is it important to consider which weight to make positive and which to make negative, or is this decision entirely arbitrary?
It’s arbitrary… flipping which weight is positive just flips the sign of the contrast, as long as you interpret the direction accordingly
When using contrast weights and comparing group differences, what can you claim when your design is balanced and your contrast weights are orthogonal?
You can claim that…
- the mean differences do not overlap and
- do not contain redundancies
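A tiny R sketch (made-up weights for a hypothetical three-group design) of what ‘orthogonal’ means here:

```r
w1 <- c(1, -1, 0)       # contrast 1: group 1 vs group 2
w2 <- c(0.5, 0.5, -1)   # contrast 2: average of groups 1 & 2 vs group 3

sum(w1); sum(w2)        # each contrast's weights sum to zero

# In a BALANCED design, the contrasts are orthogonal (non-redundant)
# when the sum of the products of their weights is zero
sum(w1 * w2)            # 0 -> orthogonal
```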
When looking at OBSERVED mean difference scores for two independent groups, what’s the ACTUAL rule for when to look at the EQUAL vs UNEQUAL variance output from R?
Counterintuitively, the thing you actually have to look at is the normality of the distribution.
When the distribution is normal (or at least, moderately normal), then you should read the ‘EQUAL’ output…
EXCEPT when the design is UNBALANCED and the variance is UNEQUAL (in that case, read the ‘UNEQUAL’ output)
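A sketch with made-up scores, assuming the ‘EQUAL’/‘UNEQUAL’ labels in the output correspond to the pooled (Student) and Welch intervals that t.test() produces:

```r
set.seed(2)
g1 <- rnorm(40, 100, 15)   # made-up scores, n = 40 per group
g2 <- rnorm(40, 106, 25)

# 'EQUAL' output: classic pooled-variance (Student) interval
t.test(g1, g2, var.equal = TRUE)$conf.int

# 'UNEQUAL' output: Welch interval, the one to read when the design is
# unbalanced AND the variances look unequal
t.test(g1, g2, var.equal = FALSE)$conf.int
```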
When looking at STANDARDISED mean difference scores for two independent groups, what are the ACTUAL rules for when to look at Hedges’ g and Bonett’s δ, and when to just give up entirely?
- If the distribution is non-normal then just give up (your CIs won’t be robust)
Assuming your distribution is normal… then
- If variances are equal, go for Hedges’ g
- If variances are unequal, go for Bonett’s δ
In R, what does the eff.ci() function give you?
Standardised mean differences for a two group (independent?) one way design… using contrast weights
In R, when looking at the output of an eff.ci() call (for a two group one way design), what is the ‘observed mean contrast’?
This is the OBSERVED difference between the two group means, combined using your contrast weights
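eff.ci() looks like a course-provided helper, so rather than guess its internals, here is a hand-rolled sketch (made-up data and weights) of what the observed mean contrast is:

```r
set.seed(3)
g1 <- rnorm(30, 100, 15)    # made-up group 1 scores
g2 <- rnorm(30, 108, 15)    # made-up group 2 scores

w <- c(1, -1)                      # contrast weights: group 1 minus group 2
group_means <- c(mean(g1), mean(g2))

sum(w * group_means)               # the observed mean contrast
# with weights c(1, -1) this is just mean(g1) - mean(g2)
```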
What are two other terms for describing a ‘dependent groups’ design?
1. a ‘within subjects’ design
2. a ‘repeated measures’ design
What are two other terms for describing a ‘within subjects’ design?
1. a ‘dependent groups’ design
2. a ‘repeated measures’ design
What are two other terms for describing a ‘repeated measures’ design?
1. a ‘within subjects’ design
2. a ‘dependent groups’ design