Week 6 Flashcards
Explaining variance
Scores in experiments differ due to:
• Independent variable (IV): what we can control/measure
• Unmeasured variables: what we can't control/measure
Differences in outcomes/scores:
• Dependent variable (DV): the measure that tests the effect of the IV
Aim: explain variation in the DV using variation in the IV
Main designs
Repeated Measures / Within Subjects
- Comparing one group of people across time or under different conditions
Independent Groups / Between Subjects
- Comparing groups of different people
Repeated measures design
• Measure the same group under different conditions
• Each person acts as their own control
• Only source of difference in scores is the IV
• Focuses on difference scores -> how much does each individual change?
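The difference-score idea can be sketched in Python. All numbers below are made up for illustration; the calculation follows the repeated measures logic described above (each person is their own control, so we work with each person's change score):

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical before/after scores for the same five people (made-up data)
before = [10, 12, 9, 14, 11]
after = [13, 14, 10, 17, 12]

# Repeated measures focuses on difference scores: how much each person changes
diffs = [a - b for a, b in zip(after, before)]

m_d = mean(diffs)            # mean difference score
s_d = stdev(diffs)           # SD of the difference scores
se = s_d / sqrt(len(diffs))  # standard error of the mean difference
t = m_d / se                 # repeated measures t-value
```

With these made-up scores, every person improves, so the difference scores vary little and the t-value comes out large relative to the mean change.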
Significant vs Meaningful
- Knowing that a difference is unlikely to have arisen from sampling error alone is not enough
- Is it something worth knowing/worrying about?
- Reported using effect size
Effect Size
• The t-test will tell you if you have a statistically significant difference
• It does not necessarily tell you how large or important the difference is
• Complement your interpretation and reporting of a repeated measures t-test with a measure of effect size
• Cohen’s d - Measures the difference between means in standard deviation units
Cohen’s d
Interpretation of Cohen’s d
d = .20 indicates a small effect
d = .50 indicates a medium effect
d = .80 indicates a large effect
Remember, these numbers mean something
• The difference in standard deviation units
• These also differ from effect size conventions for correlations
Cohen’s d formula
d = M(D) / s(D), the mean of the difference scores divided by their standard deviation
• This Cohen's d formula is for repeated measures designs
• It can be negative; as with the t-test, the sign can be ignored (it just reflects the direction of subtraction)
• It can be greater than 1 (or less than -1)
• You can get the numbers to enter into the formula from the SPSS paired-samples t-test table, where the mean difference is divided by the standard deviation of the differences
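A minimal illustration of the formula with hypothetical difference scores (all numbers made up):

```python
from statistics import mean, stdev

# Hypothetical difference scores for five people (made-up data)
diffs = [3, 2, 1, 3, 1]

# Cohen's d for repeated measures: mean difference / SD of differences
d = mean(diffs) / stdev(diffs)
# Here d = 2.0: the means are two difference-score SDs apart,
# well beyond the d = .80 "large effect" benchmark
```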
Quasi-experiment
- It is set up like an experiment, but with no manipulation: the groups are pre-existing
- For example, we may want to know differences between genders, age groups, or occupations; we have no control over group membership
True experiment
- Bring in participants and randomly assign them to conditions
- e.g., an experimental group vs a control group; the researcher controls group membership by separating participants into groups
- For example, some participants would receive a drug while others would receive a placebo
Independent groups design
Small difference between groups:
• Could be sampling error with no effect of the IV
• Can't say it's the IV; it could be random chance
• e.g., the people in group 2 might happen to have slightly better ability
Independent groups t-test
- Because the two samples are independent, the calculations are different from the repeated measures t-test (although conceptually, still trying to find differences between two sets of data/means)
- It combines the variation in the two sets of scores to estimate standard error
- The t-value is simply the number of standard errors that the two means are apart by
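The steps above (combine the variation in both groups, then count how many standard errors separate the means) can be sketched with a pooled-variance calculation; all scores are made up:

```python
from statistics import mean, variance
from math import sqrt

# Hypothetical scores from two independent groups (made-up data)
group1 = [10, 12, 9, 14, 11]
group2 = [13, 15, 12, 16, 14]

n1, n2 = len(group1), len(group2)

# Combine the variation in the two sets of scores (pooled variance)
sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)

# Standard error of the difference between the two means
se = sqrt(sp2 * (1 / n1 + 1 / n2))

# t = number of standard errors the two means are apart
t = (mean(group1) - mean(group2)) / se
```

The sign of t only reflects which mean was subtracted from which; here it is negative because group2 has the higher mean.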
Assumptions
Most inferential statistical techniques require certain assumptions to be met
• If the assumptions are met, the result is a valid inference
• If not, the inference might not be statistically valid
If the assumptions are not met, your conclusion:
• might not be a fair summary of the sampled behaviour
• might not be true in the population
Independent groups t-test assumptions
- Normality
- Homogeneity of variance
Normality assumption
- Assumes the population of scores is normally distributed (matters most when n is small)
- Allows the mean and SD to be used as valid estimates of centre and spread
Homogeneity of variance assumption
- Homo = same, genus = type/kind
- Same kind of (or similar) variance in both groups
- Matters most when the groups are very different sizes
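One quick informal check of this assumption is to compare the two sample variances directly. The cut-off of 2 used below is a common rule of thumb, not a fixed standard, and the scores are made up:

```python
from statistics import variance

# Hypothetical scores from two independent groups (made-up data)
group1 = [10, 12, 9, 14, 11]
group2 = [13, 15, 12, 16, 14]

v1, v2 = variance(group1), variance(group2)

# Ratio of the larger variance to the smaller one
ratio = max(v1, v2) / min(v1, v2)

# Rule-of-thumb check (assumed threshold): a ratio well below ~2
# suggests the "similar variance in both groups" assumption is reasonable
similar_variance = ratio < 2
```

Formal tests of this assumption (such as Levene's test, reported by SPSS alongside the independent groups t-test) exist, but the ratio above conveys the same idea: the two groups should show a similar amount of spread.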