Week 1: secondary variables, invalidity, reporting Flashcards
What is the aim of experimental designs?
To establish a causal link between a variable (manipulated by the experimenter) and some measure of the participants' response
What are some ways of controlling extraneous variables?
Elimination: removing the variable entirely (sometimes not possible, so…)
Constancy: holding it constant across all conditions
Making it into an IV: deliberately measuring it and including it as an independent variable
Randomisation: randomly allocating participants so its effects are spread evenly across groups (see the sketch after this list)
Statistical adjustment: statistically controlling for it (e.g. as a covariate); more complex
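A minimal sketch of randomisation in practice, assuming a hypothetical list of participant IDs and two conditions (Python, standard library only):

```python
import random

# Hypothetical participant IDs (assumption: 8 participants, 2 conditions)
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

random.shuffle(participants)            # randomise the order
half = len(participants) // 2
control_group = participants[:half]     # first half -> control condition
treatment_group = participants[half:]   # second half -> treatment condition

print("Control:  ", control_group)
print("Treatment:", treatment_group)
```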
What are some sources of invalidity in experimental design? (change in DV may not be due to IV)
Proactive history: pre-existing subject variables, which you can minimise through elimination etc.
Retroactive history: events that happen during the experiment and affect the DV
Repeated testing: learning or sensitisation from previous tests
Statistical regression: extreme scores move closer to the mean on retesting; control groups help address this
Maturation: natural change
Loss of subjects: people drop out
Interaction effects: a previous treatment affects responses to subsequent treatments
Error in DV measurement
Experimenter bias
Errors in statistical inference
What are ANOVA and t tests used for?
To compare the differences in means
What is the null hypothesis?
Equal means
Difference = 0
μ1 = μ2 (= μ3 in ANOVA)
How do you report t-tests?
Means & SDs
Sample sizes
t(df) = value, p = value
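A minimal sketch of computing and reporting an independent-samples t-test, using SciPy and made-up illustrative data:

```python
import numpy as np
from scipy import stats

# Made-up scores for two independent groups (assumption: illustrative data only)
group1 = np.array([5.1, 6.2, 5.8, 7.0, 6.5, 5.9])
group2 = np.array([4.2, 4.8, 5.0, 4.5, 5.3, 4.1])

t, p = stats.ttest_ind(group1, group2)   # independent-samples t-test
df = len(group1) + len(group2) - 2       # df for the equal-variance test

# Report means, SDs, sample sizes, then t(df) = value, p = value
print(f"Group 1: M = {group1.mean():.2f}, SD = {group1.std(ddof=1):.2f}, n = {len(group1)}")
print(f"Group 2: M = {group2.mean():.2f}, SD = {group2.std(ddof=1):.2f}, n = {len(group2)}")
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```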
When there are only two groups, what is the relationship between the t and F statistics?
F = t²
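A quick numerical check of this relationship with made-up data: running a two-group t-test and a one-way ANOVA on the same two groups should give F equal to t squared.

```python
import numpy as np
from scipy import stats

# Made-up scores for two groups (assumption: illustrative data only)
group1 = np.array([5.1, 6.2, 5.8, 7.0, 6.5, 5.9])
group2 = np.array([4.2, 4.8, 5.0, 4.5, 5.3, 4.1])

t, _ = stats.ttest_ind(group1, group2)   # two-group t-test
F, _ = stats.f_oneway(group1, group2)    # one-way ANOVA on the same groups

print(f"t^2 = {t**2:.4f}, F = {F:.4f}")  # the two values should match
```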
When you have more than 2 means and conduct an ANOVA, what do you do if it is significant?
Need to do post hoc tests to see where the differences are
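One common post hoc option is Tukey's HSD; a minimal sketch using statsmodels and made-up data for three hypothetical groups:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up scores and group labels for three groups (assumption: illustrative data only)
scores = np.array([5.1, 6.2, 5.8, 4.2, 4.8, 5.0, 7.1, 6.8, 7.4])
groups = np.array(["A", "A", "A", "B", "B", "B", "C", "C", "C"])

# Tukey HSD compares every pair of group means while controlling familywise error
result = pairwise_tukeyhsd(scores, groups, alpha=0.05)
print(result.summary())
```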
How do you report an ANOVA?
Means, SDs, sample sizes
F(df between, df within) = value, p = value
Also describe the DV, the groups, and the experimental procedure, etc.
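A minimal sketch of computing and reporting a one-way ANOVA, using SciPy and made-up data for three hypothetical groups:

```python
import numpy as np
from scipy import stats

# Made-up scores for three independent groups (assumption: illustrative data only)
g1 = np.array([5.1, 6.2, 5.8, 7.0])
g2 = np.array([4.2, 4.8, 5.0, 4.5])
g3 = np.array([7.1, 6.8, 7.4, 7.9])

F, p = stats.f_oneway(g1, g2, g3)        # one-way between-groups ANOVA

k = 3                                    # number of groups
N = len(g1) + len(g2) + len(g3)          # total sample size
df_between, df_within = k - 1, N - k

# Report means, SDs, sample sizes, then F(df between, df within) = value, p = value
for name, g in [("Group 1", g1), ("Group 2", g2), ("Group 3", g3)]:
    print(f"{name}: M = {g.mean():.2f}, SD = {g.std(ddof=1):.2f}, n = {len(g)}")
print(f"F({df_between}, {df_within}) = {F:.2f}, p = {p:.3f}")
```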