Stats Flashcards

1
Q

what are factorial designs

A

one dependent variable

two or more independent variables

2
Q

an example of a two way factorial design

A
1 DV
2 IV
eg DV - time taken to get to work
IV - time of day
IV - mode of transport
3
Q

an example of a three way factorial design

A
1 DV
3 IV
eg DV - proportion recognised
IV - diagnosis
IV - season
IV - stimuli
4
Q

factorial designs

why are more complex designs with 3 or more factors unusual

A

complicated to interpret
require large n (between-subjects)
take too long per participant (within-subjects)

5
Q

when are factorial designs needed

A

more than one IV contributes to a DV

6
Q

what do factorial designs tell us

A

allows us to explore complicated relationships between IVs and DVs

  • main effects (how IVs individually affect the DV)
  • interactions (how IVs combine to affect the DV)
7
Q

interpreting factorial design results - main effects

A

most straightforward result
summarises the data at the level of individual IVs
marginal means
(try add picture from notes)

8
Q

problem with main effects

A

can be misleading
main effects can suggest what look like the two optimal conditions individually, but the combination of those two may not be the optimal condition

9
Q

interpreting factorial design results - interactions

A
we look at them in line charts
no interaction = parallel lines
interaction = non-parallel (crooked) lines
special case = crossover interactions
- the effect of the IV on the DV reverses depending on the other IV
10
Q

what are the three types of factorial anova

A

between-subjects factorial anova
within-subjects factorial anova
mixed factorial anova (covered in PS2002)

11
Q

assumptions in factorial anova

A

interval/ ratio (scale in SPSS)
normally distributed - examine with histogram
homogeneity of variance (for between-subjects) - eyeball SDs, Levene’s test
sphericity of covariance (for within-subjects) - mauchly’s test

12
Q

what tests for homogeneity of variance

A

levene’s test

13
Q

what tests for sphericity of covariance

A

mauchly’s test

14
Q

what happens if assumptions are violated for factorial anova

A

they can withstand some violation

so proceed with caution and report which assumptions have been violated, along with corrected anova results where possible

15
Q

F-values

how many of these values can there be

A

one-way anova = 1 F-value
two-way factorial anova = 3 F-values (main effect a, main effect b, interaction axb)
three-way factorial anova = 7 F-values (main effects a, b, c, interactions axb, axc, bxc, axbxc)
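The pattern above generalises: a k-way factorial ANOVA yields 2^k − 1 F-values, one per main effect and one per possible interaction. A quick Python sketch (the helper name is mine, not from the notes):

```python
def n_f_values(k):
    """Number of F-values in a k-way factorial ANOVA: 2**k - 1
    (all main effects plus all possible interactions)."""
    return 2 ** k - 1

print(n_f_values(2))  # → 3 (a, b, axb)
print(n_f_values(3))  # → 7 (a, b, c, axb, axc, bxc, axbxc)
```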

16
Q

how to report multiple F-values

A

F(between-groups df, within-groups/error df) = F-value, p = probability

17
Q

central tendency

A

a single score that represents the data - mean

18
Q

dispersion / spread

A

a measure of variability in the data

s (standard deviation) = sqrt( sum(x - mean)^2 / (N - 1) )
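The formula can be sketched directly in Python (function name and data are illustrative, not from the notes):

```python
import math

def sample_sd(xs):
    """Sample standard deviation: sqrt(sum((x - mean)^2) / (N - 1))."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # mean = 5
print(round(sample_sd(scores), 3))  # → 2.138
```

Note the N − 1 denominator: that is what makes this the sample SD rather than the population SD.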

19
Q

using means and standard deviations

A

we can compare a range of measurements using z (standard) scores
z = (score - mean)/SD
we can express how many SD units a point in the normal curve is from the mean using z scores
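The z-score formula above as a one-line sketch (illustrative numbers, not from the notes):

```python
def z_score(score, mean, sd):
    """Express a score in SD units from the mean: z = (score - mean) / SD."""
    return (score - mean) / sd

# e.g. a mark of 70 where the class mean is 60 and the SD is 5
print(z_score(70, 60, 5))  # → 2.0
```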

20
Q

why do we use sampling

A

we cannot test everyone

we make assumptions about how our sample relates to the population based on what we know about sampling theory

21
Q

what is a population

A

every single possible observation

fortunately we know populations tend to be normally distributed

22
Q

explain the central limit theorem

A

if samples are representative of the population:
1 the distribution of all the sample means will approach a normal distribution
2 whilst individual sample means may deviate from the population mean, the mean of all sample means will equal the population mean
3 as the sample size increases, the standard deviation of the sampling distribution decreases
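Point 3 can be checked with a small simulation (illustrative stdlib-only sketch; the population and helper names are mine):

```python
import random

random.seed(1)

def sampling_sd(pop, n, reps=2000):
    """SD of the distribution of sample means for samples of size n."""
    means = [sum(random.choices(pop, k=n)) / n for _ in range(reps)]
    grand = sum(means) / reps
    return (sum((m - grand) ** 2 for m in means) / (reps - 1)) ** 0.5

population = [random.gauss(100, 15) for _ in range(10_000)]
# the sampling distribution narrows as sample size grows
print(sampling_sd(population, 4) > sampling_sd(population, 25) > sampling_sd(population, 100))  # → True
```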

23
Q

as a sample size increases……

A

we can say with more certainty what the population mean is

24
Q

standard error of the mean or standard error

A

SE=SD/sqrt(N)
the SD of the sampling distribution; it indicates how confident we can be that our sample mean represents the population mean
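The SE formula as a sketch (illustrative numbers):

```python
import math

def standard_error(sd, n):
    """SE = SD / sqrt(N): the SD of the sampling distribution of the mean."""
    return sd / math.sqrt(n)

# e.g. sample SD of 15 with N = 25 participants
print(standard_error(15, 25))  # → 3.0
```

Quadrupling N only halves the SE, which is why large gains in precision get expensive.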

25
Q

what are inferential statistics

A

stats that allow us to make an inference about the population from which our samples are drawn

26
Q

type 1 and 2 errors

A

boy who cried wolf
boy commits a type 1 followed by a type 2 error
(cried wolf when there was no wolf - claiming an effect when in fact there is none)
(then no cry when the wolf really is there - missing an effect that does exist)

27
Q

to use a t-test what are the parametric assumptions we make about the data

A

interval / ratio (scale in SPSS)
normal distribution - examine histogram
homogeneity of variance - levene's test

28
Q

3 types of t-tests

A

single-samples t-test (whether a sample is drawn from a population whose mean we know)
independent samples t-test (two sets of measurements are drawn from the same population in different groups)
paired samples t-test (two sets of measurements are drawn from the same population before and after an intervention)

29
Q

t=…

the general idea

A

difference between means / variability in means
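As a concrete sketch of that general idea, a paired-samples t computed by hand (illustrative data and helper name, not from the notes):

```python
import math

def paired_t(before, after):
    """t = mean difference / standard error of the differences; df = n - 1."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    m = sum(diffs) / n
    sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))
    return m / (sd / math.sqrt(n)), n - 1   # t-value, df

t, df = paired_t([5, 7, 6, 8, 7], [3, 5, 5, 6, 5])
print(round(t, 2), df)  # → 9.0 4
```

The numerator is the "difference between means" part; the denominator (SE of the differences) is the "variability in means" part.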

30
Q

non parametric alternatives to single-samples t-test

A

one-sample wilcoxon signed-ranks test

31
Q

single sample t-test df

A

n-1

32
Q

non parametric alternatives to independent-samples t-test

A

mann-whitney u test

33
Q

independent samples t-test df

A

n1+n2-2

34
Q

non-parametric alternatives to paired samples t-test

A

wilcoxon signed ranks

35
Q

df for paired samples t-test

A

df=n-1

36
Q

how to report t-values

A

t(df)=t-value, p=

37
Q

multiple comparisons problem

A

computationally inefficient

increases the overall probability of a type 1 error = familywise error

38
Q

between groups variance

A

deviation of the group means from the grand, overall mean
the greater the group means differ from the grand mean, the bigger this will be
should equal treatment effect + measurement error in an ideal world

39
Q

within-groups (error) variance

A

deviation of scores within each group from that group's mean
the more scores within each group vary from each other, the bigger this will be
should only equal measurement error in an ideal world

40
Q

F-value =….

A

between-groups variance / within-groups variance
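The ratio can be computed by hand for a one-way design (illustrative sketch; helper name and data are mine):

```python
def one_way_f(groups):
    """F = between-groups mean square / within-groups (error) mean square."""
    scores = [x for g in groups for x in g]
    grand = sum(scores) / len(scores)                 # grand, overall mean
    k, n = len(groups), len(scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))  # df: k-1 and n-k

print(one_way_f([[3, 4, 5], [6, 7, 8], [9, 10, 11]]))  # → 27.0
```

Group means far from the grand mean inflate the numerator; noisy scores within groups inflate the denominator.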

41
Q

F distribution depends on how many df

the degrees of freedom shape the distribution

A

2
between groups df
within groups (error) df

42
Q

pairwise comparisons

A

significant anova results tell us there is a significant difference somewhere
to tell where, we follow up a significant anova result
- how to follow up depends on what we hypothesised about the pairwise differences
- planned = a priori comparisons
- unplanned = post hoc comparisons

43
Q

a priori comparisons

A

if you have a specific hypothesis about expected differences amongst conditions
compare only those cases you have a specific hypothesis about
would use specific t-tests
but remember these are still subject to familywise error, which can be corrected with a bonferroni correction

44
Q

post hoc comparisons

A

if there are no hypotheses about expected difference amongst conditions then compare all cases using an accepted post hoc test - tukey’s HSD
all accepted posthoc tests control for familywise error

45
Q

Levene’s test - what to do with it

A

If the significance of the Levene’s statistic is greater than .05, the independent samples have equal
variances and you should use the t-statistic, df and p-value from the top row (equal variances
assumed) of the Independent Samples Test table.
If the significance of the Levene’s statistic is less than .05, the independent samples have unequal
variances and you should use the t-statistic, adjusted df and p-value from the bottom row (equal
variances not assumed) of the Independent Samples Test table.
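That decision rule can be written as a one-liner (illustrative helper, mirroring the two rows of SPSS's Independent Samples Test table):

```python
def ttest_row_to_report(levene_p):
    """Pick which row of the SPSS Independent Samples Test table to report,
    based on the significance of Levene's statistic."""
    return "equal variances assumed" if levene_p > .05 else "equal variances not assumed"

print(ttest_row_to_report(0.32))  # → equal variances assumed
print(ttest_row_to_report(0.01))  # → equal variances not assumed
```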