Summa Week 9 Flashcards

1
Q

When we want to explore whether the effects of different treatments influence the dependent measure, we can use tests of

A

t-test - compares two means; one predictor (independent variable)
ANOVA - an extension of the t-test
- compares several means
- can manipulate several IVs at once

2
Q

If we want to compare several means, why don’t we just compare pairs of means with t-tests?

A

pairwise t-tests can’t look at several independent variables at once, and running many of them inflates the Type I error rate

3
Q

What is PC?

A

error rate per comparison

4
Q

PC is the prob of making a ______ error on a ____ comparison, assuming the null hypothesis is ____

A

Type I
single
true

5
Q

If alpha = 0.05, there is a 5% chance that you are rejecting the null hypothesis _______

A

incorrectly

6
Q

If we ran a bunch of t-tests, each at a = .05, then the per comparison error rate would be

A

.05

7
Q

What is FW?

A

the familywise error rate

8
Q

FW is the prob of _____ rejecting at least one null hypothesis in a set (or family) of c comparisons, assuming that each of the c null hypotheses is ____

A

incorrectly

true

9
Q

Familywise alpha is

A

FW = 1 - (1 - a’)^c

10
Q

In FW = 1 - (1 - a’)^c, a’ is the

A

per comparison error rate

11
Q

In FW = 1 - (1 - a’)^c, c is the

A

number of comparisons

12
Q

When we have k = 6 (k is the number of experimental conditions), we will have c =

A

c = 6*(6-1)/2 = 15 comparisons

13
Q

If we have an error rate per comparison of a’ = .05 (and c = 15 comparisons), then familywise alpha is

A

FW = 1 - (1-.05)^15 = .537
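Worked check (not a deck card; a minimal Python sketch of the arithmetic above, with function names of my own):

```python
# Illustration only: the arithmetic behind the last two cards.

def n_pairwise_comparisons(k: int) -> int:
    """Number of pairwise comparisons among k group means: c = k(k-1)/2."""
    return k * (k - 1) // 2

def familywise_alpha(alpha_pc: float, c: int) -> float:
    """FW = 1 - (1 - a')^c, treating the c comparisons as independent."""
    return 1 - (1 - alpha_pc) ** c

c = n_pairwise_comparisons(6)               # k = 6 conditions -> 15 comparisons
print(c)                                    # 15
print(round(familywise_alpha(0.05, c), 3))  # 0.537
```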

14
Q

The aim of ANOVA is to determine whether a treatment effect is present by comparing ______ and _______ ______

A

errors

treatment effect

15
Q

What is error also known as?

A

random variance

16
Q

What are random errors?

A

individual differences

17
Q

What are measurement errors?

A

problems in accurately collecting the data

18
Q

What is systematic variance?

A

treatment effect, the action of the IV

19
Q

When the population means are equal, the differences among the group means will reflect the operation of _________ _____ alone (no ______ _______)

A

experimental error

treatment effect

20
Q

What is the theory of ANOVA?

A

SS total = SS treatment + SS error

21
Q

SS total =

A

total sum of squares - the total variability among all the scores

22
Q

SS treatment =

A

model sum of squares - variability due to the experimental manipulation

23
Q

SSerror =

A

residual sum of squares - variability due to individual differences in performance

24
Q

What is the F-ratio?

A

MStreatment/MSerror

25
Q

If the model explains a lot more variability than it can’t explain, then the experimental manipulation has had a _______ ______ on the outcome (DV)

A

significant effect on the outcome

26
Q

What is within-groups variability?

A

intragroup variability - the variability of scores within each group

27
Q

what is between-groups variability?

A

intergroup variability - the variability between the group means

28
Q

How many degrees of freedom impact the shape of the F distribution?

A

2

29
Q

Is ANOVA two-tailed?

A

No; the F-ratio is always positive, so only the upper tail is used

30
Q

Like a t-test, ANOVA tests the null hypothesis that the ______ are the same

A

means

31
Q

What is the experimental hypothesis?

A

the group means differ (not all means are equal)

32
Q

What type of test is ANOVA?

A

an omnibus test

33
Q

Omnibus tests are described as

A

tests for an overall difference between groups
tells us that the group means are different
does not tell us where the significant difference lies

34
Q

Assumptions for 1-way ANOVA:

RANDOM SAMPLING

A

each sample is a random sample from its population

35
Q

Assumptions for 1-way ANOVA: random sampling robustness?

A

some consider it inappropriate to conduct the ANOVA if this assumption is violated, but others argue ANOVA is robust to the violation

36
Q

Assumptions for 1-way ANOVA: independence of cases

A

each case is NOT influenced by other cases; ANOVA is NOT robust to violations of this assumption

37
Q

Assumptions for 1-way ANOVA: normality

A

the DV is normally distributed in each population; ANOVA is robust to violations provided the sample size (N) is large and the n of the groups is EQUAL

38
Q

What is assumed for 1-way ANOVA to be robust?

A

sample size is large, and group sizes are equal

39
Q

Assumptions for 1-way ANOVA: HoV

A

the degree of variability (variance) in each population is equivalent

40
Q

Assumptions for 1-way ANOVA: HoV robustness

A

robust to violations if the sample size is large and the group sizes are about equal

41
Q

What are the 2 types of means of interest in ANOVA?

A

the marginal means (the mean of each group) and the Grand Mean, or M..

42
Q

What is the sum of squares for SStotal

A

the sum of the squared differences between the individual scores and their marginal (group) means (within/SSerror) plus the squared differences between the marginal means and the grand mean (between/SStreatment)

43
Q

What is df for sstreatment?

A

df = k-1

44
Q

What is df for error?

A

df = N - k

45
Q

What is df for sstotal?

A

df = N -1

46
Q

What is the formula for ANOVA?

A

F(dfbetween, dfwithin) = MStreatment/MSerror

47
Q

What is MStreatment?

A

SStreatment / dftreatment = MStreatment

48
Q

What is MSerror?

A

SSerror/dferror = MSerror

49
Q

What is SS total?

A

SSbetween (sum of squares of the model) + SSwithin (sum of squares of individual differences)

50
Q

If F value is _____ than F Critical value, we can reject our ___ hypothesis and conclude that not all group/sample ________ are equal

A

greater
null
means
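To make this decision rule concrete, here is a minimal Python sketch (my own illustration, assuming SciPy is installed). The mean squares used are the ones implied by the worked example later in the deck (SStreatment = 20.135 with df = 2, SSerror = 23.60 with df = 12).

```python
# Illustration: comparing an observed F to the critical F at alpha = .05.
from scipy import stats

ms_treatment = 20.135 / 2     # MStreatment from the deck's worked example
ms_error = 23.60 / 12         # MSerror from the same example
df_between, df_within = 2, 12

f_obs = ms_treatment / ms_error
f_crit = stats.f.ppf(1 - 0.05, df_between, df_within)   # upper-tail critical value

if f_obs > f_crit:
    print(f"F({df_between}, {df_within}) = {f_obs:.2f} > {f_crit:.2f}: reject the null")
else:
    print(f"F({df_between}, {df_within}) = {f_obs:.2f} <= {f_crit:.2f}: do not reject")
```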

51
Q

What is a limitation of ANOVA?

A

we don’t know where exactly the differences between the means lie

52
Q

How do we determine where differences in means occur?

A

using post-hoc tests

53
Q

What is on ZW’s favourite slide?

A

SStotal (total variance in the data) = SStreatment (variance due to the treatment) + SSerror (errors in model)

54
Q

What is the calculation for SStotal?

A
sum of (each individual score - grand mean)^2, or....
sgrand^2 x (N-1)

e.g. M… = 3.467
marginal means are 2.20, 3.20, and 5.00
sgrand^2 = 3.124
n = 5, k = 3, therefore N = n x k = 15

so SStotal = 3.124(15-1) = 43.74

55
Q

What is the calculation for SS total?

A

sgrand^2 x (dftotal)

56
Q

What is the calculation for SStreatment?

A

sum over the groups of (n in the group) x (marginal mean for the group - grand mean)^2
= 5(2.20 - 3.467)^2 + 5(3.20 - 3.467)^2 + 5(5.00 - 3.467)^2
= 8.025 + 0.355 + 11.755
= 20.135

57
Q

What is the calculation for SSerror?

A

= sum of (each score in group 1 - marginal mean for group 1)^2 + … for every group, which works out to
= sum of (sample variance of group 1) x (n - 1) + … for every group

e.g. M… = 3.467
marginal means are 2.20, 3.20, and 5.00
sgrand^2 = 3.124
n = 5, k = 3, therefore N = n x k = 15

so…
= sgroup1^2 (n1-1) + sgroup2^2 (n2-1) + sgroup3^2 (n3-1)
= 1.70(5-1) + 1.70(5-1) + 2.50(5-1)
= 23.60
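The three SS cards above can be checked with a short Python sketch; the numbers come straight from the worked example, and only the variable names are mine.

```python
# Re-computing the worked example (n = 5 per group, k = 3) from summary statistics.
n = 5
group_means = [2.20, 3.20, 5.00]   # marginal means
group_vars  = [1.70, 1.70, 2.50]   # sample variance within each group
s2_grand    = 3.124                # variance of all N scores taken together
k = len(group_means)
N = n * k                          # 15

grand_mean = sum(group_means) / k  # with equal ns this equals M.. = 3.467
ss_total     = s2_grand * (N - 1)                                    # 43.74
ss_treatment = sum(n * (m - grand_mean) ** 2 for m in group_means)   # 20.13
ss_error     = sum(v * (n - 1) for v in group_vars)                  # 23.60

print(round(ss_total, 2), round(ss_treatment, 2), round(ss_error, 2))
print(round(ss_treatment + ss_error, 2))   # equals SStotal (within rounding)
```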

58
Q

1-way ANOVA dftotal =

A

N - 1

59
Q

dftreatment 1-way anova is

A

k - 1

60
Q

dferror 1-way anova is

A

N-k

61
Q

SStotal =

A

SStreatment + SSerror

62
Q

dftotal =

A

dftreatment + dferror

63
Q

MStreatment =

A

SStreatment/dftreatment

64
Q

MSerror =

A

SSerror/dferror

65
Q

F(dfbetween,dfwithin) =

A

MStreatment/MSerror

66
Q

If F value is ______ than F critical value, we can _____ our null hypothesis and conclude that…

A

greater
reject
not all group/sample means are equal

67
Q

What is the downside to 1-way anova?

A

we don’t know where the differences lie, therefore we use post-hoc tests

68
Q

Significant t and F ratios show that there is ____ ____ of the treatment, a real _____ between the groups that _____ be explained by chance

A

a real effect
difference
cannot

69
Q

effect size measures how

A

big the effect of the treatment is

70
Q

a significant effect depends on

A

size of the mean differences
size of the error variance
degrees of freedom

71
Q

How to determine raw effect size (looking)

A

just looking at the raw difference between groups

72
Q

how to determine raw effect size (depending)

A

it can be expressed as the largest group difference or the smallest, depending on which groups you compare

73
Q

how to determine raw effect size (comparisons)

A

CANNOT be compared across samples or experiments

74
Q

Standardized effect size …

A

expresses raw mean differences in standard deviation units

75
Q

Another name for standardized effect size is

A

Cohen’s d

76
Q

What is a small, medium and large effect for Cohen’s d, respectively?

A

.2, .5, .8
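A minimal Python sketch of the computation (my own illustration; the group statistics below are hypothetical, not deck data):

```python
# Illustration: Cohen's d = raw mean difference in pooled standard deviation units.
import math

def cohens_d(m1, m2, s1, s2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

d = cohens_d(m1=5.00, m2=3.20, s1=1.58, s2=1.30, n1=5, n2=5)
print(round(d, 2))   # ~1.24 -> large by the .2 / .5 / .8 benchmarks
```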

77
Q

What is eta-squared?

A

an OVERESTIMATION of the degree of overlap in the population

n^2 = (SStotal - SSerror)/SStotal, or…
= SStreatment/SStotal

78
Q

What is omega-squared?

A

a better estimate of the percent of overlap in the population than eta squared, it corrects for the size of error and the number of groups

79
Q

What is the formula for omega squared?

A

oo^2 = (SStreatment - (k-1)MSerror) / (SStotal + MSerror)
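Both effect sizes can be computed from the SS values in the deck’s worked example; the sketch below is my own Python illustration of the two formulas.

```python
# Illustration: eta-squared vs. omega-squared for the worked example
# (SStreatment = 20.135, SSerror = 23.60, k = 3, N = 15).
ss_treatment, ss_error = 20.135, 23.60
k, N = 3, 15

ss_total = ss_treatment + ss_error
ms_error = ss_error / (N - k)

eta_sq   = ss_treatment / ss_total
omega_sq = (ss_treatment - (k - 1) * ms_error) / (ss_total + ms_error)

print(round(eta_sq, 2))    # ~0.46
print(round(omega_sq, 2))  # ~0.35, smaller than eta-squared, as expected
```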

80
Q

n^2 is a…

A

sample estimate of the proportion of the variance in the DV that is accounted for by the IV

81
Q

What do you use for POPULATION estimates of effect size?

A

oo^2, or omega squared

82
Q

What is partial eta-squared?

A

the proportion of the total variability attributable to a given factor

83
Q

npartial^2 formula

A

SStreatment / (SStreatment + SSerror)

84
Q

partial omega squared?

A

oo^2 = (SStreatment - (k-1)MSerror) / (SStreatment + (N - (k-1))MSerror)

85
Q

What is effect size for correlation?

A

r, or Pearson’s correlation

86
Q

What is the small, medium and large effect size for correlation?

A

r, which is .1, .3 and .5

87
Q

What is effect size for ANOVA (first of 2)?

A

eta squared

88
Q

What is the small, medium, and large effect size for ANOVA (first of 2)?

A

n^2, 0.01, 0.06 and 0.14

89
Q

What is effect size for ANOVA (second of 2)?

A

omega squared

90
Q

What is the small, medium and large effect size for ANOVA (second of 2)?

A

omega squared, 0.01, 0.06, and 0.14

91
Q

What is effect size for t-tests?

A

Cohen’s d

92
Q

What is the small, medium, and large effect sizes for t-tests?

A

Cohen’s d 0.2, 0.5, 0.8

93
Q

What is effect size for 2 x 2 tables?

A

odds ratios

94
Q

What is the small, medium, and large effect sizes for 2 x 2 tables?

A

odds ratios, 1.5, 3.5, 9.0

95
Q

What are Welch statistics?

A

when Levene’s F test reveals that the HoV assumption is NOT met (i.e. p <= .05), then Welch’s F test should be used instead
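A minimal Python sketch of the HoV check (my own illustration, assuming SciPy is installed; the scores are made up, not deck data):

```python
# Illustration: Levene's test of equal variances before trusting the ordinary F.
from scipy import stats

g1 = [1, 2, 2, 3, 3]
g2 = [2, 3, 3, 4, 4]
g3 = [4, 4, 5, 6, 6]

stat, p = stats.levene(g1, g2, g3)
if p <= 0.05:
    print(f"Levene p = {p:.3f}: HoV violated, report Welch's F instead")
else:
    print(f"Levene p = {p:.3f}: HoV holds, the ordinary F-test can be used")
```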

96
Q

How do you get Welch’s F in SPSS?

A

Analyze - Compare Means - One-Way ANOVA - Options; use the F reported under Statistic (a), as well as the adjusted df and significance value. ALWAYS ROUND THE DFs TO WHOLE NUMBERS

97
Q

There are many effect size measures that indicate the amount of total variance that is accounted for by the effect. What does no relationship look like?

A

DV and A circles are completely disjoint (not overlapping at all)

98
Q

There are many effect size measures that indicate the amount of total variance that is accounted for by the effect. What does a small relationship look like?

A

DV and A circles are just barely touching

99
Q

There are many effect size measures that indicate the amount of total variance that is accounted for by the effect. What does a moderate relationship look like?

A

DV and A circles overlap by about 1/4 of their surface area each

100
Q

There are many effect size measures that indicate the amount of total variance that is accounted for by the effect. What does a strong relationship look like?

A

DV and A circles overlap by more than 1/2 of their surface areas

101
Q

What is the proportion of variance accounted for by the regression model?

A

R^2

102
Q

Multiple R^2 is equal to

A

eta-squared

103
Q

Adjusted R^2 is equal to

A

omega-squared

104
Q

What is the formula for R^2?

A

SStreatment/SStotal (i.e. eta-squared)

105
Q

Why would you select levels of the IV that are very different?

A

to increase the effect size and make the study more powerful

106
Q

What can be made more liberal to create a more powerful study?

A

alpha level

107
Q

What can you reduce for designing a powerful study?

A

error variability

108
Q

What would you compute to ensure adequate power when designing a powerful study?

A

the sample size

109
Q

How do you assess effect size for 1-way ANOVA studies in SPSS?

A

Analyze - Univariate; define the DV and the IV, specify whether the factor is fixed or random, then click Options

110
Q

What do you need to ensure before proceeding with the F value data?

A

that Levene’s test of equality of error variances is non-significant, meaning that the variances of the groups are homogeneous and that we can assume the error variance of the DV is equal across groups

111
Q

Why is ANOVA an omnibus test?

A

it tests for an overall difference between groups and tells us that the group means are different, yet sadly does not say where exactly the significant difference lies

112
Q

What’s the deal with post-hoc tests?

A

they are done after the ANOVA, making pairwise comparisons while controlling the familywise (FW) error rate

113
Q

When are post-hoc tests appropriate?

A

only when you are doing exploratory research (a.k.a fishing for significance)

114
Q

How many post-hoc tests are there for our interest?

A

5

115
Q

What is the Bonferroni method?

A

a type of post-hoc test that controls the familywise alpha

116
Q

What are the 5 posthoc tests to discern where the difference in the means lie in an ANOVA?

A

1) Bonferroni
2) Tukey’s HSD
3) Dunnett’s C
4) Scheffe’s test
5) Fisher’s LSD procedure
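As an illustration of one of these (not deck content), here is a sketch of Tukey's HSD in Python, assuming the statsmodels package is installed; the scores are made up.

```python
# Illustration: Tukey's HSD pairwise comparisons after a significant omnibus F.
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = [1, 2, 2, 3, 3,  2, 3, 3, 4, 4,  4, 4, 5, 6, 6]
groups = ["g1"] * 5 + ["g2"] * 5 + ["g3"] * 5

result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())   # one row per pair, with the reject-H0 decision
```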

117
Q

What are a priori comparisons?

A

comparisons planned before the data are collected, usually with an idea of what to expect

118
Q

What do planned comparisons almost never involve?

A

very many of the possible comparisons. It is a really bad idea to do pairwise t-tests among all pairs of means

119
Q

If the comparisons are planned, then you test them without

A

any correction.
Each F-test for the comparison is treated like any other F-test: you look up the F-critical value in a table using dfcomparison and dferror

120
Q

How do you do a priori comparisons with correction?

A

a t-test by using MSerror and tcritical at dferror

121
Q

When do you use Bonferroni t-test (Dunn’s test)?

A

with correction and a t-test by using MSerror and tcritical at dferror

122
Q

Which a priori comparisons require equal sample sizes?

A

Bonferroni and Dunn’s tests

123
Q

What is trend analysis?

A

if you have ordered groups, you often will want to know whether there is a consistent trend across the ordered group (e.g., linear trend)

124
Q

How do you tell whether there is a consistent trend across the ordered groups?

A

by testing for a linear trend

125
Q

When does trend analysis come in handy?

A

it comes in handy when you have ordered groups, because orthogonal weights are already worked out depending on the number of groups

126
Q

What does trend analysis depend on for the number of groups?

A

orthogonal weights are already worked out

127
Q

When is trend analysis best done?

A

as a PLANNED comparison, although it can be done post-hoc (see the sketch below)
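A minimal sketch of what a planned linear-trend contrast looks like computationally (my own illustration): it uses the standard orthogonal weights for k = 3 ordered groups and the means/MSerror from the deck's worked example.

```python
# Illustration: a planned linear-trend contrast for k = 3 ordered groups.
weights = [-1, 0, 1]                 # standard orthogonal linear weights
group_means = [2.20, 3.20, 5.00]     # ordered group means (worked example)
n, ms_error, df_error = 5, 23.60 / 12, 12

psi = sum(w * m for w, m in zip(weights, group_means))     # contrast estimate
ss_contrast = n * psi ** 2 / sum(w ** 2 for w in weights)  # contrast SS, df = 1
f_contrast = ss_contrast / ms_error                        # compare to F(1, df_error)

print(round(psi, 2), round(ss_contrast, 2), round(f_contrast, 2))
```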

128
Q

Reporting ANOVA in APA format:

In this study, a random sample of states from each of the census regions was taken, and the average salary of clinical, counseling, and school psychologists was recorded. The mean salary in the Northeast region was $77,730 (SD = $1,030), $63,550 (SD = $930) in the Midwest, $61,370 (SD = $1,039) in the South, and $68,830 (SD = $870) in the West. The overall analysis of variance showed a statistically significant effect of region, F(3, 120) = 3.52, p = .049, oo^2 = 0.32.

A

However, post-hoc testing showed that the only statistically significant difference was between the Northeast and the South: the salaries for clinical, counseling, and school psychologists are statistically higher in the Northeast than in the South (Tukey’s HSD, p < .05). The results of this study suggest that it might be worthwhile for a psychologist living in the South to move to the Northeast, but there would be no advantage to moving to the West or Midwest. If this study is replicated, it would be advisable to take into account the cost of living, as higher salaries in a region may be offset by a higher cost of living.