Week 4 Simple ANOVA Flashcards

1
Q

Power

A

The probability of making a Type 2 error (failing to reject the null hypothesis when it is in fact false) is called beta; this is NOT the same as a beta weight in regression. Power is the sensitivity of an experiment: the ability to detect a significant effect in our data.

Power = 1-beta.

A powerful/sensitive experiment is one with a low probability of making a type 2 error.

We normally want power to be 0.8 (or higher).

Factors which will increase power (illustrated in the sketch below):

  1. Increasing the alpha level (e.g. from 0.05 to 0.1). NOT advisable in practice.
  2. Increasing the number of subjects per condition (ANOVA) or overall (regression).
  3. A large effect size.
  4. Controlling extraneous influences (e.g. background noise).
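To see how these factors trade off, here is a minimal power-calculation sketch. It assumes the Python statsmodels package is available; effect_size there is Cohen’s f and nobs is (on my reading of that API) the total sample size, so the specific numbers are illustrative only.

```python
# Minimal sketch (assumes statsmodels is installed): solve for the total sample
# size needed to reach power = 0.8 in a one-way ANOVA with 3 groups.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
total_n = analysis.solve_power(effect_size=0.25,  # Cohen's f; ~0.25 = "medium" effect
                               alpha=0.05,        # significance level
                               power=0.80,        # desired power
                               k_groups=3,        # number of conditions
                               nobs=None)         # left as None so it is solved for
print(f"Approximate total N required: {total_n:.0f}")
```

Rerunning with a larger alpha or a larger effect size shows the required sample size falling, which mirrors the list of factors above.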
2
Q

Anova assumptions

A

The assumptions of ANOVA may not always be met, but the better they are met, the more confident we can be that the analysis can be run and interpreted as intended.

  1. Assumes the variances of the groups are roughly the same. The largest variance from any group SHOULD NOT be more than 4x the smallest variance from any group; e.g. ANOVA can cope with the largest group variance being 400 when the smallest group variance is 100 (a ratio of exactly 4; see the sketch below).
  2. Assumes the continuous (dependent) variable is normally distributed within each group.
  3. All scores are independent of all others.

Also note that in ANOVA the predictors are categorical, whereas in multiple regression they may be continuous, categorical, or a mixture of the two.

ANOVA is used when we have one DV and one IV with more than 2 levels. If we had one DV and an IV with only 2 levels, we would run an independent-samples t-test.

ANOVA asks whether there is a significant difference in group means, but does NOT tell us WHERE the difference is.
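As a quick illustration of the variance rule of thumb and of running the test itself, here is a minimal sketch using scipy and numpy (both assumed available) on made-up group scores:

```python
# Minimal sketch (made-up data): check the "largest variance no more than 4x the
# smallest" rule of thumb, then run a one-way ANOVA with scipy.
import numpy as np
from scipy import stats

group_a = np.array([12, 15, 14, 16, 13, 15], dtype=float)
group_b = np.array([18, 17, 20, 19, 21, 18], dtype=float)
group_c = np.array([14, 13, 16, 15, 14, 17], dtype=float)

variances = [g.var(ddof=1) for g in (group_a, group_b, group_c)]
print("largest/smallest variance ratio:", max(variances) / min(variances))

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```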

3
Q

basic model

A

The basic model is SStotal, which compares each observed DV score against the grand mean: all the variance around the grand mean.

4
Q

best model

A

The best model is SSmodel (SSM), which compares each group mean against the grand mean.

SSmodel = SSbetween

Note that the calculation of SSmodel is slightly different if the group sizes are unequal.

5
Q

residuals

A

The residual term is SSresiduals, which compares each participant’s DV score to their group mean: all the variance within the group, around the group mean.

SSresiduals = SSwithin

6
Q

independence of errors

A

If the observations are truly independent (as is desired), then the errors are also independent of one another.

7
Q

homogeneity of variance

A

Levene’s test is used to test the assumption that the different groups have homogeneous variance (one of the assumptions of ANOVA).

If Levene’s statistic (also denoted by F) has an accompanying p value of p < 0.05, we conclude that the assumption of homogeneity has been violated. Some would then recommend using statistics other than ANOVA.

Occasionally Levene’s test is too sensitive, so we can run Hartley’s Fmax ratio to check whether the assumption really has been violated (if the largest variance is not more than 4x the smallest variance, we are still OK). If we conclude that homogeneity of variance has been violated, we can correct the F statistic with either the Welch correction (which corrects the degrees of freedom calculated for each group; use this with uneven group sizes) or the Brown-Forsythe correction (which recalculates the SS around the median rather than the mean).
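For reference, a minimal sketch of Levene’s test itself, using scipy (an assumption; the Welch and Brown-Forsythe corrections to the F statistic are not shown here) on made-up scores:

```python
# Minimal sketch (made-up data): Levene's test for homogeneity of variance.
import numpy as np
from scipy import stats

group_a = np.array([12, 15, 14, 16, 13, 15], dtype=float)
group_b = np.array([18, 27, 10, 35, 21, 16], dtype=float)  # deliberately more spread out
group_c = np.array([14, 13, 16, 15, 14, 17], dtype=float)

# center='mean' gives the classic Levene test; scipy's default, center='median',
# is a more robust, median-centred variant of the same variance test.
w_stat, p_value = stats.levene(group_a, group_b, group_c, center='mean')
print(f"Levene statistic = {w_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a violation
```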

8
Q

non parametric test

A

A test which makes no assumptions about the distribution of the data (i.e. no assumption of a normal distribution).

9
Q

magnitude of experimental effect

A

A very large F value does NOT necessarily mean a very big effect.

With a large enough sample size, even the most trivial effects will produce a very large F value.

The magnitude of the difference between 2 groups, in standard deviation units, is known as the effect size; this tells us how big the effect is. The larger the effect size, the greater the magnitude of the finding.

Cohen’s d, partial eta squared and omega squared are all measures of effect size.

Bear in mind that any effect size also needs to be interpreted in relation to previous research in the particular topic area. What counts as a “large” or “small” effect size can vary widely with the specific situation.

10
Q

Monte Carlo studies

A

Computer-simulated studies designed to violate ANOVA’s assumptions to some degree and thus test the robustness of ANOVA.
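A tiny, purely illustrative sketch of the idea: simulate many datasets where the null hypothesis is true but the group variances differ, and count how often an ordinary one-way ANOVA (scipy's f_oneway, an assumed tool here) falsely rejects at alpha = .05.

```python
# Minimal Monte Carlo sketch (illustrative settings, not from any published study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims, n_per_group = 0.05, 2000, 20
false_rejections = 0

for _ in range(n_sims):
    # Every group has the same mean (null is true), but the spreads differ.
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 2.0, n_per_group)
    c = rng.normal(0.0, 4.0, n_per_group)
    if stats.f_oneway(a, b, c).pvalue < alpha:
        false_rejections += 1

print("Observed Type 1 error rate:", false_rejections / n_sims)  # compare with 0.05
```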

11
Q

violations of assumptions

A

In practice, violations of the ANOVA assumptions are fairly common.

ANOVA can handle moderate violations of skew (skew assesses whether the left and right sides of the distribution are balanced; a balanced distribution is unskewed), but the rate of Type 1 errors is affected by kurtosis (kurtosis describes how heavy- or light-tailed a distribution is relative to the normal distribution).

Unequal group sizes cause problems, particularly as the discrepancy increases.

Type 1 errors also increase if there is non-independence of scores.

12
Q

Welch procedure

A

An adjustment made when homogeneity of variance is violated; it aims to reduce Type 1 error.

13
Q

high variability

A

In high-variability situations, even though the groups have different means, there may be a lot of overlap between them, so it is hard to tell which group any individual score most likely belongs to. In this situation the mean is not a great indicator of the group. Therefore knowing the mean difference is not enough; we also need to know how representative the mean is of its group.

14
Q

Kruskal-Wallis non parametric test

A

A non-parametric test that ranks all the data irrespective of group, then tests whether the ranks differ by group.
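A minimal sketch using scipy (assumed available) on made-up scores; kruskal does the ranking internally:

```python
# Minimal sketch (made-up data): Kruskal-Wallis H test across three groups.
from scipy import stats

group_a = [12, 15, 14, 16, 13]
group_b = [18, 17, 20, 19, 21]
group_c = [14, 13, 16, 15, 14]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```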

15
Q

Calculations

A

SS = sum of squares = Σ(X - X̄)²

For ANOVA, we have SStreatment, SSerror and SStotal.

SStreatment = nΣ(X̄j - X̄..)²

where n = group sample size, X̄j = the mean of group j, and X̄.. = the grand mean.

SSerror = Σ(Xij - X̄j)²

where Xij = the score of person i in group j and X̄j = the mean of group j.

SStotal = SStreatment + SSerror

Then, if we divide each SS by its respective degrees of freedom, we get the Mean Squares (MS). MSresidual is an estimate of the population variance. MSmodel is an estimate of the population variance when the null hypothesis is true.

MS = SS/df

dftotal = N - 1 = dftreatment + dferror

dftreatment = k - 1, where k = number of treatments

dferror = k(n - 1)
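A short sketch working these formulas through by hand in numpy (assumed available) on three equal-sized groups of made-up scores, confirming that SStreatment + SSerror = SStotal and building the F ratio:

```python
# Minimal sketch (made-up data): hand-computed SS, MS and F for a one-way ANOVA
# with k = 3 treatments and n = 4 scores per group.
import numpy as np

groups = [np.array([4., 5., 6., 5.]),
          np.array([7., 8., 6., 7.]),
          np.array([9., 10., 11., 10.])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, n = len(groups), len(groups[0])              # number of treatments, group size

ss_treatment = n * sum((g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()
print(ss_treatment + ss_error, ss_total)        # the two should agree

df_treatment, df_error = k - 1, k * (n - 1)
ms_treatment, ms_error = ss_treatment / df_treatment, ss_error / df_error
print(f"F({df_treatment}, {df_error}) = {ms_treatment / ms_error:.2f}")
```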

16
Q

F distribution

A

A well-known distribution. Its shape varies with both the number of subjects tested and the number of groups tested (hence the 2 parts to the degrees of freedom).

17
Q

Hypothesis testing of F ratio

A

Use the Fcritical tables.

The degrees of freedom for treatment are the numerator degrees of freedom,

and the error degrees of freedom are the denominator degrees of freedom.

If F(dftreatment, dferror) > Fcritical, then we reject the null hypothesis. Equivalently, if p < 0.05, the result is significant.

If the null hypothesis is true, then the expected value of F is approximately 1.

If the null hypothesis is false, the expected value of F is greater than 1.
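Instead of printed Fcritical tables, the same lookup can be done from the F distribution with scipy (assumed available); the degrees of freedom below are just an example:

```python
# Minimal sketch: F critical value at alpha = .05 for df = (2, 27).
from scipy import stats

df_treatment, df_error = 2, 27
f_critical = stats.f.ppf(1 - 0.05, dfn=df_treatment, dfd=df_error)
print(f"F critical(2, 27) = {f_critical:.2f}")
# An observed F above this value (equivalently, p < 0.05) leads us to reject H0.
```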

18
Q

Cohen’s d

A

Primarily used to compare two means.

d = .2 is a small effect size.

d = .5 is a medium effect size.

d = .8 is a large effect size.

(Cohen’s d can go to infinity)
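A minimal sketch of the calculation on made-up scores, using the pooled standard deviation as the standardiser (one common choice among several):

```python
# Minimal sketch (made-up data): Cohen's d for two groups, pooled-SD version.
import numpy as np

group_a = np.array([12., 15., 14., 16., 13., 15.])
group_b = np.array([18., 17., 20., 19., 21., 18.])

n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                     (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
d = (group_b.mean() - group_a.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```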

19
Q

Eta squared

A

To evaluate the effect size of an ANOVA finding, we use partial eta squared (η²). This can be interpreted like a squared correlation coefficient: e.g. if η² = 0.23, then 23% of the variation in the dependent variable can be attributed to differences in the variable that was manipulated across the groups.

In general, η² = .02 is a small effect,

η² = .13 is a medium effect,

η² = .26 is a large effect.

Eta squared ranges from 0 to 1.
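For a single-factor design, partial eta squared works out as SSeffect / (SSeffect + SSerror), so it can be read straight off the ANOVA table; a minimal sketch with illustrative (not real) sums of squares:

```python
# Minimal sketch (illustrative numbers): partial eta squared from an ANOVA table.
ss_effect, ss_error = 52.7, 176.3
partial_eta_sq = ss_effect / (ss_effect + ss_error)
print(f"partial eta squared = {partial_eta_sq:.2f}")  # proportion of variance explained
```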

20
Q

omega squared

A

ω² = (SSeffect - (dfeffect)(MSerror)) / (MSerror + SStotal)

The range of omega squared is -1 to 1.

Interpretation of its size is as per partial eta squared.
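A minimal sketch of that formula with illustrative (not real) ANOVA-table values:

```python
# Minimal sketch (illustrative numbers): omega squared from ANOVA-table values.
ss_effect, ss_total = 52.7, 229.0
df_effect, ms_error = 2, 6.5          # MSerror = SSerror / dferror

omega_sq = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)
print(f"omega squared = {omega_sq:.2f}")
```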