Exam 2 Flashcards

1
Q

What can ANOVAs compare that t-tests cannot?

A

More than 2 treatments or groups.

2
Q

How is alpha defined?

A

It is the probability of making a type 1 error.

3
Q

Why don’t we just do multiple tests?

What adjusts for this?

A

Because multiple tests inflate the Type 1 error rate, and an ANOVA adjusts for this.

4
Q

t-tests are to mean differences as ANOVAs are to ___________.

What does ANOVAs determine?

A

ANOVAs look at the amount of overlap in variability and compare it to what is unique to the variability of each group. They determine whether the groups are different from each other.

5
Q

If overlaps are far enough from one another in an ANOVA, what does this mean?

A

There is a difference between the sample means: the variance between the sample means is larger than chance alone would produce.

6
Q

Define Factor

A

It is the IV or the quasi-IV

7
Q

What is a quasi-independent variable?

A

Factors that are pre-existing, such as gender, ethnicity, etc.

8
Q

What are levels?

A

Levels are the individual conditions or values that make up a factor.

9
Q

What are the 3 ways to apply the ANOVA to different research designs?

A
  • Independent-measures design
  • Repeated-measures design
  • Factorial ANOVA: studies that involve more than 1 factor
10
Q

ANOVA Logic: What are we measuring in Total Variability?

A

Combine all scores into one general measure of variability

11
Q

ANOVA LOGIC: What question are we answering in Between-treatment Variability?

A

How much difference exists between the treatment conditions, and whether it is bigger than what we would expect from sampling error.

12
Q

ANOVA LOGIC: What are we measuring with within-treatment variability?

A

How much difference to expect from random, unsystematic factors: naturally occurring differences that exist even with no treatment effect.

13
Q

What are the 2 types of Variability?
1. Systematic Treatment Differences
2. Random, Unsystematic Differences
A

• Systematic treatment differences: the difference between the sample learning-performance means is caused by the different room temperatures (between).

• Random, unsystematic differences: differences that exist even if there is no treatment effect (within).
  – Individual differences
  – Experimental error
14
Q

Write out the meaning of the following ANOVA notation:

i
j
k
n
N
A

• i = individual
• j = treatment condition
• k = number of treatment conditions
• n = number of scores in each treatment condition or Group size
• N = total number of scores in the entire study
– N = k(n)

15
Q

In the ANOVA structural model, each score can be broken into 3 components.

Write it out.

A

The Structural Model
• Each score can be broken into three components
Score = grand mean + condition component + uniqueness
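As a worked version (a sketch using the deck's own dot notation; not stated on the card itself):

\[ x_{ij} = \bar{x}_{..} + (\bar{x}_{.j} - \bar{x}_{..}) + (x_{ij} - \bar{x}_{.j}) \]

Here \(\bar{x}_{..}\) is the grand mean, \((\bar{x}_{.j} - \bar{x}_{..})\) is the condition component, and \((x_{ij} - \bar{x}_{.j})\) is the uniqueness.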

16
Q

Identify the IV and DV:

Recall of verbal material as a function of level
of processing

A

– IV: Level of Processing (Counting, Rhyming,
Adjective, Imagery, Intentional)

– DV: Words Recalled

17
Q

What is the end product of an ANOVA?

A

F-ratio

F = variance between/variance within

18
Q

Each variance in the F-ratio is computed as:

A

SS/df

variance between treatments = SSbetween/dfbetween

variance within treatments = SSwithin/dfwithin
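Putting the last two cards together as one formula (the df expressions are the standard ones, not stated on this card):

\[ F = \frac{MS_{between}}{MS_{within}} = \frac{SS_{between}/df_{between}}{SS_{within}/df_{within}}, \qquad df_{between} = k - 1, \quad df_{within} = N - k \]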

19
Q

Define “orthogonal”

A

Independent from one another and unrelated.

The amount of variance attributed to group 1 is different from group 2, and so on.

20
Q

Define covariate

A

It asks what the effect of the grouping variable is after something else has been controlled for, such as accounting for someone's major.

21
Q

Why don’t we just do multiple t-tests instead of ANOVAS?

A

To adjust for type 1 error inflation.

22
Q

T or F: ANOVA is just T-test squared.

Why or why not?

A

True, if the ANOVA contains only 2 groups (then F = t²).

23
Q

What can we conclude about the distance between variances?

A

If variances are far apart from each other, we can conclude there are significant differences.

24
Q

If we are looking at the effect of room temperature on learning, what is the factor, levels, and DV?

A

The factor or IV is room temperature.

IV Conditions and the 3 diff. levels:

1) mu of 90 degrees
2) mu of 80 degrees
3) mu of 50 degrees

DV is learning.

25
Q

If we reject the null hypothesis in an ANOVA what does this mean?

A

The treatment had an effect on the DV: at least 1 of the population means is different from the others.

26
Q

What is k?

A

Number of groups

27
Q

What is x bar dot dot?

x..

A

The mean of means, or the grand mean: the mean over all individuals across all groups.

28
Q

What does it mean when the variances between the groups are different but the MEANS are roughly the same?

Set 1: m1 -m3 = 20,30,35

Set 2: m1-m3= 28,30,31

ex) variance in g1 is 58.33 and g2 is 2.33.

mean 1 is 28.33 and mean 2 is 29.67

A

There is an overlap in set 2.

The range of set 1 is 15, range of set 2 is 3.

The first set of means is more variable than the 2nd set of means.

29
Q

When looking at the F table, the top column is the ___________ and the left row is ________.

A

Top is the numerator df (between).

Left is the denominator df (within).

30
Q

What is the formula for post-hoc FWR? What does this formula show us?

A

1 − (1 − alpha)^(k−1), i.e., (1 − alpha) raised to an exponent (not a subscript).

It is the familywise error rate: it shows how much alpha has been inflated above the nominal .05.
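As a worked example with hypothetical numbers (in general the exponent is the number of tests performed):

\[ 1 - (1 - .05)^{3} = 1 - .857 \approx .14 \]

So running 3 tests at alpha = .05 inflates the familywise error rate to roughly .14, well above the nominal .05.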

31
Q

What is the formula for post-hoc Bonferroni?

A

alpha / number of comparisons (all comparisons that can be made; refer to the star Preston draws).

32
Q

SStreatment is synonymous with

A

SSbetween

33
Q

What is the formula for effect size, eta squared?

A

SStreatment (between) / SStotal

34
Q

What is MSbetween?

A

SSbetween/df between

35
Q

What is MSwithin?

A

SSwithin/df within

36
Q

How is the F calculated?

A

MSbetween/MSwithin

(This requires SSbw, SSw, SSt and dfbw, dfw, dft first.)

37
Q

Conceptual: What are we trying to find when we calculate MSbetween (mean squared between)?

A

We are looking at the differences in the variability between group means.

We are answering how much difference exists between the treatment conditions, and whether the difference we calculate is bigger than what would be expected from sampling error alone (the within-treatment term in the ANOVA).

We compare each group mean to the grand mean.

38
Q

Conceptual: What are we trying to find when we calculate MSwithin?

A

We are looking at the variability within each group.

We compare each individual score (xij) to its own group mean.

We ask how much difference there is, and whether it is reasonably due to unsystematic, individual differences.

39
Q

What are multiple comparison tests?

What are the 2 types?

A

Multiple comparisons tests are additional hypothesis tests done after an ANOVA to see exactly where the differences are.

  1. A priori: planned; tests based on a specific hypothesis (e.g., mu1 = mu2 but not equal to mu3 = mu4 = mu5).
  2. Post-hoc: unplanned; all possible mean comparisons, hoping the significant difference is found somewhere.

(A significant ANOVA only indicates that not all population means are equal; these tests locate where the differences are.)

40
Q

How many comparisons can be made between 2,3,4,5,6,7, 8 groups?

What is the pattern?

A
2 groups = 1 comparison
3 groups = 3 comparisons
4 groups = 6 comparisons
5 groups = 10 comparisons
6 groups = 15 comparisons
7 groups = 21 comparisons
8 groups = 28 comparisons
The pattern: each new group adds (k − 1) new comparisons, so k groups give k(k − 1)/2 comparisons in total (see the quick check below).
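A quick way to verify the pattern in base R (the values of k and the comparison counts are simply those listed above):

k <- 2:8
choose(k, 2)        # 1  3  6 10 15 21 28 pairwise comparisons
k * (k - 1) / 2     # same values, written as k(k-1)/2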
41
Q

What does orthogonal mean?

A

It means independent: with equal group sizes, all groups are orthogonal and there is no overlap in variance.

Because our sample sizes are equal, the groups are independent and their variances do not overlap.

This means that any difference is due to the treatment and not to the sample sizes.

42
Q

What is the idea of a confidence interval?

A

The idea of a confidence interval is to construct a range of values within which we think the population value falls.

43
Q

What type of tests do we run for an a priori test?

A

For an a priori test, we run multiple independent t-tests, since each planned comparison involves only 2 groups.

If we have a directional or otherwise specific hypothesis before the ANOVA, we run a priori tests based on those original hypotheses.

44
Q

What information do we need in order to solve for MS?

A

We need to compute the Sum of squares AND the Degrees of Freedom.

45
Q

What does the probability (p value) in an F test tell us?

A

It is the probability of obtaining an F value at least this large if the null hypothesis is true.

46
Q

What is a covariate?

A

It is a variable that we don't look at but CONTROL for.

47
Q

How is the ANOVA structural model related to regression?

A

They are identical in computation (3 components: grand mean, treatment, and uniqueness) but organized differently.

Regression just uses dummy coding for the different treatment conditions instead of the SS within-treatment variability.

Computations: if we restructure what we're doing into 3 components (treatment, uniqueness, grand mean), it replicates each individual score.
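A minimal R sketch of the same point (hypothetical data; a one-way aov() and a dummy-coded lm() give the same F test):

set.seed(1)
g <- factor(rep(c("a", "b", "c"), each = 10))   # hypothetical grouping factor
y <- rnorm(30) + as.numeric(g)                  # hypothetical scores
summary(aov(y ~ g))    # classic one-way ANOVA table
anova(lm(y ~ g))       # regression with dummy coding; same F and p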

48
Q

What do we analyze with an ANOVA?

*hint: overlap between…

A

The overlap between the groups and each group's uniqueness tell us whether the groups are different from one another.

49
Q

xij

A

each individual score (the score of individual i in group j)

50
Q

x.j

A

each group mean

51
Q

Interpret post-hoc in R:

What is ‘a’
What is ‘b’
What is ‘ab’

A

a = groups labeled a are not different from each other

b = groups labeled b are not different from each other, but are different from the a groups

ab = not different from either a or b; it overlaps both

52
Q

Know how to “tree-out” and what that contrast means.

A

It is an orthogonal contrast…

53
Q

When would we use the FWR post-hoc test?

What do we do after the FWR?

A

The FWR is the amount by which the Type 1 error rate has been inflated.

If alpha has been inflated, we then continue to the Bonferroni test.

54
Q

When would we use the Bonferroni post-hoc test?

A

We use the Bonferroni once the FWR tells us there is an inflation of alpha.

Bonferroni adjusts the alpha level by controlling the error rate for both post-hoc/orthogonal comparisons (dividing alpha by all comparisons being made) and a priori comparisons (dividing alpha by only the comparisons predicted by the hypothesis). It uses the real number of comparisons and gives a more conservative alpha level to compare against.

55
Q

When do we use Fisher’s LSD?

A

When we desperately want to find significance, because it uses the Least Significant Difference (the most lenient criterion).

It runs all possible pairwise t-tests and does not control for Type 1 error inflation.

56
Q

What is Tukey’s HSD test?

A

Controls for Type 1 error inflation when all assumptions are met.

Only when all group sizes are equal.

Compares all possible pairs of means using q.

57
Q

All group sizes are equal- which post-hoc do we use?

A

Tukey’s HSD

58
Q

What are Bonferroni tests computing for?

A

To control the error rate at the a priori alpha level.

59
Q

When do we use Scheffe tests?

A

It is the most conservative and least powerful, and it can be used with unequal groups.

It produces a large critical value. The results are quite close to Bonferroni's.

60
Q

What is power?

A

Probability that the test will reject the null hypothesis when the treatment has an effect.

It’s the likelihood of rejecting the null when there is an effect.

61
Q

What are the 2 requirements of Orthogonal Contrasts?

A
  1. The weights across each row (each contrast) must sum to zero.

2. The sum of the cross-multiplied weights of any two contrasts must equal zero (see the check below).
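A small check of both requirements in base R, using a hypothetical pair of contrasts over three groups:

c1 <- c(1/2, 1/2, -1)   # groups 1 and 2 vs. group 3
c2 <- c(1, -1, 0)       # group 1 vs. group 2
sum(c1); sum(c2)        # requirement 1: each contrast sums to zero
sum(c1 * c2)            # requirement 2: cross-multiplication sums to zero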

62
Q

What do we add for a factorial ANOVA? What is it also called?

A

We add another IV. It’s called a 2-way ANOVA.

63
Q

We believe that temperature and humidity both affect learning. What type of test would we conduct?

A

A factorial ANOVA

64
Q

What are the advantages of a Factorial ANOVA?

A

Generalization, Interaction, and Economical.

  1. Allows greater GENERALIZATION to the population.
  2. Allows INTERACTION between the IVs. We can see if 1 depends on another.
    ex) # of recall DEPENDS on whether you’re old or young.
  3. Less time, more ECONOMICAL, because we use fewer participants than conducting two 1-way ANOVA designs.
65
Q

2 x 5 ANOVA notation.

A

2 factors: one with 2 levels crossed with one with 5 levels.

66
Q

What is the Factorial ANOVA structural model?

A

It includes all combinations of the levels of the IVs.

Each observation = grand mean + factor 1 + factor 2 + interaction + uniqueness

67
Q

What are the different effects of Factorial ANOVA?

A

Each factor might produce a main effect.

There might be an interaction effect.

68
Q

When we tease apart the unsystematic differences from the systematic differences, how are we doing this?

What is the formula for this concept?

A

We know that the between-treatment variability contains both systematic and unsystematic differences (real differences AND individual differences).

We take out the unsystematic part by dividing by the unsystematic/within variability:

F = variance between / variance within
ex) F = (16 + 4) / 4 = 5

69
Q

If something is caused by a treatment, what type of difference is this?

A

A systematic, or between treatment difference.

70
Q

If there are differences seen regardless of a treatment effect, what type of difference is this?

Why would we see differences like this?

A

An unsystematic, or within-treatment, difference, which is not due to the effect of the IV.

It could be due to individual differences or experimental error; it is happening WITHIN just one group.

71
Q

A larger F value can indicate what?

A

That there is a larger systematic difference: the treatment had a bigger effect on the DV, so the between-treatment variability is larger than the within-treatment variability.

72
Q

F = 1. What does this mean?

A

There is no treatment effect: nothing systematic is going on, so we fail to reject.

We don't even need to look at the critical value. The null is retained.

Random/random = nothing has changed

73
Q

What are the 2 ways to increase the F-ratio?

A
  1. Increase the differences that treatment has on the DV

2. Add more control to reduce or eliminate the random, unsystematic differences

74
Q

In the ANOVA structural model, how many components do we break each score down into? What are they?

A

3 components:

  1. Grand mean
  2. Treatment (condition)*aka Between
  3. Uniqueness (ind. differences)*aka Within
75
Q

On an ANOVA sparse table, how would we compute SS
with a MS and the DF?

*try it

A

We would multiply the MS and the DF.

76
Q

On an ANOVA sparse table, how would we compute the between value from just the F value and MSwithin?

A

Multiply them to get the between value.
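Worked with hypothetical numbers (not from the cards):

SS = MS x df, e.g., 12 x 3 = 36
MSbetween = F x MSwithin, e.g., 5 x 4 = 20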

77
Q

Why can we never have negative F values? What does this mean for graphs?

A

Because everything is based on squared deviations, which cannot be negative.

The F distribution never goes negative; zero is always at the low end of the graph.

78
Q

How would we know about the between variability and the within variability from looking at a graph if an F value is 1?

A

There will be a large overlap, making them essentially the same.

79
Q

An F value smaller than 1 means what about the between and within variability?

A

The msbetween (systematic) is smaller than mswithin (unsystematic) variability

80
Q

What does it mean when we have a significant F-value?

A

It means the difference is due to more than just sampling error.

81
Q

Define sampling distribution.

A

It is the distribution of a sample statistic (e.g., the mean) over repeated samples; it tells us how far a sample mean is expected to differ from the parameter by chance.

82
Q

Define effect size. What is the formula?

A

The proportion (ratio) of variability that can be attributed to the treatment effect.

The percent of variance in the DV that can be accounted for by the IV.

We divide SSbetween by SStotal.

83
Q

How do we compute a homogeneity of assumptions on R?

How do we calculate the ratio?

How do we make sure it meets assumptions?

A

tapply(dv,iv,var)

We divide the highest variance by the lowest to get the variance ratio.

If the ratio is less than 10 (to 1), the homogeneity assumption is met (see the sketch below).
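A minimal R sketch of this check (assuming dv and iv are the deck's outcome and grouping variables, already loaded in the workspace):

vars <- tapply(dv, iv, var)   # variance of the DV within each level of the IV
max(vars) / min(vars)         # variance ratio; < 10 suggests homogeneity is met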

84
Q

How would you write out the assumption interpretation?

A

• Visual inspection of the histograms, normality plots, and box-and-whisker plots indicates that the data are normally distributed for the Counting and Intentional groups. The Rhyming and Imagery groups appear to be slightly positively skewed, and the Adjective group appears to be slightly negatively skewed. However, ANOVA is relatively robust to minor violations of normality, so we can conclude that the assumption of normality has been met.

• The data have met the homogeneity of variance assumption: the ratio between the largest variance, in the Imagery group (var = 20.27), and the smallest variance, in the Counting group (var = 3.33), is 6.09, which is within the 10 to 1 ratio requirement.
85
Q

How would you write an ANOVA interpretation?

A

A one-way analysis of variance revealed that there are significant differences, on average, between the treatment groups on words recalled, F(4,45) = 9.08, p < .05.

86
Q

What can we conclude about the IV and the DV when our F-value is significant?

What can we show in order to show the direction of the findings?

What doesn’t it tell us?

A

We can conclude that the IV had an effect on the DV.

We can show the mean and SD to show the direction of our findings.

It does not tell us which groups are different from each other.

87
Q

What can we conclude from looking at a confidence interval on a marginal means graph?

A

It helps to visualize results. If the 95% confidence intervals overlap, then those 2 groups are not different from each other.

88
Q

Why is F-test an omnibus test?

A

It's an overarching test: it does not tell us specifics, just the overall effect.

89
Q

Instead of doing multiple t-tests for an a priori, what can we do?

A

We can compute contrasts, using linear or orthogonal contrasts (with weights adding to zero).

We use linear contrasts for a priori tests and orthogonal contrasts for post-hoc tests.

90
Q

Why do we incorporate contrasts?

A

We want to compare a set of groups to another set of groups.

91
Q

What are the steps to computing linear contrasts?

A

We set up a linear combination of group means: each contrast weight 'a' is multiplied by its group mean, the products are summed, and that sum is Psi.
Psi is then used to compute MScontrast.
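A compact version of those steps (standard formulas assuming equal group sizes n; the card itself does not give them):

\[ \psi = \sum_j a_j \bar{x}_{.j}, \qquad SS_{contrast} = \frac{n\,\psi^2}{\sum_j a_j^2}, \qquad MS_{contrast} = SS_{contrast}\ (df = 1), \qquad F = \frac{MS_{contrast}}{MS_{error}} \]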

92
Q

How do we compute a linear contrast between 1, 2, and 3 groups?

A

We compare the average of the first two groups against the mean of the 3rd group.

We code them using negative and positive contrast weights.

ex) (1/2)mean1 + (1/2)mean2 + (-1)mean3, where the weights (1/2, 1/2, -1) sum to 0

93
Q

What is the F-value for an a priori comparison?

A

MScontrast/MSerror (from the anova table)

94
Q

What are we creating when we do an orthogonal contrast?

A

We are combining contrast vectors into a contrast matrix.

95
Q

What’s the benefit of doing an orthogonal contrast?

A

We can look at p values without concern of inflation of type 1 error.

96
Q

Which 2 post-hoc tests are similar to one another?

A

Scheffe and Bonferroni. They are both more conservative.

97
Q

Post-hoc Interpretation:

A

A one-way analysis of variance shows that the effect of memory condition on recall is significant, F(4,45) = 9.08, p < .05. Furthermore, average recall in the Adjective (M = 11.00, SD = 2.49) condition did not significantly differ from any other condition, p > .05.

Thus, we can conclude that the level of recall generally increased with the level of processing required.

98
Q

What type of matrix does Factorial ANOVAs deal with?

A

A data matrix: there are cells and margins (marginal means).

99
Q

What are the steps for Stage 1 in a factorial ANOVA?

A

To compute the SS and df between, within, and total using cell and marginal means.

100
Q

What is the second stage of a factorial ANOVA?

A

Computing the main effects and interaction (from cell and marginal means):

SSa
SSb
SSaxb = SSbetween - SSa - SSb
MSa
MSb
MSaxb
Fa = MSa/MSwithin (from stage 1)
Fb = MSb/MSwithin
Faxb = MSaxb/MSwithin
101
Q

Effect size formula. Think about why this would be the formula.

A

SSbetween/SStotal

102
Q

According to the book, F is the __________ of the model to its _________.

Explain why this is the case.

A

The F is the ratio of the model to its error.

It compares the amount of systematic variance in the data to the amount of unsystematic variance.

103
Q

The book says the easiest way to calculate the SSb is:

A


“1. Calculate the difference between the mean of each group and the grand mean.
2. Square each of these differences.
3. Multiply each result by the number of participants within that group (nk).
4. Add the values for each group together.”
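A minimal R sketch of those four steps with hypothetical scores:

g1 <- c(2, 3, 4); g2 <- c(5, 6, 7); g3 <- c(8, 9, 10)   # hypothetical groups
grand <- mean(c(g1, g2, g3))                            # grand mean
ss_b <- length(g1) * (mean(g1) - grand)^2 +             # steps 1-3 for each group
        length(g2) * (mean(g2) - grand)^2 +
        length(g3) * (mean(g3) - grand)^2
ss_b                                                    # step 4: the sum (54 here)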

104
Q

Knowing SST and SSM already, the simplest way to calculate SSR is to:

A

subtract SSM from SST (SSR = SST − SSM)

105
Q

What are we calculating with MS (mean squares?) and why?

A

The average of the sum of squares (SS/df), so that the number of scores being summed does not bias the value.

106
Q

What do assumptions generally tell us?

*homogeneity of variance

A

Whether the F value we got is reliable (based on normally distributed data).

That is, the variances in each EXPERIMENTAL (between) condition need to be similar (homogeneity of variance), the distributions within groups need to be normal, and the observations should be independent.

107
Q

According to Preston, which is more conservative: Tukey HSD or Scheffe?

A

The Scheffe

108
Q

What are we comparing in regards to main effects?

A

We compare means ignoring the particular conditions

109
Q

What are we computing in regards to main effects?

A

We compute the average of the cell means across rows and columns (the marginal means).

Cell and Marginal means = main effects

110
Q

Factor A x Factor B

A = Young and Old
B = Recall Group

What is the main effect for A and B?

A

Main effect for Factor A is the difference between old and young, regardless of Factor B.

Main effect for Factor B is the difference between recall groups, regardless of Factor A.

111
Q

What are interactions actually looking at, in regards to:

Cell and marginal means

and

Conceptual

A

We are looking at the patterns of the cell means and comparing them to the marginal means.

Concept: The mean differences of the individual treatment conditions (cells) that are different from what would be predicted based on the main effects. It is the leftover/residual stuff that’s unexplained by factor A or B!

112
Q

How do we write a notation for interaction?

A

There is no special notation for an interaction; we simply write that there is or isn't one. In the interpretation, we note that the effect of factor A partly depends on factor B.

113
Q

What is the denominator of the F ratio telling us?

A

The denominator of an F ratio is the variance expected if there is no treatment effect; it is the variance not explained by the main effects.

114
Q

Visually inspecting a plot graph of factor a x b, how will we know if there is a main effect?

Provide example from class where males scored higher than females in STD knowledge.

A

In class, we went over male and female knowledge of STDs, depending on whether they saw a video or a pamphlet.

The males scored higher (their line sits higher on the graph) than the females, so there is a main effect of gender. The lines are parallel, so there is no interaction.

There was no B (format) effect because, averaging across gender, the video and pamphlet means were about the same.

115
Q

Assess assumptions of normality visually. Know what all the skews, kurtosis, etc. look like.

A

Okay!

116
Q

How do we calculate variance between 5 levels in a factor?

How will we know if we met assumptions?

A

Divide the HIGHEST by the LOWEST. If this number is within the 10 to 1 ratio, homogeneity of variance assumptions have been met.

117
Q

How would we obtain the main effect and interaction in r?

A

fit
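One common way to do this in R (a sketch assuming a data frame dat with factors a and b and outcome dv; not necessarily the class's exact code):

fit <- aov(dv ~ a * b, data = dat)   # a * b expands to a + b + a:b
summary(fit)                          # rows for a, b, a:b (interaction), Residuals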

118
Q

Where in the ANOVA table in r will we find the MSerror?

A

[4,3]
4th row
3rd column

119
Q

If there is a Main Effect in a factorial anova for both 2x3, do we have to do more?

A

Yes. For the 3-level factor, we follow up with a multiple-comparison post-hoc (e.g., Tukey HSD); the 2-level factor needs no follow-up.

120
Q

If there were main effects in a factorial anova for a 2x3 and no interaction, would we compute anything beyond?

A

We do not compute anything more for factor A because it has only 2 groups.

For factor B, if and only if the interaction was NOT significant would we go further in exploring the main effect (into a multiple-comparison post-hoc like Tukey HSD).

121
Q

What do we focus on in the post-hoc analyses in a factorial ANOVA?

A

A significant interaction effect.

122
Q

Okay, so now I have a significant interaction. Now what?

A

Post-hoc, Simple effects time baby.

123
Q

When do we compute Simple Effects?

Provide examples using the old/young and recall group data in our class.

A

When we want to compare the effect of one whole factor (with all of its levels) at a single level of the other factor. It is an uneven comparison.

Ex: We are interested in comparing the old and young groups (factor A) within only the adjective recall condition (factor B).

  • We compare the means of young and old participants for only the adjective condition.
  • We compare the means of young and old participants for only the intentional condition.
  • We compare the means for ALL recall group conditions only for old participants.
124
Q

How do we compute Simple Effects? What is it essentially?

A

It is essentially several F-ratios.

We compute it by isolating 1 row and column of the FANOVA.

125
Q

When we calculate the main effects for factor A and B, what is the 2nd part after we do sum of squares using the cell and marginal means minus the grand mean?

Where is it tricky?

Where else is it tricky?

A

We multiply.

(n)(Ka or Kb)(SS)

It's tricky because we need to flip the K's: the SS for factor A is multiplied by the number of levels of factor B, and vice versa (see the formulas below).

It is also tricky at the level of the simple effects.
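Written out (standard formulas with equal cell sizes n, showing how the K's flip; not given on the card itself):

\[ SS_A = n\,k_B \sum_{a}(\bar{x}_{a} - \bar{x}_{..})^2, \qquad SS_B = n\,k_A \sum_{b}(\bar{x}_{b} - \bar{x}_{..})^2 \]

where \(\bar{x}_{a}\) and \(\bar{x}_{b}\) are the marginal means and \(\bar{x}_{..}\) is the grand mean.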

126
Q

What is partialling dependence?

A

It is when the between subject variability is removed so that the within subject manipulation can be isolated.

127
Q

What are the benefits of a repeated-measures ANOVA over an independent-measures ANOVA?

A

More efficient
Increased control of subject variability
Reduced error term
Increased power of study

128
Q

What are the limitations of repeated measures

A

carry-over effect
fatigue effect
lack of anonymity

129
Q

At what df for the error term will we not have to address normality anymore?

A

20 df…

130
Q

What are the departures from sphericity?

A

Greenhouse-Geisser and Huynh-Feldt corrections…

With the correction applied, the F-ratio will be approximately F-distributed despite the departure.

131
Q

What does the Greenhouse-Geisser correction do?

A

It adjusts the degrees of freedom (and therefore the p value) to an appropriate, more conservative value.