Self Made Flashcards

1
Q

In univariate ANOVA, define between-groups variance and within-groups variance.

A

people per group × the sum of squared differences between group means and the grand mean = estimate of between-groups variability

the sum of squared differences between individual scores and their group mean = estimate of within-groups variability
#Lecture 2
2
Q

What is Xij?

A
Any DV score in a one-way ANOVA (the score of person i in condition j)
#Lecture 2
3
Q

What is mu (μ)?

A
the grand mean
#Lecture 2
4
Q

What is tau j?

A
The effect of the j-th treatment
#Lecture 2
5
Q

What is e ij?

A
The error for the i-th person in the j-th treatment
#Lecture 2
6
Q

What is the structural model of 1-way ANOVA?

A
Xij = μ + τj + eij
#Lecture 2
7
Q

How is an expected value of a statistic defined?

A
The ‘long-range average’ of a sampling statistic
#Lecture 2
8
Q

What is the null hypothesis for a 2-way ANOVA interaction?

A
if there are differences between particular factor means, they are constant at each level of the other factor (hence the parallel lines)
the ‘difference of the differences’ is zero
#Lecture 2
9
Q

What is the F test in one way ANOVA?

A
F = MStreat / MSerror
#Lecture 2
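Worked example (not from the lecture, made-up scores): the F ratio on this card can be computed by hand for three groups of three, using only the standard library.

```python
# One-way ANOVA by hand: F = MS_treatment / MS_error.
# Made-up illustrative scores, three groups of n = 3 each.
groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)
group_means = [sum(g) / len(g) for g in groups]

# Between groups: people per group times squared deviations of group means from grand mean
ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
# Within groups: squared deviations of individual scores from their own group mean
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)

df_between = len(groups) - 1               # k - 1 = 2
df_within = len(all_scores) - len(groups)  # N - k = 6
F = (ss_between / df_between) / (ss_within / df_within)
print(F)  # 3.0 for these scores
```

Comparing F = 3.0 against the critical F(2, 6) value then decides significance.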
10
Q

What is Xijk?

A
Any DV score in 2 way ANOVA
#Lecture 2
11
Q

What are alpha j, beta k and alpha-beta jk?

A

the effect of the j-th treatment of factor A
the effect of the k-th treatment of factor B
the effect of differences in factor A treatments at different levels of factor B treatments
#Lecture 2

12
Q

What is the structural model of 2 way ANOVA?

A

Any DV score is a combination of the grand mean; the effect of the j-th treatment of factor A; the effect of the k-th treatment of factor B; the effect of the differences in factor A treatments at different levels of factor B treatments; and error for i person in j-th and k-th treatments

13
Q

What are the assumptions of ANOVA?

A

Population (normally distributed and have the same variance)
Sample (independent, random sampling, at least 2 observations and equal n)
Data (interval or ratio scale, not more appropriate for other scales)

14
Q

What are the conventions for small, medium and large effect sizes?

A

0.2 = small
0.5 = medium
0.8 = large
#Lecture 3

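The d behind these conventions is just the mean difference in pooled-SD units; a minimal sketch with invented scores:

```python
import math

# Cohen's d = (mean1 - mean2) / pooled SD, with invented example scores.
a = [2, 4, 6]
b = [1, 3, 5]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):  # sample variance (ddof = 1)
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# The pooled SD weights each group's variance by its degrees of freedom
pooled_sd = math.sqrt(((len(a) - 1) * var(a) + (len(b) - 1) * var(b))
                      / (len(a) + len(b) - 2))
d = (mean(a) - mean(b)) / pooled_sd
print(d)  # 0.5 here: a 'medium' effect by the conventions above
```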
15
Q

What is the difference between eta squared and omega squared?

A
Eta-squared describes the proportion of variance in the sample's DV scores that is accounted for by the effect, omega squared describes the proportion of variance in the population's DV scores.
Omega is a more conservative estimate.
#Lecture 3
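Both statistics come from the same ANOVA table quantities; a sketch using assumed (made-up) table values, showing why omega squared is always the more conservative of the two:

```python
# Assumed one-way ANOVA table values, for illustration only.
ss_effect, df_effect = 6.0, 2
ss_error, df_error = 6.0, 6
ss_total = ss_effect + ss_error
ms_error = ss_error / df_error

# Eta-squared: proportion of the sample's DV variance accounted for by the effect.
eta_sq = ss_effect / ss_total

# Omega-squared: population estimate; it subtracts the chance contribution
# expected under H0, so it always comes out smaller.
omega_sq = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

print(eta_sq, omega_sq)  # 0.5 vs about 0.31
```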
16
Q

What is partial eta squared?

A
The proportion of residual variance accounted for by the effect.
#Lecture 3
17
Q

What are the omnibus tests for a two way ANOVA?

A
Main effect of factor one, factor two, and the interaction effect.
#Lecture 3
18
Q

When do you use a protected t-test?

A
When there is a significant main effect in a 2-way ANOVA; it can only compare two means at a time (otherwise you need to do linear contrasts)
#Lecture 3
19
Q

What do simple effects do?

A
Simple effects test the effects of one factor at each level of the other factor
#Lecture 3
20
Q

Where do you get the degrees of freedom for simple effects?

A
Omnibus ANOVA table for the error, and the other df are the same as that of the associated main effect
#Lecture 3
21
Q

What are the degrees of freedom for linear contrasts?

A
df error = N - ab
#Lecture 3
22
Q

What are simple comparisons?

A
T-tests comparing cell means; exactly the same as main effect comparisons but using different means.
#Lecture 3
23
Q

What happens in more than 2x3 factorial ANOVAs that differentiates it from a 2x3?

A
Two-way interactions, three-way interactions, etc.
#Lecture 4
24
Q

In higher-order designs, what are the three different kinds of effects and what do they tell you?

A

main effects:
differences between marginal means of one factor (averaging over levels of other factors)

two-way interactions:
the effect of one factor changes depending on the level of another factor (averaging over levels of a third factor)

three-way interaction:
the two-way interaction between two factors changes depending on the level of the third factor
#Lecture 4
25
Q

How do you follow up a significant 2-way interaction in a 3-way factorial ANOVA?

A

just as in a 2-way ANOVA, a significant omnibus 2-way interaction must be interpreted
e.g., is the effect of Factor A different at different levels of Factor B, and vice-versa? (ignoring Factor C)
we then test simple effects (with the F test), exactly as we did in 2-way ANOVA

if you find a significant simple effect for a factor with > 2 levels, you follow it up with simple comparisons (with t-tests or linear contrasts), exactly as we did in 2-way ANOVA

simple interactions -> simple simple effects -> simple simple comparisons
#Lecture 4
26
Q

Why do we use simple interactions to break down 3 way interactions into a series of 2-way interactions at each level of the third factor?

A

this gives a first close-up look at where the differences between cell means might be
once we know this, we can follow up these simple 2-way interactions further to figure out where the differences are (simple simple effects & simple simple comparisons / contrasts)
 just as we follow up an interaction in a 2-way design

in a 3-way design there are three potential follow-up steps (compared to two in a 2-way design)
#Lecture 4
27
Q

What is the first step in investigating a significant 3 way interaction?

A
simple interaction effects break down the 3-way interaction into a series of 2-way interactions at each level of the third factor
#Lecture 4
28
Q

Why is it important not to get confused between doing 2-way ANOVAs at each level of the third factor and doing simple interaction effects in a 3-way design?

A
The F ratios calculated for these tests are different: a simple interaction (a) uses the pooled error term and (b) is conducted only after a significant 3-way interaction is observed
#Lecture 4
29
Q

What is the difference between simple effects and simple simple effects?

A
simple effects are follow-ups after an omnibus 2-way interaction & examine the effect of factor A at each level of factor B
simple simple effects are like simple effects except that they examine the effect of factor A at each level of factor B, at each level of factor C (i.e., within each combination of B & C)
#Lecture 4
30
Q

[Simple] What are the three tests used to follow up significant 3 way interactions?

A

Simple interaction effects
Simple Simple effects
Simple Simple comparisons
#Lecture 5

31
Q

What test is used in simple simple effects?

A
F ratio/test
#Lecture 5
32
Q

Effects tests tend to use which kind of test? And which kind do comparisons tend to use?

A
Effects tend to use F tests; comparisons tend to use t-tests
#Lecture 5
33
Q

Review Question: what are type-1 and type-2 errors? And which Greek letter is used to represent each?

A

type-1 error = finding a significant difference in the sample that actually doesn’t exist in the population. Alpha

type-2 error = finding no significant difference in the sample when one actually exists in the population. Beta
#Lecture 5
34
Q

What are the technical and useful definitions of power?

A

technical definition:
the probability of correctly rejecting a false H0
mathematically works out to 1 - Beta
( Beta = type-2 error = probability of accepting false H0)

useful definition:
the degree to which we can detect treatment effects (including main effects, interactions, simple effects, etc.) when they exist in the population
#Lecture 5
35
Q

What are the two kinds of power? Note: Not statistical notation (eta/omega)

A
observed power (post hoc power)
predicted power (a priori power)
#Lecture 5
36
Q

What factors does power depend on, and how do they affect it?

A
significance level (alpha): relaxed alpha -> more power
sample size (N): larger N -> more power
mean differences (μ0 − μ1): larger differences -> more power
error variance (σ²e or MSerror): less error variance -> more power
#Lecture 5
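How these factors trade off can be seen in the textbook normal-approximation power formula for a two-group mean comparison. A sketch (not from the lecture; two-tailed α = .05, standard library only):

```python
import math

def normal_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def approx_power(d, n_per_group):
    """Rough two-tailed power (alpha = .05) for a two-group comparison,
    using the normal approximation; d is the standardised mean difference."""
    z_crit = 1.959964                       # two-tailed critical z at alpha = .05
    delta = d * math.sqrt(n_per_group / 2)  # noncentrality of the test statistic
    return normal_cdf(delta - z_crit)       # (ignores the tiny opposite tail)

print(approx_power(0.5, 64))   # ~ .80: the classic benchmark pairing
print(approx_power(0.5, 128))  # more N -> more power
print(approx_power(0.8, 64))   # larger mean difference -> more power
```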
37
Q

What does effect size (d) indicate?

A
d indicates about how many standard deviations the means are apart, and thus the overlap of the two distributions
#Lecture 5
38
Q

What are the approximate d values and percentage overlap for small medium and large effect sizes?

A

Small: d ≈ 0.20, ~85% overlap
Medium: d ≈ 0.50, ~67% overlap
Large: d ≈ 0.80, ~53% overlap
#Lecture 5

39
Q

What is the conventional minimum desirable level of power?

A
.80
#Lecture 5
40
Q

What information do you need to estimate power both a priori and post hoc, and where do they come from?

A

A priori: estimate of effect size; estimate of error (MSerror)
we typically use previous research to estimate the likely effect size and MSerror

Post-hoc: effect size, error (MS error), N in study
we get these estimates from our dataset
#Lecture 5

41
Q

What are the three caveats for investigating power?

A
An effect must exist for you to find it: all the power in the world won't help you find effects that don't actually exist
Large samples can be bewitching and detect very small, unimportant and unstable effects
Error variance is still important: high error variance can make large effects non-significant
#Lecture 5
42
Q

How do you maximise power, four strategies?

A
focus on studying large effects
increase sample size
increase alpha level
decrease error variance
#Lecture 5
43
Q

What are the main issues with three of the four strategies for maximising power (large effects, sample size, alpha level, decrease error variance)?

A
Few large effects in psych
Practical constraints with sample size
Alpha level affects type one error
#Lecture 5
44
Q

How do you reduce error variance?

A

improve operationalisation of variables (validity)
improve measurement of variables (internal reliability)
improve study design (blocking)
improve methods of analysis (ANCOVA)
#Lecture 5

45
Q

What is error variance?

A
Variation in DV scores from sources other than IVs
#Lecture 5
46
Q

What’s the difference between correlation and covariance?

A
correlation is standardised covariance
#Lecture 6
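"Standardised covariance" means dividing the covariance by both standard deviations, which puts it on the fixed −1..+1 scale; a sketch with invented paired scores:

```python
import math

# Correlation as standardised covariance, with invented paired scores.
x = [1, 2, 4, 5]
y = [2, 3, 5, 9]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
sx = math.sqrt(sum((a - mx) ** 2 for a in x) / (n - 1))
sy = math.sqrt(sum((b - my) ** 2 for b in y) / (n - 1))

# Covariance depends on the variables' units; dividing by both SDs
# standardises it to the unit-free correlation r in [-1, 1].
r = cov / (sx * sy)
print(cov, r)
```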
47
Q

What is r squared?

A
the coefficient of determination
proportion of variance in one variable that is explained by the variance in another
#Lecture 6
48
Q

What is 1 - r squared?

A
error or residual variance in data 
#Lecture 6
49
Q

What is r adj?

A
An adjusted form of r that is more representative of the population; always more conservative than r.
#Lecture 6
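The usual shrinkage formula behind the adjustment (all input values here are made up; k = 1 predictor covers the bivariate case):

```python
# Adjusted R-squared: shrinks R-squared toward what would be expected
# in the population, penalising small n and (in multiple regression) many
# predictors; it is always below the unadjusted value.
def adjusted_r_squared(r_squared, n, k):
    """n = sample size, k = number of predictors (k = 1 for bivariate r)."""
    return 1 - (1 - r_squared) * (n - 1) / (n - k - 1)

print(adjusted_r_squared(0.50, 20, 1))   # 0.4722...
print(adjusted_r_squared(0.50, 200, 1))  # shrinkage fades as n grows
```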
50
Q

In regression, what is the best predictor of Y when X is unknown?

A
The mean of Y
#Lecture 6
51
Q

What does S Y.X reflect, and what is it called?

A
The standard error of the estimate; it reflects the amount of variability around the regression line
#Lecture 6
52
Q

What is the least squares criterion?

A
How regression lines are fitted: so that the sum of the squared differences between Y and Y-hat is a minimum (such that errors of prediction are a minimum)
#Lecture 6
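The criterion can be shown directly with invented data: the least-squares slope and intercept have a closed form, and nudging the fitted slope in either direction can only increase the sum of squared errors.

```python
# Least-squares fit of Y-hat = a + b*X, plus a check that the fitted slope
# minimises the sum of squared errors of prediction.
x = [1, 2, 3, 4]
y = [2, 3, 5, 4]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def sse(slope):
    intercept = ybar - slope * xbar  # best intercept for a given slope
    return sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))

print(a, b, sse(b))                                      # 1.5, 0.8, 1.8
print(sse(b) <= sse(b + 0.1) and sse(b) <= sse(b - 0.1))  # True
```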
53
Q

What is the difference between ANCOVA and blocking?

A
Continuous variable vs categorical; in ANCOVA treatment means are adjusted to account for differences on the covariate (in case covariate scores differ across groups, ANCOVA effectively partials out the effects of the covariate from the focal IV as well as the error term).
#Lecture 6
54
Q

Why do we want ANCOVA to adjust treatment means (DV)?

A
if focal IV affects DV scores -> there is a significant difference among treatment means between the levels of the IV
if covariate also differs between levels of focal IV -> which variable explains difference in DV treatment means?     confound!
we care about the effect of the focal IV, not the effect of the covariate
ANCOVA teases apart the effects of the covariate and the IV by asking the question: “would the focal IV have an effect on the DV if all participants were equivalent on the covariate?”
#Lecture 6
55
Q

What is the logical question behind ANCOVA?

A
Would groups differ on the DV if they were equivalent on the covariate?
Note: this refines the error term by subtracting covariate predictable variation, and refines treatment effect to adjust for systematic group differences on covariate
#Lecture 6
56
Q

What are the assumptions of ANCOVA?

A
ANOVA assumptions (homogenous variance, normal distribution, independence of errors)
relationship between covariate and DV is linear
relationship between covariate and DV is linear within each group
relationship between DV and covariate is equal across treatment groups - homogeneity of regression slopes
#Lecture 6
57
Q

What are the two statistics in multiple regression, and what is used to test them?

A

Strength of the overall relationship (R squared), tested by an F test
Importance of individual predictors (b, beta, sr), tested by t-tests
#Lecture 7

58
Q

In bivariate regression, what is the coefficient of determination?

A
The proportion of variance in one variable that is explained by the variance in another
#Lecture 7
59
Q

What does a partial correlation (pr squared) measure?

A
the proportion of residual variance in the criterion uniquely accounted for by predictor 1 [ A / (A+B) ]
#Lecture 7
60
Q

What does a semipartial correlation (spr squared) measure?

A
the proportion of total variance in the criterion UNIQUELY accounted for by predictor 1 [ A / (A+B+C+D) ]
#Lecture 7
61
Q

What is the difference between a semipartial and partial correlation?

A
Semipartial correlations use all of the variance in the DV, whereas partial correlations only use the residual variance.
#Lecture 7
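The residual-based definitions on these cards can be checked numerically with invented scores: pr correlates the residualised DV with the residualised predictor, sr correlates the raw DV with the residualised predictor, so |pr| ≥ |sr| always holds.

```python
import math

# Partial vs semipartial correlation via residuals (invented scores).
y  = [1, 3, 2, 5, 4]   # criterion
x1 = [1, 2, 3, 4, 5]   # focal predictor
x2 = [2, 1, 4, 3, 5]   # predictor being controlled for

def mean(v): return sum(v) / len(v)

def corr(u, v):
    mu, mv = mean(u), mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) *
                    sum((b - mv) ** 2 for b in v))
    return num / den

def residuals(v, ctrl):
    # residuals of v after regressing it on ctrl (bivariate least squares)
    b = sum((c - mean(ctrl)) * (a - mean(v)) for a, c in zip(v, ctrl)) / \
        sum((c - mean(ctrl)) ** 2 for c in ctrl)
    a0 = mean(v) - b * mean(ctrl)
    return [a - (a0 + b * c) for a, c in zip(v, ctrl)]

x1_res = residuals(x1, x2)           # part of x1 not shared with x2
pr = corr(residuals(y, x2), x1_res)  # partial: uses residual DV variance
sr = corr(y, x1_res)                 # semipartial: uses total DV variance
print(pr, sr)
```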
62
Q

What are the structural differences between ANOVAs and Multiple Regressions?

A
MR tests the overall model automatically, but does not test for interactions automatically. MR tests unique effects; ANOVA tests main effects (ANOVA assumes no correlation between IVs)
#Lecture 7
63
Q

What is the principle of parsimony (as it relates to regression)?

A
Predictors should be highly correlated with the criterion (validities) and have low correlations with each other (collinearities).
#Lecture 7
64
Q

How is shared variance for multiple regression calculated?

A
R squared minus the sum of the squared semi-partial correlations.
#Lecture 7
65
Q

What do partial regression coefficients (bs) test?

A
The importance of each predictor in the context of all other predictors; tested by dividing b by its standard error
#Lecture 7
66
Q

What are the assumptions of multiple regression?

A
Distribution of residuals (conditional Y values normally distributed around the regression line; independence of errors; homogeneity of variance)
Scales (variables normally distributed, linear relationship between predictors and criterion, predictors not too highly correlated, continuous scale)
#Lecture 8
67
Q

What is the difference between moderation and mediation?

A

Moderation focuses on the direct X-Y relationship and how Z adjusts it, moderator often uncorrelated with IV
Mediation focuses on indirect relationship of X-Y via M (X causes Y because X causes M which in turn causes Y). Mediator is correlated with IV
#Lecture 8

68
Q

What does MMR stand for?

A
Moderated Multiple Regression
#Lecture 8
69
Q

What are the conditions for mediation in MR?

A
Strong theory
IV predict mediator
IV predicts DV in block 1
Mediator predicts DV in block 2
Coefficient for IV should decrease
Sobel test or bootstrapping analyses should be significant
#Lecture 9
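The Sobel test named in the last condition is simple arithmetic on the two regression paths; a sketch in which every coefficient and standard error is invented for illustration:

```python
import math

# Sobel test for the indirect (mediated) effect a*b.
# a: IV -> mediator path; b: mediator -> DV path (controlling for the IV).
# All coefficients and standard errors below are invented values.
a, se_a = 0.50, 0.10
b, se_b = 0.40, 0.10

indirect = a * b
se_indirect = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
z = indirect / se_indirect
print(z)  # about 3.12; |z| > 1.96 suggests a significant indirect effect
```

Bootstrapping is generally preferred in practice because the Sobel test assumes the indirect effect is normally distributed, which it often is not in small samples.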
70
Q

What is error in a repeated measures design?

A
The interaction of Factor A and Participant, i.e. the changes in the effects of A across participants.
#Lecture 10
71
Q

How do you calculate treatment effect for a participant at a condition in a repeated measures ANOVA? (Same for the mean scores of each condition).

A
The treatment effect of being in the jth condition is equal to the condition mean minus the overall mean (not grand unless using condition sums).
#Lecture 10
72
Q

Why do repeated measures ANOVAs tend to have more power than between participants?

A
Repeated measures accounts for individual differences separately.
#Lecture 10
73
Q
Describe the following definitional formulae for repeated measures ANOVA
Total variability (SS T)
Variability due to factor (SS A)
Variability due to participants (SS P)
Error (SS AxP)
A

Total variability – deviation of each observation from the grand mean: SS_T = Σ (X_ij − GM)²

Variability due to factor – deviation of factor (condition) means from the grand mean: SS_A = n Σ_j (mean_Aj − GM)²

Variability due to participants – deviation of each participant's mean from the grand mean: SS_P = k Σ_i (mean_Pi − GM)²

Error – changes (inconsistencies) in the effect of the factor across participants (TR x P interaction): SS_AxP = SS_T − SS_A − SS_P
#Lecture 10
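These four sums of squares can be verified on a tiny made-up data matrix (3 participants × 2 conditions). Here the factor's effect is identical for every participant, so the A × P error term comes out exactly zero:

```python
# Repeated-measures decomposition: SS_T = SS_A + SS_P + SS_AxP.
# Rows = participants, columns = conditions (made-up scores).
data = [[1, 3],
        [2, 4],
        [3, 5]]
n_p = len(data)      # participants
k = len(data[0])     # conditions

scores = [x for row in data for x in row]
gm = sum(scores) / len(scores)  # grand mean
cond_means = [sum(row[j] for row in data) / n_p for j in range(k)]
part_means = [sum(row) / k for row in data]

ss_t = sum((x - gm) ** 2 for x in scores)
ss_a = n_p * sum((m - gm) ** 2 for m in cond_means)  # factor
ss_p = k * sum((m - gm) ** 2 for m in part_means)    # participants
ss_axp = ss_t - ss_a - ss_p                          # error (A x P)
print(ss_t, ss_a, ss_p, ss_axp)  # 10, 6, 4, 0 here
```

Removing SS_P from the error term is exactly why repeated-measures designs gain power over between-participants designs.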
74
Q

What is the difference in calculation from within participants ANOVA and between participants ANOVA?

A
Only the error term and degrees of freedom change; all other calculations are the same.
#Lecture 10
75
Q

How do simple comparisons for repeated measures designs differ to that of between participants ANOVA?

A

MS error is partialled out, so a separate error term (based on the interaction of the comparison with the participants term) is calculated for each comparison.

76
Q

How do you calculate error terms for omnibus tests in 2 way within participants designs?

A

Each omnibus effect gets its own error term: its interaction with participants (A x P, B x P, and A x B x P)

77
Q

What are the assumptions of the mixed-model approach to ANOVA?

A
Sample randomly drawn from population
DV scores normally distributed in population
compound symmetry (homogeneity of variances in levels of RM factor; homogeneity of covariances)
78
Q

What is compound symmetry?

A

All variances roughly equal (in variance-covariance matrix, T1 T1; T2 T2; and T3 T3 roughly equal)
and covariances are roughly equal (T1 T2; T1 T3; and T2 T3 roughly equal)

79
Q

What does Mauchly’s test of sphericity do?

A

examines overall structure of covariance matrix
determines whether values in the main diagonal (variances) are roughly equal, and if values in the off-diagonal are roughly equal (covariances)
evaluated as 2 – if significant, sphericity assumption is violated

80
Q

When does sphericity matter?

A

In within-participants designs with 3 or more levels; if violated, F ratios are positively biased (towards type-I errors)

81
Q

What is an epsilon adjustment?

A

epsilon is simply a value by which the degrees of freedom for the F-ratio test are multiplied
equal to 1 when the sphericity assumption is met (hence no adjustment), and less than 1 when it is violated (reducing the df and making the test more conservative)

82
Q

What are the three types of epsilon, and what are the differences between them?

A

Lower-bound (acts as if there were only 2 treatment levels with maximum heterogeneity; assumes the worst-case violation of sphericity, so it is very conservative)
Greenhouse-Geisser (size of epsilon depends on the degree to which sphericity is violated, 1 >= epsilon >= 1/(k−1); neither too stringent nor too lax)
Huynh-Feldt epsilon
an adjustment applied to the GG-epsilon
often results in epsilon exceeding 1, in which case it is set to 1
used when the “true value” of epsilon is believed to be >= .75
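Epsilon just rescales both degrees of freedom before the F test is evaluated; a sketch of the arithmetic (k = 4 levels and n = 10 participants are illustrative values):

```python
# Epsilon-adjusted degrees of freedom for a one-way repeated-measures F test.
k, n = 4, 10                   # treatment levels, participants (illustrative)
df_effect = k - 1              # 3
df_error = (k - 1) * (n - 1)   # 27

def adjusted_df(epsilon):
    # both df are multiplied by epsilon; smaller df => larger critical F
    return epsilon * df_effect, epsilon * df_error

print(adjusted_df(1.0))          # sphericity met: (3.0, 27.0), no change
print(adjusted_df(0.75))         # e.g. a Greenhouse-Geisser estimate
print(adjusted_df(1 / (k - 1)))  # lower bound: (1.0, 9.0), most conservative
```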

83
Q

Should you use MANOVA? Why?

A

Generally no: instead of adapting the model to the observed DVs, MANOVA selectively weights or discounts DVs based on how they fit the model.
It is atheoretical and over-capitalises on chance.

84
Q

What is the difference between mixed model ANOVA and mixed ANOVA?

A

Mixed model ANOVA is the normal model of repeated measures ANOVA.
Mixed ANOVA involves a within participants factor and a between participants factor.

85
Q

What are the assumptions of mixed ANOVA?

A

Normally distributed DV
homogeneity of variance: levels of between participants factor; assume within participant factor x participant interactions constant at all levels of between participant factor
variance-covariance matrix is the same at all levels of the between-participants factor
pooled (or average) variance-covariance matrix exhibits compound symmetry (c.f. sphericity)
usual epsilon adjustments apply when within-participants assumptions are violated

86
Q

In mixed ANOVA, how does ‘nesting’ work?

A

each participant is tested in only one group (nesting), but participates in each block (crossing)