Exam Flashcards
To work out F statistic
MSeffect / MSerror
Between groups MS / Within groups MS
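Worked example with hypothetical numbers (not from the cards): if between groups MS = 24 and within groups MS = 6, then F = 24 / 6 = 4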
If means are the same is there a significant difference?
No
How is something significant?
Below .05 or Below .01
How do you work out how many participants?
Participants per group = (within groups df / number of groups) + 1
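Worked example with hypothetical numbers: with within groups df = 27 and 3 groups, participants per group = 27 / 3 + 1 = 10, so 30 participants in total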
What are a priori contrasts used for?
When the experimenter can make predictions about the means in advance
In a between groups two way ANOVA how many error terms are there?
1
MSaxs
Within subjects error term for the main effect of variable A
MSa
Within subjects main effect of variable A
MSs
Within subjects variability due to subjects (individual differences)
Familywise and per comparison equation
αfw = c(αpc)
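Worked example with hypothetical numbers: with c = 3 comparisons each run at αpc = .05, αfw = 3 × .05 = .15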
What is a family wise error?
The probability of making at least one type 1 error across a family of multiple tests
What is per comparison error?
The type 1 error rate for a single test / comparison
What is ANCOVA
Statistical control of error variability
When experimental control of error not possible
ANCOVA assumptions
Homoscedasticity (equal scatter)
Homogeneity of regression coefficients
No multicollinearity
ANOVA assumptions
Normal distributions
Independence of observations
DV on a ratio / interval scale
IV categorical
Homogeneity of variance
Test reliability is a precursor of
Validity
Vectors set to 90 degrees have what rotation?
Orthogonal
Rotations converged in 3 iterations are
Orthogonal
Types of orthogonal rotation
Varimax
Equamax
Quartimax
Types of oblique rotation
Direct oblimin
Promax
Which regression has a priori sequence of entry?
Hierarchical regression
What is a scree plot?
Eigenvalues on Y axis
Factors on X axis
What is a cross loading?
A loading greater than .4 on more than one factor,
unless the loadings differ by more than .2
What does KR-20 measure?
Internal reliability for measures with dichotomous choices (Yes/No)
Values up to +1.00
What does KR-20 stand for?
Kuder-Richardson (Formula 20)
What is good internal reliability with regard to KR-20?
Anything greater than .7
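For reference (this formula is not on the cards, but is the standard one): KR-20 = [k / (k - 1)] × [1 - Σpq / σ²x], where k = number of items, p = proportion passing an item, q = 1 - p, and σ²x = variance of the total scores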
Three kinds of factorial design
Completely randomised factorial design (each participant receives 1 treatment condition; between groups)
Randomised block factorial design (all treatments given in randomised order within each group)
Mixed factorial design (both between groups and within subjects factors)
Criteria for different populations
At least some of the population means (μ) are different
Mean square equation of error
MSs/ab = SSs/ab / dfs/ab
What is α (alpha)?
The significance level (Sig.)
When is it more important to allow type 1 errors?
When it is important to find new facts
When is it more important to allow type 2 errors?
When it is more important not to clog up the literature with false positives
What is probability (p)?
The probability of obtaining the observed effect (or one more extreme),
assuming the null hypothesis is true
The normal distribution
A mathematical function with 2 population parameters describing the distribution of scores: μ = mean, σ = SD
Different normal distributions are generated whenever
The pop mean or pop SD are different
What is normal distribution used for?
In order to make standardised comparisons
Across different populations and treatments
If shared area of normal distribution large
Populations similar
If shared area of normal distribution small
Populations different
In terms of normal distribution It is mathematically impossible for the shared area to
Ever equal zero
What is chi square distribution used for?
Testing sample and population variance are same / different
F distribution based on
Two different χ² distributions (one for each sample)
If 2 populations are the same then F ratio will be
1
If two populations are different then F ratio will be
More than 1
The F ratio will increase further
the greater the difference between the populations
The F ratio depends on knowing the
Variances for two samples
Degrees of freedom associated with each sample, based on sample sizes
Chi squared and F distributions are used for…
A more statistical approach (rather than the normal distribution)
for looking at treatment populations
Chi squared 2 assumptions
Population normally distributed
Measure taken on interval / ratio scale
F ratio: 2 further assumptions
Homogeneity of variances
Independent measures
The F equation
F = [χ²a / (sa - 1)] / [χ²b / (sb - 1)]
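Worked example with hypothetical numbers: if χ²a = 30 with sa = 16, and χ²b = 20 with sb = 11, then F = (30 / 15) / (20 / 10) = 2 / 2 = 1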
Chi squared equation
χ² = (s - 1) σ²s / σ²p
(σ²s = sample variance, σ²p = population variance)
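Worked example with hypothetical numbers: with a sample of s = 21, sample variance σ²s = 12 and population variance σ²p = 8, χ² = 20 × 12 / 8 = 30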
The F ratio is the ratio of
Two sample based variances
F value observed and critical value meaning
If the observed F value is greater than the critical F value, the result is significant
The general problem with rejecting the null hypothesis
Some portion of the difference can always be attributed to chance factors / error
What are uncontrolled sources of variability in an experiment called?
Experimental errors
Two types of error
Individual differences error
Experimental error
Experimental error is shown by
Within group variability
Two estimates of experimental error are
Independent from each other
But both reflect the same value of experimental error
A systematic source of variability comes from the
Treatment effect
Unsystematic source of variability comes from
Experimental error of subjects and measurement
When population means are not equal this is the result of the …
Treatment effects
When population means are equal, the observed differences reflect
Experimental error alone
Sum of squares equation
SStotal = SSwithin + SSbetween
Basic ratio of variance
A score or sum is squared, then divided by the number of observations that contribute to that score or sum
For the purpose of ANOVA, variance is defined as
variance = SS / df
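Worked example with hypothetical numbers: if SSbetween = 40 and SSwithin = 80, then SStotal = 120; with df = 24 for the total, variance = 120 / 24 = 5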
SSt is the
Total sum of squares
SSa is the
Between group sum of squares
SSs/a is the
Within group sum of squares
If null hypothesis true, ratio of between groups and within groups variability equal to
1
Partitioning the variability means
Subdividing total deviation
Total deviation equation
AS - T
Between groups deviation equation
A - T
Within groups deviation equation
AS - A
What do the parts of the deviation equation mean?
T = grand mean, A = group mean, AS = individual score; AS - T = total deviation, A - T = between groups deviation, AS - A = within groups deviation
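Worked example with hypothetical numbers: if T = 10, A = 12 and AS = 15, then total deviation = 15 - 10 = 5, between groups deviation = 12 - 10 = 2, within groups deviation = 15 - 12 = 3 (and 5 = 2 + 3)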
Transforming the data reduces the chances of making which error?
Type 2 error
If an ANOVA assumption is broken it fails gracefully, e.g. it may
Miss real effects (Type 2 error)
But does not increase chances of making Type 1 error
Transformation of positive and negative skews
Moderate, substantial, or severe:
Moderate - square root
Substantial - logarithm
Severe - reciprocal
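A minimal code sketch of the three transformations (hypothetical data, assuming Python with numpy; not part of the original cards):

import numpy as np

scores = np.array([1.0, 2.0, 4.0, 9.0, 25.0])  # positively skewed example data

moderate = np.sqrt(scores)      # moderate skew: square root
substantial = np.log10(scores)  # substantial skew: logarithm
severe = 1.0 / scores           # severe skew: reciprocal

# For negative skew, reflect the scores first, e.g. np.sqrt(scores.max() + 1 - scores)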
Designs which include multiple IVs are called
Factorial designs
For a 2 way between groups design
an F ratio is calculated for:
Main effect of IV 1
Main effect of IV 2
Interaction effect of IV 1 and 2
F ratio for main effect A
MSa / MSs/ab
F ratio for main effect of variable B
MSb / MSs/ab
F ratio for the interaction of A and B
MSab / MSs/ab
Mean square equation for main effect A
MSa = SSa / dfa
Mean square equation for main effect B
MSb = SSb / dfb
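Worked example with hypothetical numbers for a 2 × 3 between groups design: if SSa = 40 with dfa = 1, then MSa = 40; if SSs/ab = 120 with dfs/ab = 30, then MSs/ab = 4, giving Fa = MSa / MSs/ab = 40 / 4 = 10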
Within subjects
F ratio for main effect
Fa = MSa / MSaxs
Within subjects
F ratio for subject variables
Fs = MSs / MSaxs
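Worked example with hypothetical numbers: if MSa = 30 and MSaxs = 5, then Fa = 30 / 5 = 6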
A significant main effect of subject variable is a problem when
Specific predictions are made about performance
When there is a hidden aptitude-treatment interaction
Within subjects additional assumptions
Sphericity
Homogeneity of treatment-difference variances
Compound symmetry
No need to test for sphericity of IV if it only has
2 levels
What do we want from Mauchly's test of sphericity?
For the value to be non-significant
(indicating homogeneity of variance of the treatment differences)
With mixed designs, what is there no such thing as?
A one factor mixed design
What test for homogeneity of variance is used for split plot?
Box’s M
What does Box's M test assume?
Normality (Box's M is very sensitive to departures from normality)
How do we achieve control in experimentation?
Randomisation
Mean square equation for interaction
MSab = SSab / dfab
Each sum of squares is calculated by combining two quantities called
Basic ratios
Degrees of freedom are the
Number of observations that are free to vary
When we already know something about those observations
Actual variance estimates are called
Mean squares
If means are the same there is no
Significant difference
If only 2 levels of a factor then
No analytical comparisons are required
T- tests can be performed instead
Definition of interaction effect
The effect of one IV on the means differs across the levels of the other IV