Lecture 4- Model Fit Flashcards
b1 estimates what
Parameter for a predictor
- Direction of relationship/effect
- Difference in means
b0 estimates what
The value of the outcome when predictors =0 (intercept)
Sums of squares represent
Total error
Because sums of squares are totals they can only be compared when
They are based on the same number of scores
df=
Degrees of freedom
N - p
Number of scores - number of parameters
What is SST
Total sum of squares
Total variability (variability between scores and the mean)
What is SSR
Residual sum of squares
- Total residual/error variability (variability between the model and the observed data)
- How badly the model fits (in total)
What is SSM
Model sum of squares
- Total model variability (difference in variability between the model and the grand mean)
- How much better the model is at predicting Y than the mean
- How well the model fits (in total)
Each sum of squares has
Associated degrees of freedom
The df is what in relation to SS
The amount of independent information available to compute SS (Sum of squares)
For each parameter estimated we lose
1 piece of independent information
To get SST we estimate
1 parameter
dfT=
N - 1
Number of scores - the 1 parameter estimated (the grand mean)
To get SSR we estimate
2 parameters (b0 and b1), so dfR = N - 2
dfM=
dfT- dfR
The null model and the estimated model are distinguished by
b1, the slope
SST=
SSM + SSR
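The decomposition above can be checked numerically. A minimal sketch with made-up data (the x and y values are purely illustrative), fitting a simple least-squares line and verifying that SST = SSM + SSR:

```python
import numpy as np

# Hypothetical data: one predictor, one outcome (values are illustrative)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Fit y = b0 + b1*x by least squares (np.polyfit returns highest power first)
b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

sst = np.sum((y - y.mean()) ** 2)      # total variability around the mean
ssr = np.sum((y - y_hat) ** 2)         # residual variability around the model
ssm = np.sum((y_hat - y.mean()) ** 2)  # model's improvement over the mean

# For a least-squares fit the decomposition holds exactly
assert np.isclose(sst, ssm + ssr)
```

The assertion passes because least-squares residuals are uncorrelated with the fitted values, which is what makes the three sums of squares add up.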
MS=
SS/df
Mean square = sum of squares / degrees of freedom
Sums of squared errors can’t be compared when based on
Different amounts of information
The average or mean squared error can be computed by
Dividing a SS by the amount of information used to compute it
The df quantifies
The amount of information used to compute a sum of squared errors
The F statistic is the ratio of
MSM to MSR
If the model results in better prediction than using the mean then
MSM should be greater than MSR
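Putting the last few cards together, the F ratio can be computed directly from the sums of squares and their degrees of freedom. A sketch with hypothetical data (values are illustrative; for strongly linear data F comes out much larger than 1):

```python
import numpy as np

# Illustrative data: one predictor, one outcome
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.8, 4.1, 5.9, 8.2, 9.8, 12.1])
n = len(y)

b1, b0 = np.polyfit(x, y, 1)   # model estimates 2 parameters: b0 and b1
y_hat = b0 + b1 * x

ssm = np.sum((y_hat - y.mean()) ** 2)
ssr = np.sum((y - y_hat) ** 2)

df_m = 1                       # dfM = dfT - dfR = (n - 1) - (n - 2)
df_r = n - 2                   # n scores minus 2 estimated parameters
msm = ssm / df_m               # mean squares: SS divided by its df
msr = ssr / df_r
F = msm / msr                  # large F => model beats the mean as a predictor
```

Because these data lie close to a straight line, MSM is far larger than MSR, so F is well above 1.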
What does r2 represent
- The proportion of variance accounted for by the model
- The Pearson correlation coefficient between observed and predicted scores squared
What does adjusted r2 represent
An estimate of r2 in the population (shrinkage)
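Both definitions of r2 from the cards above, plus the shrinkage adjustment, can be illustrated in a few lines. A sketch with made-up data (values are illustrative); the adjusted r2 formula shown is the common Wherry-style adjustment using n scores and k predictors:

```python
import numpy as np

# Hypothetical data: one predictor (k = 1), one outcome
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.2, 2.9, 4.1, 4.8, 6.3, 6.9, 8.2])
n, k = len(y), 1

b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

ssr = np.sum((y - y_hat) ** 2)
sst = np.sum((y - y.mean()) ** 2)

r2 = 1 - ssr / sst                               # proportion of variance accounted for
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)    # shrunk estimate for the population

# r2 also equals the squared Pearson correlation of observed and predicted scores
r = np.corrcoef(y, y_hat)[0, 1]
assert np.isclose(r2, r ** 2)
```

Note that adjusted r2 is always at or below r2 (the "shrinkage"), and the gap grows as predictors are added relative to the sample size.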
How to enter predictors when there is more than one predictor in a model
- Hierarchical
- Forced entry
- Stepwise
What is hierarchical entry of predictors
- Experimenter decides the order in which variables are entered into the model
- Best for theory testing
What is forced entry of predictors
All predictors are entered simultaneously
What is stepwise entry of predictors
- Predictors are selected using their semi-partial correlation with the outcome
- Can produce spurious results
- Use only for exploratory analysis
SST is made up of
SSM and SSR
SSM is worked out by
Comparing the estimated model to the null model
- Line with slope b1 vs. a flat line at the mean