Lecture 4 - Model Fit Flashcards

1
Q

What does b1 estimate

A

Parameter for a predictor

  • Direction of relationship/effect
  • Difference in means
2
Q

What does b0 estimate

A

The value of the outcome when all predictors = 0 (the intercept)

3
Q

Sums of squares represent

A

Total error

4
Q

Because sums of squares are totals, they can only be compared when

A

They are based on the same number of scores

5
Q

df=

Degrees of freedom

A

N - p

Number of scores - number of parameters

6
Q

What is SST

Total sum of squares

A

Total variability (variability between scores and the mean)

7
Q

What is SSR

Residual sum of squares

A
  • Total residual/error variability (variability between the model and the observed data)
  • How badly the model fits (in total)
8
Q

What is SSM

Model sum of squares

A
  • Total model variability (difference in variability between the model and the grand mean)
  • How much better the model is at predicting Y than the mean
  • How well the model fits (in total)
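The three sums of squares above can be checked numerically. A minimal sketch (hypothetical data, one predictor), showing that SST partitions into SSM + SSR:

```python
import numpy as np

# Hypothetical data: a small sample of outcome scores Y and one predictor X.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 3.0, 5.0, 4.0, 6.0])

# Fit the linear model Y = b0 + b1*X by least squares.
b1, b0 = np.polyfit(X, Y, 1)
Y_hat = b0 + b1 * X

SST = np.sum((Y - Y.mean()) ** 2)      # total: scores vs. the grand mean
SSR = np.sum((Y - Y_hat) ** 2)         # residual: scores vs. the model
SSM = np.sum((Y_hat - Y.mean()) ** 2)  # model: model vs. the grand mean

# For a least-squares fit, SST = SSM + SSR.
assert np.isclose(SST, SSM + SSR)
```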
9
Q

Each sum of squares has

A

Associated degrees of freedom

10
Q

What is the df in relation to SS

A

The amount of independent information available to compute SS (Sum of squares)

11
Q

For each parameter estimated we lose

A

1 piece of independent information

12
Q

To get SST we estimate

A

1 parameter (the mean)

13
Q

dfT=

A

N - 1

Number of pieces of info - the 1 parameter estimated (the mean)

14
Q

To get SSR we estimate

A

2 parameters (b0 and b1)

15
Q

dfM=

A

dfT - dfR

16
Q

The null model and the estimated model are distinguished by

A

b1, the slope

17
Q

SST=

A

SSM + SSR

18
Q

MS=

A

SS/df

Mean square = sum of squares / degrees of freedom

19
Q

Sums of squared errors can't be compared when they are based on

A

Different amounts of information

20
Q

The average or mean squared error can be computed by

A

Dividing an SS by the amount of information used to compute it

21
Q

The df quantifies

A

The amount of information used to compute a sum of squared errors

22
Q

The F statistic is the ratio of

A

MSM to MSR

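Both mean squares come straight from an SS divided by its df. A minimal sketch with hypothetical data (N = 5 scores, one predictor):

```python
import numpy as np

# Hypothetical data: N scores of outcome Y and one predictor X.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
N = len(Y)

# Fit Y = b0 + b1*X by least squares.
b1, b0 = np.polyfit(X, Y, 1)
Y_hat = b0 + b1 * X

SSM = np.sum((Y_hat - Y.mean()) ** 2)  # model sum of squares
SSR = np.sum((Y - Y_hat) ** 2)         # residual sum of squares

df_M = 1      # dfT - dfR = (N - 1) - (N - 2) for one predictor
df_R = N - 2  # two parameters estimated (b0 and b1)

MSM = SSM / df_M  # mean squares: each SS divided by its df
MSR = SSR / df_R
F = MSM / MSR     # F well above 1: the model predicts better than the mean
```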
23
Q

If the model results in better prediction than using the mean, then

A

MSM should be greater than MSR

24
Q

What does r2 represent

A
  • The proportion of variance accounted for by the model
  • The squared Pearson correlation between observed and predicted scores

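Both descriptions give the same number, which can be verified with a short sketch on hypothetical data:

```python
import numpy as np

# Hypothetical data: outcome Y and one predictor X.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 3.0, 5.0, 4.0, 6.0])

b1, b0 = np.polyfit(X, Y, 1)
Y_hat = b0 + b1 * X

SST = np.sum((Y - Y.mean()) ** 2)
SSM = np.sum((Y_hat - Y.mean()) ** 2)

r2_variance = SSM / SST                     # proportion of variance accounted for
r2_corr = np.corrcoef(Y, Y_hat)[0, 1] ** 2  # squared r(observed, predicted)

# The two routes to r2 agree (0.81 for this data).
assert np.isclose(r2_variance, r2_corr)
```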
25
Q

What does adjusted r2 represent

A

An estimate of r2 in the population (it adjusts for shrinkage)

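One common way to compute the adjustment is Wherry's formula; a minimal sketch with hypothetical values (a sample r2 from N scores and p predictors):

```python
# Wherry's formula for adjusted r2 (one common choice):
#   adj_r2 = 1 - (1 - r2) * (N - 1) / (N - p - 1)
# Hypothetical values: r2 = 0.81 from N = 5 scores with p = 1 predictor.
r2, N, p = 0.81, 5, 1
adj_r2 = 1 - (1 - r2) * (N - 1) / (N - p - 1)

# adj_r2 < r2: the population estimate shrinks relative to the sample r2.
assert adj_r2 < r2
```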
26
Q

How can predictors be entered when there is more than one predictor in a model

A
  • Hierarchical
  • Forced entry
  • Stepwise
27
Q

What is hierarchical entry of predictors

A
  • Experimenter decides the order in which variables are entered into the model
  • Best for theory testing
28
Q

What is forced entry of predictors

A

All predictors are entered simultaneously

29
Q

What is stepwise entry of predictors

A
  • Predictors are selected using their semi-partial correlation with the outcome
  • Can produce spurious results
  • Use only for exploratory analysis
30
Q

SST is made up of

A

SSM and SSR

31
Q

SSM is worked out by

A

Comparing the model to the null model

Sloped line vs. flat line (the mean)