Field Ch. 20 - Logistic Regression Flashcards

1
Q

Why use Logistic Regression?

A

to predict binary categorical outcomes

from categorical & continuous predictors

2
Q

What is the possible range of model outcome values?

A

values between 0 (outcome will not occur) and 1 (outcome will occur)

ex: 0.45
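
These probabilities come from the logistic function of the linear model (the form given in Field's chapter):

```latex
P(Y) = \frac{1}{1 + e^{-(b_0 + b_1 X_1 + \cdots + b_n X_n)}}
```

Whatever value the linear part takes, the function squashes it into the range between 0 and 1; e.g. a logit of -0.2 gives P(Y) ≈ 0.45.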

3
Q

R and R^2 tell us what?

A
  • express model fit
  • pos value = as predictor goes up, likelihood of outcome occurrence goes up
  • neg value = as predictor goes up, likelihood of outcome occurrence goes down

4
Q

What does the Wald statistic do?

A

determines significance of individual predictors
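
For reference, the Wald statistic SPSS reports is the squared ratio of each coefficient to its standard error, tested against a chi-square distribution:

```latex
\text{Wald} = \left(\frac{b}{SE_b}\right)^2
```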

5
Q

Odds ratio tells us what?

A

the change in the odds of the outcome resulting from a one-unit change in the predictor
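
Concretely, the odds ratio is the exponentiated coefficient, so a quick worked example (the b value here is hypothetical):

```latex
\text{odds ratio} = e^{b}, \qquad e^{0.69} \approx 2.0
```

i.e. a coefficient of 0.69 would mean the odds of the outcome roughly double for each one-unit increase in the predictor.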

6
Q

What to look for before analysis?

A
  • missing data (predictor & outcome): bad
  • complete separation (a graph with no overlap on the x-axis, i.e. a predictor that perfectly separates the two outcome groups): bad

7
Q

6 assumptions for binary logistic regression (BLR)

A
  1. binary outcome
  2. independent observations
  3. no multicollinearity (run a correlation matrix; Pearson’s r below 0.7; see the sketch after this list)
  4. no extreme outliers
  5. linear relationship between the independent variables and the logit of the dependent variable (Eric did not discuss)
  6. large sample size
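
A minimal sketch of the assumption-3 check in Python with pandas, for anyone working outside SPSS (the data frame and predictor names are hypothetical):

```python
import pandas as pd

# hypothetical predictors from the model
df = pd.DataFrame({
    "age":     [23, 45, 31, 52, 38, 27],
    "income":  [31, 62, 45, 70, 55, 40],
    "anxiety": [12,  8, 15,  6,  9, 14],
})

# Pearson correlation matrix of the predictors
corr = df.corr(method="pearson")
print(corr)

# flag any off-diagonal pair with |r| >= 0.7 as potential multicollinearity
print((corr.abs() >= 0.7) & (corr.abs() < 1.0))
```
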
8
Q

SPSS step 1

A

Analyze > Regression > Binary Logistic

  • enter independent variables in blocks
  • assign reference group (‘Categorical’ button)

(less about memorizing and more about familiarity with boxes to check during analysis)
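
For that familiarity, here is the same kind of model fitted outside SPSS, as a hedged Python sketch using statsmodels (all variable names and data are hypothetical; the outcome must be coded 0/1):

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical data: binary outcome, one continuous and one dummy-coded predictor
df = pd.DataFrame({
    "outcome":   [0, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "anxiety":   [2, 4, 5, 8, 6, 9, 7, 3, 10, 5],
    "treatment": [0, 1, 0, 1, 0, 1, 1, 1, 0, 0],
})

# add_constant mirrors SPSS's 'include constant in model' option
X = sm.add_constant(df[["anxiety", "treatment"]])
model = sm.Logit(df["outcome"], X).fit()
print(model.summary())  # coefficients, SEs, z statistics, CIs
```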

9
Q

SPSS step 2: ‘Save’ button

A

select:

  • predicted probabilities
  • group membership
  • Cook’s distance (should be less than 1)
  • standardized residuals (few above 1.96, none > 3)
  • include covariance matrix
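
A rough Python counterpart to these saved diagnostics, sketched with a statsmodels GLM fit so the residuals and influence measures are easy to pull out (data hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "outcome": [0, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "anxiety": [2, 4, 5, 8, 6, 9, 7, 3, 10, 5],
})

X = sm.add_constant(df[["anxiety"]])
fit = sm.GLM(df["outcome"], X, family=sm.families.Binomial()).fit()

# predicted probabilities and predicted group membership
df["prob"] = fit.fittedvalues
df["group"] = (df["prob"] >= 0.5).astype(int)

# standardized (Pearson) residuals: few should exceed 1.96, none 3
resid = fit.resid_pearson
print((np.abs(resid) > 1.96).sum(), (np.abs(resid) > 3).sum())

# Cook's distance (first element of the tuple statsmodels returns): all below 1
cooks = fit.get_influence().cooks_distance[0]
print(cooks.max())
```
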
10
Q

SPSS step 3: ‘Options’ button

A

select:

  • Hosmer-Lemeshow
  • CI @ 95%
  • include constant in model

(less about memorizing and more about familiarity with boxes to check during analysis)

11
Q

SPSS: How to transform variables

A

Transform > Recode into Different Variables

  • move old variable to center block
  • type new variable name, ‘Change’

Old and New Values

  • enter old and new values, ‘Add’
  • ‘Continue’
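
The pandas counterpart of this recode is a simple map into a new column, leaving the original variable intact (the variable and codings are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"smoker": [1, 2, 2, 1, 2]})  # hypothetical: 1 = yes, 2 = no

# 'Recode into Different Variables': write the new values to a new column
df["smoker_binary"] = df["smoker"].map({1: 1, 2: 0})
print(df)
```
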
12
Q

Reporting results: where do you find X^2?

A

Omnibus table

X^2 (df, N) = ###, p < Sig

  • X^2 (value)
  • df (number)
  • model significance (p value)
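
Put together, a write-up might read (all numbers hypothetical): X^2(2, N = 150) = 12.48, p = .002
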
13
Q

Reporting results: where do you find odds ratio?

A

‘Variables in the Equation’ table

  • Exp (B)
  • remember your reference group (set in the ‘Categorical’ menu)
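
With a statsmodels fit like the sketch in card 8, Exp(B) and its confidence interval are just the exponentiated coefficients and CI bounds:

```python
import numpy as np

# odds ratios: SPSS's Exp(B) column
print(np.exp(model.params))

# 95% CIs for the odds ratios
print(np.exp(model.conf_int()))
```
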
14
Q

Reporting results: where can you find R^2 values?

A

‘Model Summary’ table

  • Cox & Snell -or- Nagelkerke
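
For reference, both are computed from the log-likelihoods of the baseline (constant-only) and fitted models, with n the sample size (standard definitions):

```latex
R^2_{CS} = 1 - e^{\frac{2}{n}\left(LL(\text{baseline}) - LL(\text{new})\right)},
\qquad
R^2_{N} = \frac{R^2_{CS}}{1 - e^{\frac{2}{n}\,LL(\text{baseline})}}
```
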
15
Q

Reporting results: what is Exp(B)?

A

Odds ratio

16
Q

What does X^2 tell us?

A

the difference between the observed frequencies and the frequencies you would expect if no effect existed
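
The statistic itself, in its standard form:

```latex
\chi^2 = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}}
```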

17
Q

What does Cook’s D(istance) tell us?

A
  • the influence of a data point
  • should be less than 1

18
Q

What does Hosmer-Lemeshow tell us?

A
  • R^2
  • another model fit indicator (others are Cox & Snell, Nagelkerke)

19
Q

What do standardized residuals tell us?

A
  • strength of the difference between observed and expected values
  • few > 1.96
  • none > 3
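
For a binary outcome, the standardized (Pearson) residual for case i is the raw residual scaled by its standard error under the model, with p-hat the predicted probability:

```latex
z_i = \frac{y_i - \hat{p}_i}{\sqrt{\hat{p}_i\,(1 - \hat{p}_i)}}
```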