Logistic Regression Flashcards
How do we fix the problem of using categorical data?
We predict the probability of Y, not just Y itself.
What do we have to do next when we are predicting the probability of Y?
We have to linearise the S-shaped curve so that we have a straight line instead of a curve.
This transformation changes how we interpret the results.
What does ML stand for?
Maximum likelihood.
What is Y in the logistic regression equation?
It is Log(Odds), i.e. log(p / (1 − p)).
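The logit transformation can be sketched in a few lines of Python (the probability values below are purely illustrative):

```python
import math

def logit(p):
    """Transform a probability into Log(Odds) = log(p / (1 - p))."""
    return math.log(p / (1 - p))

# p = 0.5 gives even odds, so log(1) = 0; larger p gives positive Log(Odds)
print(logit(0.5))            # 0.0
print(round(logit(0.8), 3))  # log(4) ~ 1.386
```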
How do you conduct a logistic regression in SPSS?
We don’t need to know this for the exam.
What is the first step in interpreting the SPSS results?
We check the null model (block 0).
What is the null model?
It is the model without any predictors included.
This baseline has to be beaten once we include the predictors.
What are the two Pseudo R-squareds that we use for the model fit, and what are their ranges?
Cox & Snell - ranges from 0 to about 0.75 (it cannot reach 1).
Nagelkerke - ranges from 0 to 1.
Both indicate how well you can predict the DV based on the IVs.
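Both pseudo R-squareds can be computed from the log-likelihoods of the null model and the full model. A minimal sketch, using hypothetical log-likelihood values:

```python
import math

def cox_snell_r2(ll_null, ll_model, n):
    """Cox & Snell pseudo R^2; its maximum is below 1 (around 0.75 for a binary DV)."""
    return 1 - math.exp(2 * (ll_null - ll_model) / n)

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke rescales Cox & Snell so that the maximum is exactly 1."""
    max_cs = 1 - math.exp(2 * ll_null / n)
    return cox_snell_r2(ll_null, ll_model, n) / max_cs

# hypothetical log-likelihoods for a sample of n = 100
print(round(cox_snell_r2(-68.3, -54.6, 100), 3))
print(round(nagelkerke_r2(-68.3, -54.6, 100), 3))
```

Nagelkerke is always at least as large as Cox & Snell, which is why SPSS reports both.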
What is the other model fit we can use and how do we know if the model fits or not?
Goodness of fit - Hosmer-Lemeshow.
The p-value must be non-significant (p > .05); a significant result means the model does not fit the data well.
What must Block 1 overall percentage be for the results to be significant?
It must be larger than the Block 0 overall percentage.
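The overall percentages are just classification accuracy. A small sketch with made-up outcomes, where Block 0 predicts the majority class for everyone and the (hypothetical) Block 1 model should beat it:

```python
def overall_percentage(actual, predicted):
    """Percentage of cases classified correctly."""
    hits = sum(a == p for a, p in zip(actual, predicted))
    return 100 * hits / len(actual)

actual = [1, 1, 1, 0, 0, 1, 0, 1]
block0 = [1] * 8                    # null model: majority class for everyone
block1 = [1, 1, 0, 0, 0, 1, 0, 1]   # hypothetical predictions from the full model

print(overall_percentage(actual, block0))  # 62.5
print(overall_percentage(actual, block1))  # 87.5
```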
How do we test for multicollinearity in logistic regression?
We can’t do it directly; we have to rerun the same model as a linear regression and check the VIF values there.
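Outside SPSS, the same VIF check can be sketched directly: regress each predictor on the others and compute VIF = 1 / (1 − R²). The data below are simulated, and a common rule of thumb flags VIF values above about 10:

```python
import numpy as np

def vif(X, j):
    """VIF for column j: regress it on the other columns; VIF = 1 / (1 - R^2)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    others = np.column_stack([np.ones(len(y)), others])  # add an intercept
    beta, *_ = np.linalg.lstsq(others, y, rcond=None)
    resid = y - others @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)                  # independent predictor
X = np.column_stack([x1, x2, x3])

print(round(vif(X, 0), 1))  # large: x1 and x2 are collinear
print(round(vif(X, 2), 1))  # near 1: x3 is unrelated to the others
```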
How do we test for linearity in logistic regression?
Compute the natural log of each continuous predictor and test the interaction of the predictor with its own log (the Box-Tidwell approach).
If the interaction is significant, there is a linearity problem.
Look at the significance of the predictor-by-log-transformed-predictor term in the table.
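Constructing the interaction term can be sketched as below (the `age` values are hypothetical; the predictor must be strictly positive for the log to exist). You would add this column to the model and inspect its p-value:

```python
import math

# Box-Tidwell style check (sketch): build x * ln(x) as an extra predictor,
# refit the logistic model with it, and look at its significance.
age = [23, 35, 41, 52, 60]                      # hypothetical predictor values
age_by_ln_age = [x * math.log(x) for x in age]  # interaction term to add

print([round(v, 2) for v in age_by_ln_age])
```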
What is B in the SPSS table?
The coefficient, expressed in Log(Odds) units.
How do we interpret B?
It is the change in Log(Odds) for a one-unit increase in our predictor variable.
This is not very intuitive.
What should we interpret instead of B?
It is better to interpret the results as probabilities, by converting the Log(Odds) back through the logistic function.
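Converting a coefficient back into an odds ratio and a predicted probability can be sketched as follows (the coefficients b0 and b1 are made up for illustration):

```python
import math

# Hypothetical SPSS output: intercept b0 and one predictor coefficient b1.
b0, b1 = -4.0, 0.1

odds_ratio = math.exp(b1)  # Exp(B): multiplicative change in odds per unit of x

def predicted_probability(x):
    """Convert Log(Odds) back to a probability via the logistic function."""
    log_odds = b0 + b1 * x
    return 1 / (1 + math.exp(-log_odds))

print(round(odds_ratio, 3))                 # 1.105: odds multiply by ~1.1 per unit
print(round(predicted_probability(50), 3))  # 0.731
```

The odds ratio Exp(B) is constant across the scale, but the change in probability is not, which is why probabilities are usually reported at specific predictor values.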