Biostats test 4 Flashcards
What is binary logistic regression
prediction of a binary-valued DV on the basis of other variables, so response variable is not continuous, but binary-valued, as in:
yes/no, has/does not have, alive/dead, increased/decreased
values of outcome variable
failure (coded 0) or success (coded 1)
success means “has the property”
what do we try to predict with binary logistic regression
the probability of success as a function of covariates
p(success) = f(cov1, cov2, …)
Assumptions of binary logistic
- categories must be mutually exclusive (no overlap) and collectively exhaustive (all cases can be assigned)
- if so, for all cases, success or failure can be coded in the data
in logistic regression, for predicted probabilities to be meaningful, they must…
lie between the values of 0 and 1
Link function
transforms the dependent variable so outcome range can be restricted to be between 0 and 1
Similarities of BLR with OLS
- model building and its issues (collinearity, order of entry, influential cases)
Dissimilarities of BLR with OLS
- DV is binary (categorical), not continuous
- Interpretation of coefficients
- Assessment of model fit/quality of obtained model
How do we ensure the restricted [0,1] outcome range for the predicted values
- link function (logit) is used to relate the linear model part to the outcome variable
- it transforms the predicted values so that the outcomes are restrained to fall in the meaningful 0 to 1 range
- Regression techniques that make use of some kind of link function are called Generalized Linear Models (GLM)
The logit function
- Natural logarithm of odds
- Logit = ln(odds) = ln(y hat / (1 - y hat))
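A quick numeric illustration (a minimal sketch in Python; the values are made up):

```python
import numpy as np

p = 0.8                            # predicted probability of success
odds = p / (1 - p)                 # 4.0
logit = np.log(odds)               # ln(4) ≈ 1.386
p_back = 1 / (1 + np.exp(-logit))  # inverse logit recovers 0.8
print(odds, logit, p_back)
```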
The logistic regression model when combined with the logit (link) function
ln(y hat / (1 - y hat)) = b0 + b1X1 + b2X2 + …
So the difference with OLS model is on the dependent variable side of the model
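A minimal sketch of fitting such a model in Python with statsmodels, on simulated (hypothetical) data; the variable names and coefficient values are made up:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=200)                  # hypothetical continuous covariate
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))   # 'true' model, used only to simulate
y = rng.binomial(1, p)                    # binary DV: 0 = failure, 1 = success

X = sm.add_constant(x)                    # adds the intercept (b0) column
fit = sm.Logit(y, X).fit()
print(fit.params)                         # b0, b1 on the log-odds (logit) scale
print(np.exp(fit.params))                 # odds ratios, Exp(B)
print(fit.pvalues)                        # Wald test p-values
```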
In Logistic Regression, rather than looking at the B coefficients themselves, we look at
- ODDS RATIO = e to the power of b, where e is the base of the natural log ln
- change in the odds (of success) for a one-unit change in the predictor
Wald test
Tests whether a coefficient differs from 0 (i.e., whether the odds ratio differs from 1) and gives its p-value
Under H0, the odds ratio is
- Odds ratio = 1
What if we want to calculate the odds ratio for a non-unit size
multiply the regression coefficient by the step size before raising e to the power of the result: OR = e^(b × size)
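For example (a sketch with a made-up coefficient):

```python
import numpy as np

b = 0.1                # hypothetical coefficient per 1-unit change
print(np.exp(b))       # odds ratio for a 1-unit change ≈ 1.11
print(np.exp(b * 10))  # odds ratio for a 10-unit change ≈ 2.72
```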
Multiplicative effect
The combined effect of the predictors on the DV is the product of their separate effects: odds ratios multiply in binary logistic regression, whereas effects add up in ordinary linear regression
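A two-line check of this multiplicative behaviour (made-up coefficients):

```python
import numpy as np

b1, b2 = 0.4, 0.9               # hypothetical coefficients
print(np.exp(b1 + b2))          # combined effect on the odds ≈ 3.67
print(np.exp(b1) * np.exp(b2))  # same value: odds ratios multiply
```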
From probabilities to classification - what is classification of cases based on?
The predicted probabilities for success. The default setting for classification as ‘success’ is a predicted probability for success > .50.
confusion matrix
the confusion matrix in your output gives predicted versus observed successes and failures, based on the cut-value (always think of a 2x2 table).
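A minimal sketch in Python with scikit-learn (hypothetical probabilities and outcomes):

```python
from sklearn.metrics import confusion_matrix

p_hat = [0.2, 0.7, 0.9, 0.4, 0.6]                 # predicted P(success)
observed = [0, 1, 1, 1, 0]                        # observed outcomes
predicted = [1 if p > 0.5 else 0 for p in p_hat]  # default cut value .50
print(confusion_matrix(observed, predicted))      # 2x2: observed vs predicted
```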
Classification errors
false positives and false negatives
The null model
starting point in the model building, a model without any actual predictors
if you have to predict without knowing anything about the predictors, the LARGEST CATEGORY WINS!
How to decide which covariates to include
in the end, need to classify cases as success or fail cases - two groups
Suppose you have a continuous covariate and the success and fail groups differ in mean → the covariate may help to classify cases as success or fail
Suppose you have categorical covariates, and success and fail groups have different distributions. Chi-square test of homogeneity will be significant → categorical covariate may help to classify as success or fail
Receiver operating characteristic curve
a plot that illustrates the performance of a binary classifier as its discrimination threshold (cut value) is varied
plots the true positive rate (sensitivity) against the false positive rate (1 - specificity) at various cut-value settings
lets you see how different cut-value settings affect the classification results of your classifier
lenient threshold (low cut value): sensitivity is up!
strict threshold (high cut value): specificity is up!
Optimum typically: combination of high sensitivity and high specificity
A lenient (low) cut value leads to
higher sensitivity! Success easily detected (at the expense of increased false positives)
A strict (high) cut value leads to
higher specificity! Good at keeping false positives down, but less sensitive
ROC curve
area under curve is overall indicator of diagnostic accuracy
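A minimal sketch with scikit-learn (hypothetical data), giving one (FPR, TPR) point per cut value plus the AUC:

```python
from sklearn.metrics import roc_curve, roc_auc_score

observed = [0, 0, 1, 1, 0, 1, 1, 0]
p_hat = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7]

fpr, tpr, thresholds = roc_curve(observed, p_hat)  # one point per cut value
print(list(zip(thresholds, tpr, fpr)))
print(roc_auc_score(observed, p_hat))              # area under the ROC curve
```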
what does P/(1-P) in the logistic regression model represent
It represents the ODDS of the event happening
what data does the logistic regression model use to calculate probabilities (e.g., in the height and gender example), and how does it use it
Data it uses:
- predictor variable (x)
- binary dependent variable
How it uses it:
- during training, the model fits the data to the equation logit = B0 + B1X, estimating B0 and B1 (how X relates to Y)
- it then inputs the predictor into the fitted equation to calculate the log odds, and converts these into probabilities using the inverse-logit (logistic) transformation
- uses the cutoff to classify the observation into one of the two outcomes
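A minimal sketch of that pipeline for the height-and-gender example; the fitted coefficients b0 and b1 here are made up:

```python
import numpy as np

b0, b1 = -40.0, 0.24              # hypothetical fitted coefficients (logit scale)
height = 175                      # hypothetical new case, in cm

logit = b0 + b1 * height          # log odds: -40 + 0.24 * 175 = 2.0
p = 1 / (1 + np.exp(-logit))      # inverse logit -> P(success) ≈ 0.88
prediction = 1 if p > 0.5 else 0  # classify with the .50 cutoff
print(p, prediction)
```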
survival analysis is about
analyzing time-to-event data
examples of descriptive research questions that would warrant survival data
median survival time in clinical studies
how does the probability for an event change over time?
examples of inferential research questions that would warrant survival data
- What variables explain the time to event best?
- Do they shorten or lengthen the expected time?
- By how much?
Mortality
Subjects are lost to follow-up (drop out) before the end of the study, so you do not know if/when the end point was ever reached. This is one of the reasons logistic regression cannot be used for time-to-event data.
censored data
incomplete time-to-event data: only partial information is available about when, if ever, the endpoint was reached. Survival analysis was developed to take censored data into account.
right censored cases
event not experienced by the end of the study; it might or might not occur later, we don't know
dropout cases are right-censored too
left censored cases
event occurs during the study, but the starting point lies before the start of the study, so the exact time interval is not known
interval censored cases
event occurs during study, but not exactly known when
non-censored data
start time is known, end point of interest reached during study
non-informative censoring
in survival regression, we assume that censoring is not related to the likelihood of developing the event of interest, and that subjects whose data are censored would have the same distribution of times to event had they actually been observed
hazard rate
instantaneous risk: the probability that, given survival up to time t, the event will be experienced in the next small time interval t + Δt
survival rate
cumulative probability of the non-event for a certain amount of time after some starting point
median survival time
the time point by which 50% of the cases have experienced the event (the halfway point of the survival curve)
Describing survival data using life tables
- Break down range of survival times into smaller time slices
- Tabulate the counts of all relevant events (including censored data) per time slice
- Probabilities for ‘at risk’, of ‘dying’, and of ‘surviving’ can be computed on the basis of these counts
- look at cumulative survival data per time slice
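A minimal sketch using the lifelines library (hypothetical data); note that survival_table_from_events tabulates at each observed time point rather than at fixed slices:

```python
from lifelines.utils import survival_table_from_events

durations = [2, 3, 3, 5, 6, 8, 9, 9]  # hypothetical survival times
observed = [1, 1, 0, 1, 0, 1, 1, 0]   # 1 = event occurred, 0 = censored
# columns include events observed, censored cases, and number at risk
print(survival_table_from_events(durations, observed))
```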
Kaplan-Meier life table
- every case has its own row in data file
- For each case, information about the survival time and censoring is entered
- The resulting curves are smoother than for grouped life tables, as time is specified per subject, not per (fixed) time slice
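A minimal Kaplan-Meier sketch with lifelines (same hypothetical data as above):

```python
from lifelines import KaplanMeierFitter

durations = [2, 3, 3, 5, 6, 8, 9, 9]  # hypothetical survival times
observed = [1, 1, 0, 1, 0, 1, 1, 0]   # 1 = event occurred, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)
print(kmf.survival_function_)     # cumulative survival at subject-level times
print(kmf.median_survival_time_)  # median survival time
```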
How to get around confounders playing a role in survival regression
If confounders play a role, life tables can present a misleading picture.
However, you can create survival curves for different categories and compare these.
what is the log rank test used for
comparing survival times for different groups
how does the log rank test work
it computes scaled differences between the observed and expected numbers of events per time slice, which are then combined
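A minimal sketch with lifelines (hypothetical survival times for two groups):

```python
from lifelines.statistics import logrank_test

result = logrank_test(
    durations_A=[3, 5, 7, 9], durations_B=[2, 2, 4, 6],
    event_observed_A=[1, 1, 0, 1], event_observed_B=[1, 1, 1, 0],
)
print(result.p_value)  # H0: both groups share the same survival curve
```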
what is cox proportional hazard regression used for
determining whether there is a significant relationship between one or more covariates and hazard, quantifying and testing these relationships, and generating a prognosis curve
dependent variable of the cox proportional hazard regression
ln(hazard)
prognosis curve
a predicted survival curve customized for any specific combination of covariate values
Building cox regression model
- a baseline curve is estimated at the mean values of all covariates
- model coefficients are determined (if Exp(b) > 1, i.e., b > 0, hazard is up and survival is down)
what is h0 in
ln(h) = ln(h0) + b1X1 + b2X2 + … + bkXk
baseline hazard - overall shape of curve
calculating h from ln(h) = ln(h0) + b1X1 + b2X2 + … + bkXk
h = h0 × e^(b1X1 + b2X2 + … + bkXk)
So the hazard is the baseline hazard multiplied by the exponential of the model part
hazard ratio (relative risk) explanation and interpretation
Exp(b) or e to the power of b
relative risk of experiencing the event (e.g., failure, death) between two groups or for a one-unit increase in a predictor.
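A minimal Cox regression sketch with lifelines (hypothetical data; the column names are made up):

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time": [5, 6, 6, 2, 4, 9, 11, 3],        # time to event / censoring
    "event": [1, 0, 1, 1, 1, 0, 1, 1],        # 1 = event, 0 = censored
    "age": [60, 65, 58, 72, 70, 55, 62, 68],  # hypothetical covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])      # b, hazard ratio Exp(b), p
print(cph.predict_survival_function(df.iloc[[0]]))  # prognosis curve, one case
```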
Proportional hazard assumption
Effects of covariates on the likelihood for the event are assumed constant over time, so ratio of hazards is constant (proportional) over time