9 - Prognosis and Risk Flashcards

1
Q

define risk

A
  • a chance or possibility of disease
  • i.e., people don't have the disease at the outset; we are trying to predict who will get it
2
Q

what is the purpose of a risk factor study?

A
  • to estimate the probability of disease
  • to understand the mechanism of disease
  • to identify high risk populations
  • to inform lifestyle decisions
  • to inform the design of other studies
3
Q

describe the risk study design

A

start with a disease-free population, measure the extent to which each person is exposed to risk factors 1, 2, 3, etc., and then see who has the outcome and who does not

  • like a cohort study, but this time comparing risk/prognostic factors instead of comparing two therapies
  • think of it as a cohort study - need discriminative measures
  • can be done prospectively, but sometimes that isn't feasible (i.e., you have to wait for the outcome/disease to occur, which may take a while)
  • can also be done retrospectively (for both risk factors and prognostic factors)
  • all the concerns we had about cohort/therapy studies apply here
4
Q

risk factor is synonymous with what terms?

A

predictor or independent variable

5
Q

define prognosis

A
  • an advance indication of the course of the disease; a prediction
6
Q

what is the purpose of a prognostic study?

A
  • to inform patients about what the future holds
  • to understand the course of the disease
  • to examine possible outcomes
  • to estimate the probability of each outcome
  • to inform treatment decisions
  • to inform the design of other studies
7
Q

describe a prognosis study

A
  • the patients already have the disease; now we want to know what will happen to them
  • just like the risk factor study, but now we start with an inception cohort (a population at a uniform and early stage of the disease), then look at prognostic factors and who ends up with what outcomes
  • see the picture on slide 5 and the example on slide 8
8
Q

how do prognostic factor and therapy studies relate?

A
  • prognostic studies and risk factor studies will inform our therapy studies and vice versa
  • i.e., taking a certain therapy can affect your risk or prognosis (for example, a baby aspirin is a therapy that can reduce the risk of heart attack - i.e., it is both a therapy and a prognostic factor)
9
Q

what are sensitivity and specificity again?

A
  • couldn't find these in the notes, but per Wikipedia:
  • sensitivity = the proportion of people with the disease who test positive (the true positive rate)
  • specificity = the proportion of people without the disease who test negative (the true negative rate)
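
A minimal sketch of the calculation; the 2x2 counts here are made up for illustration:

```python
# Sensitivity and specificity from a hypothetical 2x2 table.
tp, fn = 80, 20   # people WITH the disease: test positive / test negative
tn, fp = 90, 10   # people WITHOUT the disease: test negative / test positive

sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate

print(f"sensitivity = {sensitivity:.2f}")  # 0.80
print(f"specificity = {specificity:.2f}")  # 0.90
```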
10
Q

how do you determine if a risk/prognosis study is internally valid? (4)

A
  • was an inception cohort assembled?
  • was the sample representative? (ie is the model robust?)
  • was follow-up complete?
  • were objective/unbiased outcome criteria defined?
11
Q

was an inception cohort assembled?

A
  • i.e., are the included patients in a prognosis study at similar points in the course of their disease?
  • who was not included, and why?
  • think about potential for over/under-estimation of true likelihood of outcome
12
Q

was the sample representative?

A
  • if interested in generalizability, need to know if the sample is representative of the population
  • are there systematic differences between the study sample and the population of interest?
  • was the referral pattern described?
13
Q

what is a popularity bias?

A
  • for sample representativeness
  • experts select or follow more interesting cases (non-experts get more routine cases)
14
Q

what is a referral filter bias?

A
  • for sample representativeness
  • populations at tertiary centres are much different from the general population (cases have been filtered along the way - e.g., milder cases treated earlier - so what remains is not representative of the population)
15
Q

was follow-up complete?

A
  • all members of the inception cohort should be accounted for at the end of the study and their clinical status should be known
  • assess the numbers lost to follow-up and their rate of outcome - lost data are usually not random, and can therefore bias the result
  • how does the likelihood of outcome change if we impute the missing data under a worst-case scenario (everyone lost had the outcome) vs a best-case scenario (no one lost had the outcome)? - i.e., if the risk factors change depending on worst/best case, there is little certainty associated with the study
  • is there likely to be a difference between patients with complete and incomplete follow-up? (loss of a representative sample)
  • the larger the sample and the fewer the missing data, the more certain you can be
16
Q

were objective/unbiased outcome criteria defined?

A
  • measurement issues
  • have the criteria for diagnosis been clearly defined (explicit/objective criteria)?
  • i.e., for a risk factor study the criteria determine whether a person is disease-free; for a prognostic study they determine whether a person is at the beginning of the disease
  • were the outcomes assessed in a consistent manner (all patients, same diagnostic test, same interval, same frequency, all assessed at study end - there is more control over this in a prospective study than a retrospective one)?
  • is the outcome assessor aware of concomitant prognostic factors (other features of the patient)? - again, blinding of the patient and practitioner is important (a person could recall things differently if they know what the study is about)
17
Q

what is diagnostic suspicion bias?

A
  • for validity of outcome criteria
  • assessments are done more frequently or carefully because of knowledge of other patient features
18
Q

what is expectation bias?

A
  • for validity of outcome criteria
  • interpretation of the diagnostic test is influenced by knowledge of other features
19
Q

what is our goal when trying to predict for prognostic testing? what is a weight?

A
  • to estimate the magnitude of the contribution of each predictor (its weight, or beta) so that our model fits the population, not just the sample
  • something we measure that isn't predictive might have a weight of 0, whereas predictive factors have larger weights
  • an inversely (oppositely) predictive factor has a negative weight
20
Q

what is the formula for outcome?

A

outcome = weight1 × predictor1 + weight2 × predictor2 + … + error
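
A sketch of what estimating these weights looks like in practice; the data are simulated, and the use of scikit-learn is an illustration rather than anything from the slides:

```python
# Sketch: estimating the weights (betas) with ordinary least-squares regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                # 3 predictors, 200 patients
true_weights = np.array([2.0, 0.0, -1.0])    # predictive, irrelevant, inversely predictive
y = X @ true_weights + rng.normal(size=200)  # outcome = weights x predictors + error

model = LinearRegression().fit(X, y)
print(model.coef_)  # estimated weights, close to [2, 0, -1]
```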

21
Q

what is overfitting?

A
  • producing a model that fits the sample but not the population
  • important for predictive models, similar to random sampling error (for therapy)
  • see example: slide 18
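
A minimal sketch of the problem, using simulated pure-noise data: with many predictors and few participants, the model "fits" the sample even though nothing is truly predictive:

```python
# Sketch: overfitting with 15 noise predictors and only 20 participants.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 15))      # predictors are pure noise
y = rng.normal(size=20)            # outcome is unrelated to them

model = LinearRegression().fit(X, y)
print(model.score(X, y))           # in-sample R^2 is very high anyway

X_new, y_new = rng.normal(size=(20, 15)), rng.normal(size=20)
print(model.score(X_new, y_new))   # near or below 0 on new "population" data
```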
22
Q

how do you determine if the prognostic study is robust? - ie evidence that it fits the population and not just the sample?

A
  • have we seen the model come up in more than 1 study?
  • see if the model is data driven or hypothesis driven
  • adjustment for extraneous prognostic factors (ie model should include things that are well-established as predictors)
23
Q

what is a data driven model?

A
  • uses a regression approach to narrow down or identify predictors
  • this is a first-step efficacy-type approach
  • have a bunch of data from sample and let computer do the work (univariate, stepwise, back/forward)
  • good because computers are error-free, but they can't think about what makes sense from a clinician's perspective, so we get a representation of the sample, but not the population
24
Q

what is a hypothesis driven model?

A
  • predictors to be included in the model are defined a priori as a result of clinical expertise or the existing literature
  • used when you have more experience about what is predictive (i.e., from the literature, etc.)
  • this is a more pragmatic approach (i.e., figuring out whether it actually applies)
25
Q

what is univariate testing?

A
  • like a t-test or chi square
  • looking at whether one prognostic factor is statistically different in those who have the outcome vs those who don't, then conducting these tests one by one for every factor to see whether each risk factor differs significantly between those who do and do not have the outcome
  • similar to our issue with multiple comparisons (worried about spending our degrees of freedom or wearing out our data)
  • so if we keep doing these tests we will eventually find a model that fits, but it is not going to apply to our population
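
A sketch of that multiple-comparisons problem; all 20 "risk factors" here are simulated noise, with scipy's t-test as a stand-in for whatever univariate test a real study would use:

```python
# Sketch: univariate screening of 20 pure-noise "risk factors".
# By chance alone, roughly 1 in 20 will cross p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
outcome = rng.integers(0, 2, size=100).astype(bool)  # who had the event

for i in range(20):
    factor = rng.normal(size=100)                    # unrelated to the outcome
    t, p = stats.ttest_ind(factor[outcome], factor[~outcome])
    if p < 0.05:
        print(f"factor {i}: p = {p:.3f} (spurious 'predictor')")
```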
26
Q

what is stepwise backward/forward testing?

A
  • basically a button you push on the computer
  • stepwise will bring a risk factor in, look at how good it is, bring another one in, take something out, bring another one in, until it comes up with a model
  • sometimes you aren't even aware of how many times it has brought something in or out
  • these tests are problematic in terms of spending degrees of freedom
  • similar to our issue with multiple comparisons (worried about spending our degrees of freedom or wearing out our data)
  • so if we keep doing these tests we will eventually find a model that fits, but it is not going to apply to our population
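
A rough modern equivalent using scikit-learn's SequentialFeatureSelector - an assumed stand-in, since the course presumably meant the stepwise procedure in a statistics package:

```python
# Sketch: forward stepwise selection. Only factors 0 and 3 truly matter here.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 8))
y = 2 * X[:, 0] - X[:, 3] + rng.normal(size=150)

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=2, direction="forward"
).fit(X, y)
print(selector.get_support())  # boolean mask of the factors it kept
```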
27
Q

what is a test and training sample?

A
  • only part of the data (the training sample) is used to construct the model; the rest (the test sample) is used to confirm it
  • trying to show consistency between the model's performance in the two samples
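
A minimal sketch of the split, on simulated data:

```python
# Sketch: construct the model on a training sample, confirm on a test sample.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 4))
y = X @ np.array([1.5, 0.0, -0.8, 0.3]) + rng.normal(size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_train, y_train)  # construct on training data
print(model.score(X_test, y_test))                # R^2 on held-out test data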
28
Q

describe how to deal with power when determining the robustness of a prognostic study - dichotomous vs continuous

A
  • the more of the population we sample, the more likely the sample is to be representative and the less likely we are to have random sampling error
  • for a dichotomous outcome, need at least 10 events for every factor (the lower the event rate, the larger the study you need)
  • for a continuous outcome, need 10-15 participants per factor (continuous-outcome studies can be smaller because all of the data contribute to the prediction)
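
A worked example of the rule of thumb (all numbers made up): with 5 candidate predictors and a dichotomous outcome occurring in 20% of patients, you need at least 5 × 10 = 50 events, which means 50 / 0.20 = 250 participants; with a continuous outcome, 5 × 10-15 = 50-75 participants would suffice.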
29
Q

describe whether one should dichotomize continuous predictors

A
  • should avoid doing so because it spends our degrees of freedom!
  • if we are trying to predict an outcome on a continuous scale (say 0-100) and we have a risk factor also on a continuous scale (0-100), there will be some error in both, but if one is predictive of the other we will be able to map one onto the other
  • if we start dichotomizing, the relationship has to be much stronger before we can see it, so dichotomizing our predictors doesn't help us (it might help in the clinic though) - keeping predictors continuous is more accurate in saying which level, or how much, is more predictive
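
A minimal sketch of the information loss, on simulated data:

```python
# Sketch: dichotomizing a continuous predictor weakens the observed signal.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=500)       # continuous risk factor
y = x + rng.normal(size=500)   # continuous outcome with error

print(np.corrcoef(x, y)[0, 1])                      # ~0.71 using the full scale
print(np.corrcoef((x > 0).astype(float), y)[0, 1])  # ~0.56 after a median split
```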
30
Q

describe whether one should combine predictors

A
  • if two things are predictive of each other, it makes sense to combine them into one predictor (instead of keeping them separate), since every added predictor drives up the required sample size - always aim for fewer predictors
31
Q

explain pre-weighted/well-known predictors

A
  • if we have predictors whose weights are already well known, we would probably pre-assign those weights instead of having the computer assign them for us, so they are not part of the stepwise putting-in-and-taking-out procedure
32
Q

explain adjusted R2 (shrinkage)

A
  • adjusted R² takes into consideration some of the risk of random sampling error
  • if we picture a pie chart of all the variability in the outcome, R² tells us how much of that variability the model explains
  • an R² of 1 means all variability in our outcome is perfectly explained by our model
  • as R² approaches 0, less of our pie is being explained by our model, which is not good
  • the adjusted R² will usually be smaller than the unadjusted R² because it has taken into account some of that risk of random sampling error
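
The standard formula (a known result, not from the slides): adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1), where n is the number of participants and p the number of predictors; the penalty grows as p approaches n, which is exactly the overfitting risk described above.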
33
Q

look at example of “predictor” rule of thumb

A
  • slide 24
  • 10 events per predictor is fine (once we get to 10, R² varies little even with more predictors), but more is better
  • note that the spread of possible R² values is broader with fewer events per predictor
34
Q

for prognostic studies, how do you determine how precise the estimates are?

A
  • look at the 95% CIs around the weights and the R² (see slides 19 and 24)
  • want to see narrow CIs, which will happen with sufficient power
  • does the range within the CI provide certainty about what to expect?