Final Clinical Trial Flashcards
- Explain the difference between basket, umbrella, and platform trials.
Basket Trial
- Tests a new therapy across multiple diseases (common in cancer trials) that share a predictive biomarker or patient characteristic; this shared marker, which predicts whether a patient will respond to the intervention, serves as the unifying eligibility criterion
Umbrella trials
- Tests multiple targeted interventions for a single disease, with patients stratified into subgroups by a predictive biomarker or other patient characteristics
Platform trials (also known as multi-arm multi-stage: MAMS)
- Allow interventions to be dropped or new interventions added during the trial; evaluate several interventions against a common control group; can be perpetual
- Be able to identify and/or explain what a SMART trial is
A Sequential Multiple Assignment Randomized Trial (SMART) is a form of factorial experimental design involving dynamic treatment regimes; the design is used to build an optimal adaptive intervention. The rationale is that genetic and non-genetic factors may affect a patient's course, co-occurring disorders may arise, and there is high patient heterogeneity in response to treatment.
At each stage, responders and non-responders are identified, and patients can be further randomized (e.g., non-responders to an intensified intervention); a toy sketch follows below.
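A toy sketch of the two-stage structure in Python (treatment labels and response probabilities are made-up assumptions for illustration, not from any real SMART):

```python
# Toy two-stage SMART: randomize at stage 1, assess response, then
# re-randomize only the non-responders between two second-stage options.
import numpy as np

rng = np.random.default_rng(0)
n = 200

stage1 = rng.choice(["med A", "med B"], size=n)                     # first randomization
responded = rng.random(n) < np.where(stage1 == "med A", 0.5, 0.4)   # assumed response rates

stage2 = np.where(
    responded,
    "continue",                                   # responders keep their stage-1 treatment
    rng.choice(["augment", "switch"], size=n),    # non-responders are re-randomized
)

for option in ["continue", "augment", "switch"]:
    print(option, (stage2 == option).sum())
# Comparing outcomes across the embedded treatment sequences is what lets you
# build an adaptive intervention (e.g., "start with A; if no response, augment").
```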
- What is the advantage of multiple imputation versus single imputation methods?
Multiple imputation averages the estimates across multiple imputed datasets and inflates the standard error to account for the between-imputation variance. Compared with single imputation, it takes into account the uncertainty in the imputation process (see the pooling sketch below).
Single imputation: underestimates the SE because it ignores the imputation variation
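A minimal sketch of how the pooling (Rubin's rules) inflates the standard error; the five estimates and within-imputation variances below are made-up illustration values:

```python
# Pool estimates from m imputed datasets using Rubin's rules.
import numpy as np

estimates = np.array([1.8, 2.1, 1.9, 2.3, 2.0])        # effect estimate from each imputed dataset
within_var = np.array([0.25, 0.24, 0.26, 0.25, 0.27])  # squared SE from each imputed dataset
m = len(estimates)

pooled_est = estimates.mean()            # average of the m estimates
W = within_var.mean()                    # within-imputation variance
B = estimates.var(ddof=1)                # between-imputation variance
T = W + (1 + 1 / m) * B                  # total variance under Rubin's rules

print(f"pooled estimate = {pooled_est:.3f}, pooled SE = {np.sqrt(T):.3f}")
# A single imputation would report SE = sqrt(W) only, ignoring B and so
# underestimating the uncertainty.
```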
- Be able to explain the Last Observation Carried Forward method for missing data imputation and its main limitation.
The last observation carried forward (LOCF) method fills in each missing value with the participant's last observed value. It artificially decreases the variance and, for longitudinally collected outcomes, ignores the trend over time, misrepresenting the group's trajectory.
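A small pandas sketch on a hypothetical longitudinal dataset, just to show the mechanics and the problem:

```python
# Carry each participant's last observed value forward (LOCF).
import pandas as pd

long = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "visit": [0, 1, 2, 0, 1, 2],
    "score": [10.0, 12.0, None, 9.0, None, None],   # missing follow-up scores
})

long["score_locf"] = long.sort_values(["id", "visit"]).groupby("id")["score"].ffill()
print(long)
# Participant 2's baseline value of 9.0 is reused for every later visit,
# flattening any true trend over time and shrinking the variance.
```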
- Be able to reassign missing values in a table according to best-case and worst-case analyses.
This is an example of sensitivity analysis for missing data.
Best-case analysis: reassign all missing participants in the control group as dead and all missing participants in the treatment group as alive (favoring the treatment).
Worst-case analysis: reassign all missing participants in the control group as alive and all missing participants in the treatment group as dead (against the treatment).
If the three analyses (observed, best-case, worst-case) give consistent results, the conclusion is clear
(e.g., if all three p-values are above 0.05, you can report a non-significant result, i.e., a negative trial)
If the amount of missing data is small or the observed difference is very large, the results will tend to be consistent
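A hedged sketch with hypothetical counts (not from any real trial), showing how the missing participants are reassigned and the test repeated:

```python
# Best-case / worst-case reassignment of missing participants for a
# dead/alive outcome, with a chi-square test at each step.
from scipy.stats import chi2_contingency

def p_value(dead_tx, alive_tx, dead_ctrl, alive_ctrl):
    _, p, _, _ = chi2_contingency([[dead_tx, alive_tx], [dead_ctrl, alive_ctrl]])
    return round(p, 4)

# Observed (complete-case) counts plus the number missing in each arm.
dead_tx, alive_tx, miss_tx = 20, 70, 10
dead_ctrl, alive_ctrl, miss_ctrl = 30, 60, 10

print("complete case:", p_value(dead_tx, alive_tx, dead_ctrl, alive_ctrl))
# Best case for treatment: missing treated -> alive, missing controls -> dead.
print("best case:    ", p_value(dead_tx, alive_tx + miss_tx, dead_ctrl + miss_ctrl, alive_ctrl))
# Worst case for treatment: missing treated -> dead, missing controls -> alive.
print("worst case:   ", p_value(dead_tx + miss_tx, alive_tx, dead_ctrl, alive_ctrl + miss_ctrl))
# If all three p-values fall on the same side of 0.05, the conclusion is robust
# to the missing data.
```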
- Be able to define and give examples of the various missing data mechanisms:
1) Missing Completely at Random
The probability of missingness does not depend on any variables, observed or unobserved.
e.g., the probability of recording income is the same for everyone, regardless of age (X) or income level (Y)
2) Missing at Random
The probability of missingness does not depend on the value of Y after controlling for the observed variables (X).
e.g., the probability of recording income varies across age groups, but does not vary with income within an age group
3) Missing Not at Random
The probability of missingness DEPENDS on the missing value itself, even after controlling for observed variables.
e.g., the probability of recording income varies with income itself within each age group
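A small simulation contrasting the three mechanisms for the income example; the probabilities and income model are assumptions made only to show how the observed mean behaves:

```python
# Simulate MCAR, MAR, and MNAR missingness for income (Y) with age group (X) observed.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age = rng.integers(0, 2, n)                       # 0 = younger, 1 = older
income = 40 + 20 * age + rng.normal(0, 10, n)     # income depends on age

p_mcar = np.full(n, 0.2)                          # MCAR: same probability for everyone
p_mar  = np.where(age == 1, 0.3, 0.1)             # MAR: depends only on observed age
p_mnar = 1 / (1 + np.exp(-(income - 60) / 5))     # MNAR: depends on income itself

for name, p in [("MCAR", p_mcar), ("MAR", p_mar), ("MNAR", p_mnar)]:
    missing = rng.random(n) < p
    print(name, "observed mean income:", round(income[~missing].mean(), 1),
          "true mean:", round(income.mean(), 1))
# MCAR: the observed mean stays close to the truth. MAR: biased overall but
# unbiased within each age group. MNAR: biased even after conditioning on age.
```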
- What is the advantage of alpha spending designs over group sequential designs?
The alpha spending design is more flexible and less cumbersome than the classical group sequential design because it does not require pre-specifying the number of interim analyses or when they occur; instead, alpha is spent as a continuous function of the information time (the alpha spending function; see the sketch after the list below).
Group sequential method:
- Restrictive monitoring times: the analysis time points must be specified in advance
- Requires equal increments of information (number of patients)
- Sometimes causes administrative difficulties with respect to the timing of Data Safety Monitoring review of the data → more flexibility is needed for the number of interim looks and when they occur
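A sketch of the Lan-DeMets spending idea using the O'Brien-Fleming-type and Pocock-type spending functions (the information times below are arbitrary, since with spending functions they need not be pre-specified):

```python
# Cumulative alpha spent as a continuous function of information time t in (0, 1].
import numpy as np
from scipy.stats import norm

alpha = 0.05
z = norm.ppf(1 - alpha / 2)

def obf_type_spend(t):
    """O'Brien-Fleming-type spending: 2 - 2*Phi(z_{alpha/2} / sqrt(t))."""
    return 2 - 2 * norm.cdf(z / np.sqrt(t))

def pocock_type_spend(t):
    """Pocock-type spending: alpha * ln(1 + (e - 1) * t)."""
    return alpha * np.log(1 + (np.e - 1) * t)

for t in [0.25, 0.5, 0.75, 1.0]:     # looks can occur at any information fraction
    print(f"t={t:.2f}  OBF-type spent={obf_type_spend(t):.4f}  Pocock-type spent={pocock_type_spend(t):.4f}")
# Both spend the full 0.05 by t = 1; the OBF-type spends almost nothing early,
# while the Pocock-type spends alpha much more evenly across looks.
```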
- Describe the differences between the (1) Haybittle & Peto, (2) Pocock, and (3) O'Brien & Fleming sequential boundaries with respect to the likelihood of stopping early and the critical value at the final look.
(1) Haybittle & Peto (hard to stop early, alpha saved for the end): use a large critical value (Z ≈ 3) for ALL interim tests, then use the conventional value 1.96 at the final look.
(2) Pocock (easy to stop early, alpha lost at the end): a constant critical value throughout, so the final look requires a value well above 1.96 and has lower power. Best used if a strong effect is expected early on.
(3) O'Brien & Fleming (hard to stop early, alpha saved for the end): very hard to reject at the beginning, with the boundary becoming easier to cross over time; at the end the critical value is close to 1.96, so alpha is saved. This method reduces the bias from stopping early.
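A rough side-by-side of the three boundaries for K = 5 equally spaced looks at two-sided alpha = 0.05; the Pocock constant (≈ 2.413) and O'Brien-Fleming constant (≈ 2.040) are quoted from standard tables and should be treated as approximate assumptions here:

```python
# Critical values at each of K = 5 looks under the three boundary rules.
K = 5
haybittle_peto = [3.0] * (K - 1) + [1.96]                            # big fixed interim value, ~1.96 at the end
pocock = [2.413] * K                                                 # constant boundary at every look
obrien_fleming = [2.040 / (k / K) ** 0.5 for k in range(1, K + 1)]   # very strict early, ~2.04 at the end

for k in range(K):
    print(f"look {k + 1}:  H-P {haybittle_peto[k]:.2f}   Pocock {pocock[k]:.2f}   OBF {obrien_fleming[k]:.2f}")
# Early stopping is hardest under O'Brien-Fleming and easiest under Pocock;
# Pocock pays for it with a final critical value well above 1.96.
```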
- Describe the primary factors that go into the decision to terminate a study.
Early effect (efficacy): sufficient evidence shows that the treatment is effective/different, making continuation unnecessary or unethical
Futility: sufficient evidence shows that the treatment is not effective/different, making continuation unnecessary or unethical
Risk/benefit ratio: side effects are so severe that continuing is unethical
Slow accrual
External reasons: another trial has already shown an early effect/futility, the clinical landscape may have changed, or a new toxicity has come to light
- Describe economic reasons for conducting interim analyses.
Economic reasons for conducting interim analyses include:
- Early stopping for negative results prevents wasting resources
- Allows for informed management decisions, such as allocation of limited funds
- Describe administrative reasons for conducting interim analyses.
Administrative Reasons
- Ensure that the protocol is conducted as planned
- Ensure that only appropriate subjects are being enrolled
- Identify unanticipated problems (such as protocol non-compliance or non-adherence)
- Ensure that the assumptions made in designing the study still hold (such as the enrollment rate and sample size parameters)
- Describe some ethical reasons for conducting interim analyses.
Ethical reasons for conducting interim analyses
- Detect benefit or harm early on
- Ensure safety of participants
- Ensure participants are not unnecessarily exposed to an ineffective treatment
- What is the role of testing of interactions in subgroup analyses?
Testing the interaction first protects against type I error inflation from multiple subgroup comparisons.
In a linear model, test the interaction term (β for treatment × gender): if it is NOT significant, do not look inside the subgroups; if it is significant, examine the treatment effect in each group (see the sketch below).
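A hedged statsmodels sketch; the simulated data, effect sizes, and variable names are assumptions chosen only to illustrate the test-the-interaction-first workflow:

```python
# Fit y ~ treatment * gender and look at the interaction p-value before any
# subgroup-specific treatment estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "gender":    rng.integers(0, 2, n),
})
# Outcome with a treatment effect that differs by gender (a true interaction).
df["y"] = (1.0 * df.treatment + 0.5 * df.gender
           + 1.0 * df.treatment * df.gender + rng.normal(0, 1, n))

fit = smf.ols("y ~ treatment * gender", data=df).fit()
print("interaction p-value:", fit.pvalues["treatment:gender"])
# Only if this p-value is significant would we go on to estimate the treatment
# effect separately within men and within women.
```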
- Describe what is meant by sub-group analyses.
In general, subgroup analysis is done to find groups in which the treatment is effective and groups in which it is not (e.g., does the treatment work in both men and women?).
Subgroups must be defined by baseline characteristics (men/women, old/young); variables measured post-baseline can be affected by the treatment and should not define subgroups.
Before looking within subgroups, fit a model and test the interaction first; only look inside if it is significant, to protect against type I error.
- Describe the possible effect of the dichotomization of a continuous outcome on the power of a trial.
Dichotomization normally decreases power because it loses information; therefore a bigger sample size would be needed (see the simulation sketch below).
Keep variables continuous unless they must be categorized because of a modeling issue (e.g., a clearly non-linear relationship).
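A quick simulation under assumed values (effect size 0.5 SD, 50 per arm), comparing a t-test on the continuous outcome with a chi-square test after splitting at the pooled median:

```python
# Monte Carlo comparison of power: continuous analysis vs. median dichotomization.
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

rng = np.random.default_rng(2)
n, delta, reps = 50, 0.5, 2000
hits_cont = hits_dich = 0

for _ in range(reps):
    ctrl = rng.normal(0, 1, n)
    trt = rng.normal(delta, 1, n)
    hits_cont += ttest_ind(trt, ctrl).pvalue < 0.05

    cut = np.median(np.concatenate([trt, ctrl]))          # dichotomize at the pooled median
    table = [[(trt > cut).sum(),  (trt <= cut).sum()],
             [(ctrl > cut).sum(), (ctrl <= cut).sum()]]
    _, p, _, _ = chi2_contingency(table)
    hits_dich += p < 0.05

print("power, continuous outcome:  ", hits_cont / reps)
print("power, dichotomized outcome:", hits_dich / reps)
# The dichotomized analysis typically shows noticeably lower power at the same
# sample size because information is thrown away.
```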
- Describe the possible/likely effect of dichotomization of a time to event outcome on the power of a trial.
Dichotomizing a time-to-event outcome (e.g., analyzing only whether the event occurred by a fixed time point) normally decreases power because information is lost; therefore a bigger sample size would be needed.
- Describe the intention-to-treat principle and per-protocol analysis, with their advantages and disadvantages.
Intention to treat
- All randomized participants are included in the analysis according to their original assignment, regardless of adherence to the assigned intervention.
- Advantage: preserves the randomization scheme, which is the basis for correct inference.
- Disadvantage: it does not remove bias introduced by loss to follow-up; it is best to improve the study design so that there is no loss to follow-up.
Per-protocol analysis
- Only participants who adhered to their original assignment are included. Those who did not adhere to the assigned intervention, were lost to follow-up, or were withdrawn from the trial are excluded.
- Advantage: it better reflects the effect of the treatment as actually received and is useful for analysis of adverse events. A per-protocol analysis may be done as part of a sensitivity analysis, to see whether it leads to the same conclusion as the intention-to-treat analysis (see the sketch below).
- Disadvantage: it breaks the randomization scheme, creating a risk of bias and a lower level of evidence (potentially biased results).
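A tiny pandas sketch contrasting the two analysis populations; the column names and the six hypothetical participants are assumptions for illustration only:

```python
# Build the intention-to-treat and per-protocol populations from trial data.
import pandas as pd

trial = pd.DataFrame({
    "assigned":  ["tx", "tx", "tx", "ctrl", "ctrl", "ctrl"],
    "adherent":  [True, False, True, True, True, False],   # took assigned intervention as planned
    "completed": [True, True, False, True, True, True],    # one treated participant withdrew;
    "outcome":   [1, 0, 1, 0, 1, 0],                       # outcome assumed observed/imputed for ITT
})

# Intention-to-treat: every randomized participant, analyzed as assigned.
itt = trial.groupby("assigned")["outcome"].mean()

# Per-protocol: only adherent completers are kept, breaking the randomization.
pp = trial.loc[trial.adherent & trial.completed].groupby("assigned")["outcome"].mean()

print("ITT event rates:\n", itt)
print("Per-protocol event rates:\n", pp)
```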
- Describe why it’s reasonable to plan for testing non-inferiority and then superiority, but not the other way around.
Non-inferiority acts as the gatekeeper, similar to an overall ANOVA: only if the ANOVA is significant do we go in and look in more detail.
Test non-inferiority first because, if you get a significant result, it is then acceptable to test the superiority hypothesis.
If you test superiority first and then non-inferiority, you can detect a superior treatment but not a treatment that is merely non-inferior, because such a treatment would fail the superiority test and never move on to the non-inferiority test.
If you test non-inferiority first and then superiority, you can detect both non-inferior and superior treatments (see the confidence-interval sketch below).
There is also a power issue: if delta 1 (the non-inferiority margin) and delta 2 (the superiority margin) are not equal, then in the sequential testing one of the tests may be underpowered because of the multiple testing.
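A confidence-interval sketch of the fixed-sequence (gatekeeper) logic; the estimated difference, its SE, and the margin delta = -2 (new minus standard) are hypothetical assumptions:

```python
# Test non-inferiority first via the lower 95% CI bound; test superiority only
# if the non-inferiority gate is passed.
from scipy.stats import norm

diff, se = 1.0, 0.8                 # estimated difference (new - standard) and its SE
delta = -2.0                        # non-inferiority margin
lo = diff - norm.ppf(0.975) * se    # lower bound of the two-sided 95% CI
print(f"95% CI lower bound = {lo:.2f}")

if lo > delta:
    print("non-inferior (gate passed)")
    if lo > 0:                      # only examined after non-inferiority is shown
        print("also superior")
    else:
        print("superiority not shown")
else:
    print("non-inferiority not shown; superiority is not tested")
```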