RCTs Flashcards
Examples of criteria for randomization that are not truly random, but only quasi-random?
- Date of birth (odd/even), record number, day of enrollment, alternating
These sequences are predictable, so there is no allocation concealment!!
What is the goal of intention-to-treat analysis?
Preserve the balance of confounders, both measured and unmeasured, to evaluate effectiveness (as opposed to efficacy).
In general, loss to follow-up is not random, but related to prognostic factors, which is why we need ITT as an unbiased estimate.
But careful - it’s also more conservative. Which can be both good and bad.
What is industry bias?
RCTs sponsored by industry tend to report more favourable results
What is the issue with surrogate outcomes?
Surrogate outcomes: biological or imaging markers that are thought to be indirect measures of the effect of treatment on clinical outcomes
They are misleading:
Many treatments have had a major beneficial effect on a surrogate outcome (one previously shown in observational studies to correlate with a relevant clinical outcome), yet proved ineffective or even harmful in subsequent large RCTs that measured those clinical outcomes directly
What are the barriers to RCT?
- Expensive/difficult
- Broad findings (group averages cannot be applied directly to a single individual)
- Poor generalizability
- Ethical issues of randomizing
- Time before results are out
Selection bias in RCT is due to?
- Possibly poor random sequence generation (are the groups comparable?)
- Allocation that is not perfectly concealed
Performance bias in RCT is due to?
Lack of (double) blinding, such that participants and personnel know the allocation and the study hypothesis
Detection bias in RCT is due to?
Not blinding the outcome assessment (i.e., the outcome assessor needs to not know the allocation)
Attrition bias in RCT is due to?
Incomplete outcome data, and not knowing the reason for attrition/exclusion
Reporting bias in RCT is due to?
Selective outcome reporting (authors are more likely to publish studies and selectively report outcomes that show statistical significance)
What is N=1 RCT? When and why do we use it?
When we have only one patient to observe, with the goal of determining the optimal intervention for that particular patient.
Especially useful for:
- rare diseases
- comorbidities
- concurrent therapies
Requirements:
- Stable conditions
- Quick onset of action/termination of therapy
Why do we use cluster randomization RCTs?
When we fear contamination (or when it’s cheaper, or for administrative convenience)
What is the issue with cluster randomization and power?
Since observations within a cluster are correlated, we need a larger N, depending on the ICC (intracluster correlation coefficient), which will be high when practices within clusters are consistent.
What formula do we use to determine the N needed for a cluster randomization RCT?
1 + (n − 1) × ICC –> the design effect factor, by which we must multiply the N
where n is the average cluster size and ICC is the intracluster correlation coefficient
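The formula above can be sketched in code (a minimal illustration; the base N, cluster size, and ICC values are made up):

```python
# Illustrative design-effect calculation for a cluster RCT.
# All numbers below (base N, cluster size, ICC) are hypothetical.
import math

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Design effect = 1 + (n - 1) * ICC, with n the average cluster size."""
    return 1 + (avg_cluster_size - 1) * icc

def clustered_n(individual_n: int, avg_cluster_size: float, icc: float) -> int:
    """Inflate the individually-randomized N by the design effect.

    Rounding to 6 decimals first guards against floating-point noise
    pushing math.ceil up by one participant.
    """
    deff = design_effect(avg_cluster_size, icc)
    return math.ceil(round(individual_n * deff, 6))

# 300 participants needed under individual randomization, average
# cluster size 20, ICC = 0.05:
# design effect = 1 + 19 * 0.05 = 1.95, so N = 300 * 1.95 = 585
print(design_effect(20, 0.05), clustered_n(300, 20, 0.05))
```

Note how even a modest ICC of 0.05 nearly doubles the required N when clusters hold 20 people each.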
What happens if we ignore cluster effect in cluster RCTs?
- p-values become artificially small and CIs too narrow
- Leading to spurious significant findings
What are the assumptions of a factorial RCT?
- Independent effects of A and B (no interaction/synergy)
Why would we use an Active Control Trial?
- Ethical problems with using a placebo, especially when the outcome is mortality or serious morbidity
- To make comparisons
What is the main difference in terms of goal between superiority trials and noninferiority trials?
Superiority trial: want to know if new tx is better than std tx (e.g., we want to develop a better tx)
Non-inferiority trial: want to know if the new tx is as good as the std tx (e.g., we want to develop a less expensive tx or a tx with fewer side effects)
What is assay sensitivity?
The ability of a trial to distinguish an effective drug from an ineffective drug
How does the interpretation of SUP and N-INF trials differ?
SUP: entirely interpretable without additional assumptions or information
N-INF: dependent on knowing that the control had the expected effect, which may not be measured in the study since we have no placebo arm
What’s up with assay sensitivity in SUP and N-INF trials?
SUP: has, by definition, assay sensitivity
N-INF: whether or not it has assay sensitivity relies on external info, usually a historical control trial
- but even then, it could be undermined by poor compliance, non-protocol crossovers, spontaneous improvement, poorly responsive participants, and insensitive measures to drug effect
What is the margin (M1)?
the maximum acceptable extent of clinical noninferiority of an experimental treatment
Null hypothesis and alternative hypothesis FORMULA for non-inferiority trials
H0: C − T ≥ M1 (the control exceeds the new treatment by at least the margin, i.e., the new tx is inferior)
Ha: C − T < M1 (the difference is smaller than the margin, i.e., the new tx is non-inferior)
Non-inferiority trials use ….-sided tests
one
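A hedged sketch of what such a one-sided test looks like in practice, here for a difference in success proportions under a normal approximation (the function name, data, and margin are hypothetical, not from the source):

```python
# One-sided non-inferiority test on a difference in success proportions
# (normal approximation). Hypothetical data and margin for illustration.
from math import sqrt
from statistics import NormalDist

def noninferior(success_c, n_c, success_t, n_t, margin, alpha=0.025):
    """Reject H0 (C - T >= M1) when the one-sided upper (1 - alpha)
    confidence limit for C - T lies below the margin M1."""
    p_c, p_t = success_c / n_c, success_t / n_t
    se = sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    z = NormalDist().inv_cdf(1 - alpha)   # ~1.96 for alpha = 0.025
    upper = (p_c - p_t) + z * se          # one-sided upper limit on C - T
    return upper < margin                 # Ha: C - T < M1 -> non-inferior

# Hypothetical data: control 320/400 successes, new tx 312/400, M1 = 0.10.
print(noninferior(320, 400, 312, 400, margin=0.10))  # True: non-inferior
```

With the same proportions but only 100 per arm, the wider CI crosses the margin and non-inferiority can no longer be claimed.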
Null hypothesis of
a) Superiority trials
b) Non inferiority trials
a) HoSUP: There is no difference (not superior; doesn’t mean that they are equivalent though)
b) HoNInf: The new treatment is inferior to the standard*
*a noninferiority design statistically tests the null hypothesis that the experimental treatment is inferior by the equivalence margin.
Alternative hypothesis of
a) Superiority trials
b) Non inferiority trials
a) HaSUP: There is a difference
b) HaNInf: The new treatment is non-inferior (not worse than the standard by more than the margin)
What’s up with non inferiority and equivalence trials?
Non inferiority trials are often called equivalence trials, but it’s a misnomer.
If the intent of a study is to demonstrate that differences between control and experimental treatments are not large in either direction, then it is known as an equivalence trial.
Non inferiority trials are one sided.
What would be a
a) Type I error
b) Type II error
in a superiority trial
a) Falsely concluding Ha: Conclude that there is a difference, when there really is none
b) Falsely concluding Ho: Conclude that there is no difference when there is one
What would be a
a) Type I error
b) Type II error
in a non-inferiority trial
a) Falsely concluding Ha: Conclude that the new treatment is non-inferior when it is actually inferior
b) Falsely concluding Ho: Conclude that the new treatment is inferior when it is actually non-inferior
How do we determine the margin?
- assumptions?
- any limits?
- From a random effects meta-analysis of previous placebo-controlled studies of the standard treatment
- A mix of stats and clinical judgment
Assumptions: the constancy assumption (the standard treatment's effect over placebo is the same in the current trial as in the historical trials), which is unverifiable. This is why non-inferiority margins are often set as a fraction of the standard treatment effect to be preserved
Limits: Must be smaller than the established effect of the standard treatment over placebo (otherwise a "non-inferior" new treatment could be no better than placebo)
In a graph, what if the 95% CI is situated:
a) Crossing the delta (margin) line?
b) Crossing the 0 line, but not the delta line?
c) Entirely on the favourable side of the 0 line?
d) Entirely on the unfavourable side of the delta line?
a) Inconclusive
b) Non inferior
c) Superior
d) Inferior
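The rules above can be written as a tiny decision function on the C − T scale (positive differences favour the control, delta = M1 > 0; the function and labels are illustrative, not from the source):

```python
# Minimal sketch of the CI interpretation rules, expressed on the C - T
# scale (positive values favour the control; delta = M1 > 0).

def classify(ci_low: float, ci_high: float, delta: float) -> str:
    if ci_high < 0:           # whole CI favours the new treatment
        return "superior"
    if ci_high < delta:       # may cross 0 but stays below the margin
        return "non-inferior"
    if ci_low > delta:        # whole CI beyond the margin
        return "inferior"
    return "inconclusive"     # CI crosses the delta line

print(classify(-0.30, -0.10, 0.20))  # superior
print(classify(-0.10, 0.10, 0.20))   # non-inferior
print(classify(0.25, 0.45, 0.20))    # inferior
print(classify(-0.10, 0.30, 0.20))   # inconclusive
```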
What are randomized registries?
You randomize treatment and follow up using an existing electronic data source (big data)
- diminishes the costs greatly!
What are adaptive trials?
When you adjust the trial as you go (by changing randomization ratios based on accumulating data, for example)
- decreases the number of participants needed, gives greater flexibility
What are platform trials?
They don’t focus on one intervention but on one disease with multiple interventions (which can enter and leave the platform over time)
What are pragmatic trials?
Set in a real-world setting (explanatory trials, by contrast, are conducted under carefully controlled conditions)