Final Clinical Trial Flashcards

1
Q
  1. Explain the difference between basket, umbrella, and platform trials.
A

Basket Trial
-Tests a new therapy in multiple diseases (common in cancer trials) that share a predictive biomarker or patient characteristic thought to predict response to the intervention; that common feature serves as the unifying eligibility criterion

Umbrella trials
-Test multiple targeted interventions for a single disease that is stratified into subgroups by a predictive biomarker or other patient characteristics

Platform trials (also known as multi-arm, multi-stage: MAMS)
-Allow interventions to be dropped or new interventions to be added during the trial, evaluate several interventions against a common control group, and can be perpetual.
2
Q
  1. Be able to identify and/or explain what a SMART trial is
A

A Sequential Multiple Assignment Randomized Trial (SMART) is a form of factorial experimental design involving dynamic treatment regimens; the design is used to build optimal adaptive interventions. The rationale is that genetic and non-genetic factors may affect a patient's course, co-occurring disorders may arise, and there is high patient heterogeneity in response to treatment.
At each stage, responders and non-responders are identified, and participants (typically non-responders) can be further randomized, e.g., to an intensified intervention.

3
Q
  1. What is the advantage of multiple imputation versus single imputation methods?
A

Multiple imputation analyzes several imputed datasets and averages (pools) the estimates across them; the pooled standard error is inflated to account for between-imputation variance. Compared to single imputation, it takes into account the uncertainty in the imputation process (see the sketch below).

Single imputation: underestimates the SE, ignores imputation variation
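
A minimal sketch of Rubin's pooling rules showing where the SE inflation comes from; the estimates and variances below are made-up illustration values:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool estimates from m imputed datasets using Rubin's rules."""
    m = len(estimates)
    q_bar = np.mean(estimates)             # pooled point estimate
    w_bar = np.mean(variances)             # average within-imputation variance
    b = np.var(estimates, ddof=1)          # between-imputation variance
    total_var = w_bar + (1 + 1 / m) * b    # total variance -> inflated SE
    return q_bar, np.sqrt(total_var)

# made-up effect estimates and variances from m = 5 imputed datasets
print(pool_rubin([2.1, 2.4, 1.9, 2.2, 2.0], [0.25, 0.27, 0.24, 0.26, 0.25]))
```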

4
Q
  1. Be able to explain the Last Observation Carried Forward method for missing data imputation and its main limitation.
A

The last observation carried forward (LOCF) method imputes a participant's last observed value for all subsequent missing time points. It artificially decreases variance and, in the context of longitudinally collected outcomes, LOCF ignores the trend over time, misrepresenting the group trajectory (see the sketch below).
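
A minimal sketch of LOCF on a hypothetical wide-format dataset (values are made up), just to show the mechanics:

```python
import numpy as np
import pandas as pd

# made-up longitudinal outcome: one row per subject, one column per visit
df = pd.DataFrame({
    "visit1": [10.0, 12.0, 9.0],
    "visit2": [11.0, np.nan, 10.0],
    "visit3": [np.nan, np.nan, 12.0],
})

# LOCF: carry each subject's last observed value forward across later visits
locf = df.ffill(axis=1)
print(locf)
```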

5
Q
  1. Be able to reassign missing values in a table according to best-case and worst-case analyses.
A

This is an example of a sensitivity analysis for missing data.
Best-case analysis: count all missing participants in the control group as dead and all missing participants in the Tx group as alive.
Worst-case analysis: count all missing participants in the control group as alive and all missing participants in the Tx group as dead.
If the three results (observed, best case, worst case) are consistent, the conclusion is clear
(e.g., if the p-value is above 0.05 in all three analyses, you can report a non-significant result, a negative trial).
If the amount of missing data is small or the observed difference is very large, the results will tend to be consistent (a sketch follows).
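
A rough sketch of the best-/worst-case reassignment on assumed counts (all numbers hypothetical); the chi-square test is just one possible way to compare the arms:

```python
from scipy.stats import chi2_contingency

def arm_counts(dead, alive, missing, missing_as_dead):
    """Return (dead, alive) after assigning all missing to one category."""
    return (dead + missing, alive) if missing_as_dead else (dead, alive + missing)

# assumed observed counts per arm (dead, alive, missing vital status)
control = dict(dead=30, alive=60, missing=10)
treat = dict(dead=20, alive=65, missing=15)

# best case for the new treatment: missing controls dead, missing treated alive
best = [arm_counts(**treat, missing_as_dead=False),
        arm_counts(**control, missing_as_dead=True)]
# worst case for the new treatment: missing controls alive, missing treated dead
worst = [arm_counts(**treat, missing_as_dead=True),
         arm_counts(**control, missing_as_dead=False)]

for label, table in [("best case", best), ("worst case", worst)]:
    p_value = chi2_contingency(table)[1]
    print(label, round(p_value, 3))
```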

6
Q
  1. Be able to define and give examples of the various missing data mechanisms:
A

1) Missing Completely at Random
Probability of missingness is independent of all other variables
e.g., the probability of recording income is the same for everyone regardless of age (X) and income level

2) Missing at Random
Probability of missingness does not depend on the value of Y itself after controlling for other variables (X)
e.g., the probability of recording income varies by age group, but does not vary with the income of respondents within an age group

3) Missing Not at Random
Probability of missingness DEPENDS on the missing value itself.
e.g., the probability of recording income varies with income within each age group

7
Q
  1. What is the advantage of alpha spending designs over group sequential designs?
A

The alpha spending design is more flexible and less cumbersome than the group sequential design because it does not require pre-specifying the number or timing of interim analyses; it spends alpha as a continuous function of information time (the alpha spending function; see the sketch below).

Group sequential method:

  1. Restrictive monitoring times; the time points must be specified in advance
  2. Requires equal increments of information (numbers of patients)
  3. Sometimes causes administrative difficulties with respect to the timing of Data Safety Monitoring Board review of the data → more flexibility is needed in the number of interim looks and when they occur
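
For illustration, the two classic Lan-DeMets spending functions (O'Brien-Fleming-type and Pocock-type), evaluated at a few information times; alpha = 0.05 is assumed:

```python
import numpy as np
from scipy.stats import norm

alpha = 0.05  # total two-sided alpha to spend over the trial

def obrien_fleming_spend(t):
    # Lan-DeMets O'Brien-Fleming-type spending function
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

def pocock_spend(t):
    # Lan-DeMets Pocock-type spending function
    return alpha * np.log(1 + (np.e - 1) * t)

for t in [0.25, 0.50, 0.75, 1.00]:   # information time
    print(t, round(obrien_fleming_spend(t), 4), round(pocock_spend(t), 4))
```
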
8
Q
  1. Describe the differences between the (1) Haybittle-Peto, (2) Pocock, and (3) O'Brien-Fleming sequential boundaries with respect to the likelihood of stopping early and the final look.
A

(1) Haybittle-Peto (harder to stop early, saves alpha for the end): uses a large critical value (Z ≈ 3) for ALL interim tests, then the conventional value of 1.96 at the last look
(2) Pocock (easier to stop early, loses alpha at the end): constant critical value throughout, including the final look, so power is lower at the end. Best to use if a strong effect is expected early on
(3) O'Brien-Fleming (hard to stop early, saves alpha for the end): very hard to reject at the beginning but becomes easier over time; alpha is saved for the final look. This method reduces the bias from early stopping

9
Q
  1. Describe the primary factors that go into the decision to terminate a study.
A

Early Effect: sufficient evidence shows that the treatment is effective/different, making continuation unnecessary or unethical

Futility: sufficient evidence shows that the treatment is not effective/different, making continuation unnecessary or unethical

Risk/Benefit Ratio: side effects are too severe for continuation to be ethical

Slow accrual

External reasons, including another trial already showing an early effect/futility, changes in the clinical landscape, or new toxicity coming to light

10
Q
  1. Describe economic reasons for conducting interim analyses.
A

Economic reasons for conducting interim analyses include:

  1. Early stopping for negative results prevents wasting resources
  2. Allows informed management decisions, such as allocation of limited funds
11
Q
  1. Describe administrative reasons for conducting interim analyses.
A

Administrative Reasons

  1. Ensure that the protocol is being conducted as planned
  2. Ensure that only appropriate subjects are being enrolled
  3. Identify unanticipated problems (such as protocol non-compliance or non-adherence)
  4. Ensure that the assumptions made in designing the study still hold (such as the enrollment rate and sample size parameters)
12
Q
  1. Describe some ethical reasons for conducting interim analyses.
A

Ethical reasons to conduct interim analyses

  1. Detect benefit or harm early on
  2. Ensure safety of participants
  3. Ensure participants are not unnecessarily exposed to ineffective treatment
13
Q
  1. What is the role of testing of interactions in subgroup analyses?
A

Testing the interaction protects against making a type I error.
In terms of a linear model, test the interaction (e.g., b for treatment × gender): if it is NOT significant, do not look inside the subgroups; if it is significant, examine the treatment effect in each group (see the sketch below).
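
A minimal sketch of the interaction-first approach using an ordinary least squares model on simulated data (variable names and effect sizes are assumptions for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulated trial data: treatment arm, a subgroup variable, and a continuous outcome
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"treat": rng.integers(0, 2, n), "female": rng.integers(0, 2, n)})
df["y"] = 1.0 * df["treat"] + 0.5 * df["female"] + rng.normal(size=n)

# fit treatment, subgroup, and their interaction; test the interaction term first
fit = smf.ols("y ~ treat * female", data=df).fit()
print(fit.pvalues["treat:female"])  # only examine subgroup effects if this is significant
```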

14
Q
  1. Describe what is meant by sub-group analyses.
A

In general, subgroup analysis is done to find groups in which the treatment is and is not effective (e.g., does the treatment work in both men and women?).

Subgroups must be defined based on baseline characteristics (men/women, old/young); post-baseline characteristics can be affected by the treatment.

Before you look, in the framework of a linear model, test the interaction first and only look inside the subgroups if it is significant, to protect against making a type I error.

15
Q
  1. Describe the possible effect of the dichotomization of a continuous outcome on the power of a trial.
A

Dichotomization normally decreases power because it loses information; therefore a bigger sample size is needed (see the simulation sketch below).

Keep variables continuous unless you have to categorize them because of a modeling issue, e.g., a non-linear relationship.
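
A rough simulation sketch of the power loss from dichotomizing a continuous outcome at the median (sample size, effect size, and the median split are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, effect, sims = 100, 0.4, 2000     # per-group n, mean difference, simulations
hits_cont = hits_dich = 0
for _ in range(sims):
    ctrl = rng.normal(0.0, 1.0, n)
    trt = rng.normal(effect, 1.0, n)
    # analyze the continuous outcome directly
    hits_cont += stats.ttest_ind(trt, ctrl).pvalue < 0.05
    # dichotomize at the pooled median, then compare proportions
    cut = np.median(np.concatenate([ctrl, trt]))
    table = [[np.sum(trt > cut), np.sum(trt <= cut)],
             [np.sum(ctrl > cut), np.sum(ctrl <= cut)]]
    hits_dich += stats.chi2_contingency(table)[1] < 0.05
print("power, continuous outcome:  ", hits_cont / sims)
print("power, dichotomized outcome:", hits_dich / sims)
```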

16
Q
  1. Describe the possible/likely effect of dichotomization of a time to event outcome on the power of a trial.
A

Dichotomization of a time-to-event outcome normally decreases power because information is lost; therefore a bigger sample size is needed.

17
Q
  1. Describe the intention-to-treat and per-protocol principles, including their advantages and disadvantages.
A

Intention to treat

  • All randomized participants are included in the analysis according to their original assignment, regardless of adherence to the assigned intervention.
  • Advantage: preserves the randomization scheme, which is the basis for correct inference
  • Disadvantage: it does not remove bias introduced by loss to follow-up; it is best to improve the study design so that there is no loss to follow-up

Per-protocol analysis

  • Only participants who adhered to their original assignment are included. Those who did not adhere to their assigned intervention, were lost to follow-up, or were withdrawn from the trial are excluded.
  • Advantage: better reflects the effect of the treatment actually received and is useful for analysis of treatment adverse events. A per-protocol analysis may be done as a sensitivity analysis to check whether it reaches the same conclusion as the intention-to-treat analysis.
  • Disadvantage: breaks the randomization scheme, risk of bias, lower level of evidence (biased results)
18
Q
  1. Describe why it’s reasonable to plan for testing non-inferiority and then superiority, but not the other way around.
A

Non-inferiority is the gatekeeper, similar to ANOVA: only if the overall test is significant do we go in and look in more detail.

Test non-inferiority first because, if you get a significant result, it is then acceptable to test the superiority hypothesis.

If you test superiority first and then non-inferiority, you can detect a superior treatment but not a non-inferior one, because a non-inferior (but not superior) treatment would fail the first test and would never reach the non-inferiority test.

If you test non-inferiority first and then superiority, you can detect both non-inferior and superior treatments.

There is also a power issue: if delta 1 (the non-inferiority margin) and delta 2 (the superiority margin) are not equal, then in sequential testing one of the tests will be underpowered because of the multiple testing issue.

19
Q
  1. Be able to define what the margin of non-inferiority is. Also, be able to comment on the choice of this margin compared to an active control’s effect. For example, if an active control has a delta difference with placebo, in a non-inferiority trial for a new treatment compared to that active control, describe how the margin of non-inferiority may compare to this delta and the rationale.
A

The aims of a non-inferiority test are to show that:

  1. The new treatment is better than placebo
  2. The new treatment is as good as the active control, with some additional benefit (lower cost, less toxicity, easier administration)

Let Delta 1 be the active control versus placebo difference and Delta 2 be the non-inferiority margin for the new treatment versus the active control; Delta 2 must be smaller than Delta 1.

The margin of non-inferiority must be smaller than the difference observed in the superiority trial of the active comparator against placebo, to make sure the new treatment is still better than placebo.
A margin of 1/3 to 1/2 of the established superiority effect is commonly used, so that the new intervention can still be said to be better than placebo and similar to the active control with some additional benefit.
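
A hedged worked example with assumed numbers: if the active control beat placebo by 12 percentage points, a 1/3 to 1/2 margin would be about 4 to 6 points:

```python
delta_1 = 0.12                        # assumed active-control vs placebo effect
margin = (delta_1 / 3, delta_1 / 2)   # common 1/3 to 1/2 choices for the NI margin
print(margin)                         # (0.04, 0.06)
```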

20
Q
  1. Based on a figure of point estimates and confidence intervals, be able to explain whether one would conclude superiority, noninferiority, inconclusive, or inferior treatment.
A

Superiority: the CI lies wholly to the left of the null (e.g., 0), on the new-treatment-better side, and to the left of delta (the non-inferiority margin)

Noninferiority:

  1. CI lies WHOLLY to the left of delta (the non-inferiority margin); the point estimate is on the new-treatment-better side but the CI includes the null (0), so the treatment is not superior
  2. CI lies WHOLLY to the left of delta; the point estimate is on the new-treatment-worse side and the CI includes the null (0)
  3. CI lies WHOLLY to the left of delta; the point estimate is on the new-treatment-worse side and the CI lies WHOLLY to the right of the null (0); very rare and requires a large sample size

Inconclusive:

  1. CI includes delta; the point estimate is to the left of the null and the CI includes the null, so the difference is not significant and non-inferiority is inconclusive
  2. CI includes delta; the point estimate is to the right of the null and the CI includes the null, so the difference is not significant and non-inferiority is inconclusive
  3. CI includes delta; the point estimate is to the right of the null and the CI does not include zero, so the difference is statistically significant, but the result for non-inferiority is inconclusive

Inferiority:
1. CI lies WHOLLY to the right of the non-inferiority margin; you can conclude the new treatment is inferior

A decision-rule sketch follows.
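
A minimal decision-rule sketch for reading such a figure; it assumes the difference is new minus control on a harmful outcome, so values below 0 favor the new treatment and the margin is a positive number:

```python
def classify(ci_lower, ci_upper, margin):
    """Classify a comparison from a two-sided CI for (new minus control)."""
    if ci_upper < 0:
        return "superior (and non-inferior)"
    if ci_upper < margin:
        return "non-inferior (not superior)"
    if ci_lower > margin:
        return "inferior"
    return "inconclusive"

print(classify(-0.08, -0.02, 0.05))  # superiority
print(classify(-0.03, 0.02, 0.05))   # non-inferiority
print(classify(-0.02, 0.07, 0.05))   # inconclusive
print(classify(0.06, 0.12, 0.05))    # inferiority
```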

21
Q
  1. In cluster randomized trials, assume there is a positive correlation in the primary outcome for individuals within a cluster. Describe the impact on the alpha level if you ignore the design effect and analyze as if they are independent.
A

If you ignore the design effect, the type I error rate will exceed the nominal alpha because the variance will be biased downward (smaller than it should be), so the Z statistic will be larger than it should be and the p-value smaller than it should be.

22
Q
  1. Describe what the differences are in Type I and Type II errors between conventional designs and futility designs.
A

Conventional Design

  • Type I: concluding an ineffective treatment is effective
  • Type II: concluding an effective treatment is ineffective

Futility Design

  • Type I: concluding an effective treatment is futile (ineffective)
  • Type II: concluding an ineffective treatment is not futile (effective)
23
Q
  1. What are some specific features of futility designs that typically result in a smaller sample size compared to conventional efficacy designs?
A

The sample size is smaller in a futility design because it

  • is one-sided
  • has no concurrent control group (it uses a historical or calibration control)
  • uses a liberal alpha, e.g., 0.10
24
Q
  1. Write down the null and alternative hypotheses in a futility trial.
A

The hypotheses are flipped: rejection means the new treatment is futile and should not go on to phase III.

For a continuous outcome:
Let Delta be the increase considered a positive, clinically meaningful result.

Null: new treatment mean effect >= control mean effect + Delta (clinically significant effect)
Alternative: new treatment mean effect < control mean effect + Delta (clinically significant effect)

For a binary outcome:
Let Delta be the reduction in failure considered clinically meaningful.

Null: proportion expected to fail in the new treatment group <= proportion expected to fail in the control group − Delta (reduction in failure)
Alternative: proportion expected to fail in the new treatment group > proportion expected to fail in the control group − Delta (reduction in failure)

25
Q
  1. What are some strengths and limitations of the Continuous Reassessment Model (CRM)?
A

Strengths:

  1. More flexible, and more information (e.g., historical data) is used to decide the maximum tolerated dose level
  2. Limits aggressive dose escalation because the next cohort is assigned to the currently predicted best dose

Weaknesses:

  1. Requires a statistician
  2. Requires an assumption about the dose-toxicity relationship

The Continual Reassessment Method (CRM) is a Bayesian model that mathematically chooses the next dose level using the posterior distribution. In the CRM, a prior distribution (from historical controls, expert knowledge, publications) is combined with the likelihood from the observed data, and the posterior distribution (prior + likelihood) is used for the dose prediction/decision.

26
Q
  1. What are some strengths and limitations of the 3+3 design?
A

Strengths:

  1. Easy
  2. Straightforward

Limitations:

  1. Ignores dosage history other than the previous cohort of 3 patients
  2. Imprecise and inaccurate MTD estimation
  3. Low probability of selecting the true MTD
  4. High variability in MTD estimates
  5. Dangerous outcomes
27
Q
  1. In a 3+3 design, how is the maximum tolerated dose determined?
A

The MTD is the highest dose at which 0 or 1 DLT (out of 6) is observed.
The step-up/step-down (3+3) method is a rule-based dose escalation design; dose levels must be pre-specified (decision rule sketched below).
-Treat 3 participants at dose level K
-If no DLT, escalate to dose level K+1
-If 2 or more DLTs, de-escalate to dose level K-1
-If 1 DLT, add 3 more participants and observe. If only 1 DLT out of 6, escalate to K+1; if 2 or more DLTs out of 6, de-escalate to dose level K-1
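
A minimal sketch of the 3+3 decision rule at a single dose level (function name and return strings are just illustrative):

```python
def three_plus_three_decision(dlts, n_treated):
    """Decision at one dose level in a 3+3 design."""
    if n_treated == 3:
        if dlts == 0:
            return "escalate to dose level K+1"
        if dlts == 1:
            return "expand: treat 3 more participants at dose level K"
        return "de-escalate to dose level K-1"
    # after 6 participants at this dose level
    if dlts <= 1:
        return "escalate to dose level K+1"
    return "de-escalate to dose level K-1 (dose K is above the MTD)"

print(three_plus_three_decision(1, 3))  # expand the cohort
print(three_plus_three_decision(2, 6))  # de-escalate
```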

28
Q
  1. Describe what is meant by the efficacy/toxicity trade-off.
A

As dosage increases, in general, it is expected that efficacy will increase but so will toxicity. Thus there is a tradeoff between efficacy and toxicity as dosage increases and an “ideal” point at which a desirable efficacy is reached without overstepping a tolerable toxicity threshold.

29
Q
  1. Describe the primary goals of Phase 1 drug trials
A

The goal of Phase 1 is to determine the dose of the drug that is safe and most likely to show benefit: finding the maximum tolerated dose (the maximum dose before unacceptable toxicity is experienced).

30
Q
  1. Describe what Cohen’s effect size is and general guidelines for magnitude of effect sizes.
A
Cohen's d gives general guidelines for effect size:
d = (μ1 − μ2) / s_pooled
Small = 0.2
Medium = 0.5
Large = 0.8

You must have a rationale; previous data or literature can be used. The guidelines are based on behavioral science data. It is essentially a z-score for a two-sample test, with s_pooled in place of the SE (see the sketch below).
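
A minimal sketch of computing Cohen's d with the pooled SD (the simulated data are only for illustration):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples using the pooled SD."""
    nx, ny = len(x), len(y)
    s_pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                       / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / s_pooled

# simulated groups built to show roughly a "medium" effect
rng = np.random.default_rng(0)
print(cohens_d(rng.normal(0.5, 1, 50), rng.normal(0.0, 1, 50)))
```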

31
Q
  1. Explain and show what is the most efficient design in terms of allocation (balanced vs unbalanced) between two study groups. What is the effect of unbalanced designs?
A

The most efficient design is balanced allocation (optimal allocation is 50% in each group).

Unbalanced designs:

  • Power is driven by the number in the smallest group
  • 3:2 randomization requires about 4% inflation of the total sample size
  • 2:1 requires about 12.5% inflation
  • 3.71:1 requires about 50% inflation (see the check below)
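
A quick check of these inflation figures using the standard variance-inflation factor (1 + k)^2 / (4k) for a k:1 allocation ratio (equal variances assumed):

```python
def inflation(k):
    """Relative total sample size for a k:1 allocation vs 1:1 (same power)."""
    return (1 + k) ** 2 / (4 * k)

for k in [1.5, 2.0, 3.71]:   # 3:2, 2:1, and 3.71:1 allocation ratios
    print(f"{k}:1 allocation -> {100 * (inflation(k) - 1):.1f}% inflation")
```
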
32
Q
  1. Show what the risk is of not considering the clustering effect in sample size estimation for cluster randomized trials
A

You must take the correlation (clustering) into account when computing the sample size for a cluster randomized study; otherwise you may underestimate the variance (it will be biased downward) and the standard error, and hence underestimate the required sample size, which in turn increases false positives.

The inflation depends on the ICC (between-cluster variance / [between-cluster + within-cluster variance]): the higher the ICC, the larger the design effect and the required sample size (see the sketch below).
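
A minimal sketch of the design effect, 1 + (m − 1) × ICC, applied to an assumed individually randomized sample size; the cluster size and ICC are made-up values:

```python
def design_effect(m, icc):
    """Variance inflation for a cluster randomized trial with cluster size m."""
    return 1 + (m - 1) * icc

n_individual = 400          # assumed n from an individually randomized design
m, icc = 20, 0.05           # assumed cluster size and intracluster correlation
print(design_effect(m, icc))                 # 1.95
print(n_individual * design_effect(m, icc))  # 780 participants needed
```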

33
Q
  1. Show how an estimated sample size can incorporate an expected loss to follow-up rate.
A

Inflate the sample size (sketch below).
Example: 15% loss to follow-up at random
N_adjusted = N / (1 − 0.15)

If the 15% loss to follow-up is NOT at random:
N_adjusted = N / (1 − 0.15)²
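
A minimal sketch of this inflation rule (the squared-denominator adjustment for non-random loss is the rule from this card, not a universal formula):

```python
import math

def inflate_for_dropout(n, dropout, at_random=True):
    """Inflate a computed sample size n for an expected loss-to-follow-up rate."""
    factor = (1 - dropout) if at_random else (1 - dropout) ** 2
    return math.ceil(n / factor)

print(inflate_for_dropout(200, 0.15))                   # random loss: 236
print(inflate_for_dropout(200, 0.15, at_random=False))  # non-random loss: 277
```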

34
Q
  1. Identify how multiple comparisons can affect type I error and be able to show how a sample size calculation can incorporate a Bonferroni correction.
A

When you do multiple comparisons, you inflate the type I error rate, which results in false positives.
The Bonferroni correction adjusts alpha (alpha / number of tests) to avoid inflating the type I error.

We know that Z(1 − alpha/2) appears in the numerator of the sample size equation, so when we use a Bonferroni-corrected alpha, the required sample size increases (see the sketch below).
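
A minimal sketch of how a Bonferroni-corrected alpha feeds into a standard two-sample sample size formula (effect size, SD, power, and the number of tests are assumed values):

```python
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison of means (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

k = 5  # assumed number of primary comparisons
print(n_per_group(0.5, 1.0))                  # unadjusted alpha: ~63 per group
print(n_per_group(0.5, 1.0, alpha=0.05 / k))  # Bonferroni alpha: ~93 per group
```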

35
Q
  1. Describe the role of blinding and types of blinding
A

Unblinded trial (e.g., survival, device, or lifestyle-modification trials)
-Advantage: easier to design and conduct, less expensive
-Disadvantage: subject to biased results

Single-blinded trial: the participant is blinded

  • Advantage: similar to an unblinded trial, easy to carry out
  • Disadvantage: investigators can add bias because they are aware of the treatment assignment

Double-blinded trial: participants and investigators are blinded
Advantage: reduces risk of bias, accounts for the placebo effect
Disadvantage: expensive (great effort needed to manufacture a placebo)

Triple-blinded trial: participants, investigators, and the data monitoring committee are all blinded
Advantage: reduces risk of bias
Disadvantage: reduces the DMC's ability to monitor safety and efficacy

36
Q
  1. Describe the randomization process and different types of randomization procedures
A

Randomization tends to produce study groups comparable with respect to known and unknown risk factors.

Randomization removes bias in treatment assignment.

Randomization guarantees that statistical tests will have a valid significance level.

  1. Fixed allocation randomization: assigns the intervention with a pre-specified probability
  2. Simple randomization: toss of an unbiased coin
  3. Block randomization: guarantees equal sample sizes and is robust to time trends (sketched below)
  4. Stratified randomization: stratify by major prognostic variables or risk factors to achieve balanced allocation within subgroups
  5. Adaptive randomization: the allocation probability is not fixed and continues to change as the study progresses; you can adjust the allocation probability according to imbalances in the numbers of participants
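
A minimal sketch of permuted-block randomization (block size, arms, and seed are assumptions for illustration):

```python
import numpy as np

def block_randomize(n_blocks, block_size=4, arms=("A", "B"), seed=0):
    """Permuted-block randomization with equal allocation within each block."""
    rng = np.random.default_rng(seed)
    block = list(arms) * (block_size // len(arms))
    sequence = []
    for _ in range(n_blocks):
        sequence.extend(rng.permutation(block))
    return sequence

# allocation for 12 participants in blocks of 4; balanced within every block
print(block_randomize(3))
```
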
37
Q
  1. Identify the various types of control groups and their roles in different types of trials.
A

Placebo: same look and delivery method as the treatment, but lacks the active ingredient

No Treatment: Control group has no treatment

Standard of care: control group receives the standard of care that is currently in practice

Another treatment: not necessarily considered as “standard of care”

Historical control: the new treatment is used and compared with historical outcomes from a database

Prospective registries as controls

38
Q
  1. Identify issues with incorporating baseline measures of the outcome into a change-from-baseline outcome and be able to discuss alternative approaches.

What are some uses of baseline data in a clinical trial?

A

If the baseline is correlated with the outcome, using change from baseline can induce spurious correlation.

Percent change depends on what the baseline value is (we can have the same percent increase, but the magnitude of change is very different).

Adjusting for baseline values (e.g., analysis of covariance) reduces variability.

Misconception: using a subject as his/her own control eliminates the need for a separate control group. This is not true; it ignores regression to the mean: if the baseline measure is extreme, the post-intervention value is likely to be less extreme.

Common mistake: many times people forget to adjust for the baseline value.

  1. Describe the population that you can generalize to
  2. Determine whether randomization was successful
  3. Use in the statistical analysis to control for imbalances between groups
  4. Form the basis for subgroup analyses
39
Q
  1. Be able to explain properties of good surrogate outcomes and what their limitations are
A

Good surrogate

  • Strongly associated with the definitive outcome
  • Part of the causal pathway
  • Yield the same inference as the definitive outcome
  • Responsive to the treatment
  • Short latency

Rationale of using surrogate outcome

  • Can be measured earlier
  • Easier and more convenient to measure
  • Observe more frequently
  • Less affected by other factors than the true outcome

Limitation:
A surrogate may not fully capture the effect of the treatment on the definitive clinical outcome; it may or may not be a valid marker of the clinical illness

40
Q
  1. Identify differences in outcomes and how they would be analyzed

Difference between Prognostic Variable vs Surrogate Variable

A

Primary outcome:

  • Main outcome used to answer the main research question
  • Sample size and power are based on it
  • Must be able to be measured in ALL participants

Secondary Outcome:

  • Related to the primary outcome
  • Exploratory outcomes that may help generate new hypotheses for future studies
  • Not powered, so a causal relationship cannot be drawn

Surrogate outcome

  • An outcome that is measured in place of the "true" or "final" biological or clinical outcome
    e.g., change in cholesterol instead of mortality

Prognostic variable: predicts the clinical outcome (e.g., age); a surrogate variable predicts the effect of the intervention on the clinical outcome

41
Q
  1. Be able to explain how the eligibility criteria can affect generalizability and differences in results between phases of studies.
A

A critical distinction between phase II and phase III is that they have different goals.

A phase II study assesses efficacy, asking "Can the treatment work under controlled conditions?". A phase III study assesses the effectiveness of the intervention, asking "Does the treatment work under normal, real-world conditions?".

Phase II studies often include highly selected individuals who are likely to respond to the intervention; they are a more homogeneous population.

A phase III study is pragmatic and includes a heterogeneous population.

The estimated treatment effect used for the sample size calculation may be based on a different population with different eligibility criteria; that is, the parameters used to estimate the sample size may come from a different population.

Thus, significant results seen in phase II may not be significant in phase III because the pool of participants may differ (homogeneous vs heterogeneous), the parameters used in phase II and phase III may differ, and the inclusion criteria used in phase II and phase III may differ.

Many clinical trials are conducted in academic centers, where the population may differ from the general public.

42
Q
  1. Define Phase I-IV trials
A

Phase I (safety trial): the goal is to determine which dose of the drug is safe and most likely to show benefit

Phase II (efficacy trial): the goal is to identify agents with potential efficacy (usually not designed to definitively test efficacy), discard agents without promise, and provide a rationale for continuing to the next phase

Phase III (effectiveness trial): the goal is to find the best treatment, with the implication of changing current practice in treating patients; compares the new Tx with the standard Tx

Phase IV (post-marketing surveillance study): the goal is to study long-term effects and adverse effects after the treatment has been applied to a large number of patients followed for a long period of time

43
Q
  1. Definition of a clinical trial
A

A clinical trial is a prospective study comparing the effect of an intervention against a control in humans.

44
Q
  1. Be able to identify different types of bias including their definitions
A

Bias is a systematic error that leads to an incorrect estimate of association.

Information bias: misclassification of information/data collected

Selection bias: participants have different probabilities of being included in the study sample based on exposure and outcome

  • It can be introduced by the investigator (how subjects are chosen)
  • It can be introduced by participants (self-selection)

Recall bias: bias that occurs when participants are asked to recall events in the past that they may recall differently depending on the outcome

Ascertainment bias: bias that occurs when there is more intense screening for the outcome among exposed individuals than among unexposed individuals

Confounding: a variable that is associated with both the outcome and the exposure, is not on the causal pathway, and distorts the true relationship between exposure and outcome

Publication bias: studies with significant results are more likely to be published than studies with non-significant results

45
Q
  1. Describe different types of studies and be able to rank in terms of strength of evidence
    Rank in terms of strength of evidence (strong to weak)
A
  1. Clinical trial (a prospective study comparing the effect of an intervention against a control in humans)
  2. Cohort study (an observational study: identify groups without the outcome of interest based on exposure status and follow them over time; the goal is to compare the incidence of the outcome. Advantage: cohorts can be matched, cheaper than a randomized trial. Disadvantage: can take a long time)
  3. Case-control study (define groups based on disease status and retrospectively look for the presence of exposure in cases and controls; the measure of association is the odds ratio of exposure. Advantage: good for rare diseases, takes less time. Disadvantage: recall bias, finding controls can be challenging)
  4. Cross-sectional study (known as a prevalence study; assesses point prevalence; good for studying trends)
  5. Case series (articles that describe individual cases. Advantage: can help identify new trends and detect new side effects. Disadvantage: lack of generalizability, cannot draw associations)

Criteria supporting a causal association:

  1. Temporality
  2. Strength
  3. Reversibility (if the cause is removed, the effect should disappear as well)
  4. Dose-response
  5. Consistency (consistent findings observed by different persons in different places)
  6. Biological plausibility (a plausible mechanism between cause and effect)
  7. Analogy (similarities between the observed association and other established associations)