4 - Therapies + RCTs Flashcards
what is the most important consideration in the design of any experiment?
- to minimize error
explain the association of power and error
- power = ability to statistically detect a diff btw groups when one exists (signal)
- error = unexpected variability within the outcome (noise)
- therefore finding a S.S. diff btw groups = signal/noise!
- little we can do about signal, much can be done about noise reduction!
what is total error made up of?
- systematic and random error
define systematic vs random error - how can it be prevented?
- systematic = bias, variability in the outcome that can be prevented or explained (threatens validity)
- random: variability in the outcome that cannot be explained (threatens precision and external validity)
- there will always be some amount of error
- prevented through design or removal during analysis
how can we increase power? (3)
- decrease noise
- increase signal
- lower standards (increase willingness to accept t1 error)
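- below, a minimal Monte Carlo sketch (made-up numbers, illustrative simulated_power helper) of those three levers: less noise, more signal, or a looser alpha each raise power

```python
# Minimal power-as-signal/noise sketch: simulate many two-group trials and count how
# often a real difference is detected. All numbers are hypothetical.
import numpy as np
from scipy import stats

def simulated_power(mean_diff, sd, n_per_group, alpha=0.05, n_sims=2000, seed=0):
    """Fraction of simulated trials in which a true difference is detected."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_group)
        treatment = rng.normal(mean_diff, sd, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        hits += p < alpha
    return hits / n_sims

print(simulated_power(mean_diff=5, sd=20, n_per_group=50))                # baseline
print(simulated_power(mean_diff=5, sd=10, n_per_group=50))                # less noise -> more power
print(simulated_power(mean_diff=10, sd=20, n_per_group=50))               # bigger signal -> more power
print(simulated_power(mean_diff=5, sd=20, n_per_group=50, alpha=0.10))    # lower standards -> more power
```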
describe the conventional design of an RCT
- p 52
describe importance of control groups and 2 types of control groups
- important for controlling threats to internal validity (did the change happen because of the treatment, or by chance / would it have happened anyway?)
- no-treatment control and standard-of-care control
what is the standard of care control group
- standard medical treatment is provided to all participants (the control group receives usual care rather than placebo or nothing)
- less statistically powerful (delta is smaller)
- may be more ecologically valid
- can’t compare with placebo if another valid standard of care exists (ethics)
what is the no treatment control group
- may be limited to treatments with wait lists
- more statistically powerful than standard of care control group (delta is bigger)
- bigger signal, increased power, decreased n-size requirements
name some features that limit bias in a design
- balance of prognostic factors
- randomization
- allocation concealment
- blinding
- standardization of protocol
- intent-to-treat analysis
- completeness of follow-up
describe the balance of prognostic factors
- prognostic factors are baseline characteristics that predict outcome; control for them up front (in the design) or at the end through analytical methods
- check the baseline table (should be provided by the researchers) to make sure important characteristics are similar btw the 2 groups
- note there is no point in adding p values for these baseline comparisons (redundant and uses up our alpha)
- p 53
what are the 4 types of randomization?
- simple
- stratified
- blocked
- minimization
what is stratified randomization?
- separating samples into several subsamples to balance prognostic factors btw groups
- ie males and females are each randomized separately so both sexes end up split equally between tx and ct
- good for smaller studies
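- a hypothetical sketch of the idea: randomize within each stratum (here, sex) so that factor stays balanced between groups; IDs and labels are made up

```python
# Stratified randomization sketch: shuffle within each stratum, then deal members out
# alternately so each stratum is split (nearly) evenly across the groups.
import random

def stratified_randomize(participants, stratum_of, groups=("treatment", "control"), seed=0):
    """participants: list of IDs; stratum_of: dict mapping ID -> stratum label."""
    rng = random.Random(seed)
    allocation = {}
    for stratum in set(stratum_of.values()):
        members = [p for p in participants if stratum_of[p] == stratum]
        rng.shuffle(members)
        for i, p in enumerate(members):
            allocation[p] = groups[i % len(groups)]
    return allocation

ids = ["p1", "p2", "p3", "p4", "p5", "p6"]
sex = {"p1": "M", "p2": "F", "p3": "M", "p4": "F", "p5": "M", "p6": "F"}
print(stratified_randomize(ids, sex))
```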
what is blocked randomization?
- controls allocation of participants so there is an equal distribution of participants btw groups
- blocks are multiples of the number of groups you have (ie if you have 3 groups, block size can be 3, 6, or 9)
- good for smaller studies
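- a hypothetical sketch of blocked randomization: within each block every group appears equally often, so group sizes never drift far apart

```python
# Blocked randomization sketch: build blocks that contain each group equally often
# (block size is a multiple of the number of groups), shuffle within each block.
import random

def blocked_randomize(n_participants, groups=("A", "B"), block_size=4, seed=0):
    assert block_size % len(groups) == 0, "block size must be a multiple of the number of groups"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)              # random order within the block
        sequence.extend(block)
    return sequence[:n_participants]

print(blocked_randomize(10))            # e.g. ['B', 'A', 'A', 'B', 'A', 'B', ...]
```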
what is minimization randomization?
- for each candidate treatment group, calculate the imbalance within each prognostic factor that would result if the patient were allocated to that group; add the imbalances together to get the overall study imbalance; assign the patient to the group that minimizes it
- uses computer algorithm
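- one possible sketch of that idea (the imbalance measure and patient data are illustrative; real algorithms often add a random element to the assignment)

```python
# Minimization sketch: for each candidate group, count how imbalanced each prognostic
# factor would become if the new patient joined that group, sum across factors, and
# assign the patient to the group with the smallest total imbalance.
import random
from collections import defaultdict

def minimization_assign(new_patient, allocated, groups=("treatment", "control"),
                        factors=("sex", "age_band"), rng=random):
    """allocated: list of (patient_dict, group) pairs already in the trial."""
    totals = {}
    for g in groups:
        imbalance = 0
        for factor in factors:
            level = new_patient[factor]
            counts = defaultdict(int)
            for patient, grp in allocated:          # existing patients sharing this level
                if patient[factor] == level:
                    counts[grp] += 1
            counts[g] += 1                           # as if the new patient joined group g
            imbalance += max(counts[x] for x in groups) - min(counts[x] for x in groups)
        totals[g] = imbalance
    best = min(totals.values())
    return rng.choice([g for g, t in totals.items() if t == best])

allocated = [({"sex": "M", "age_band": "old"}, "treatment"),
             ({"sex": "F", "age_band": "old"}, "control")]
print(minimization_assign({"sex": "M", "age_band": "young"}, allocated))
```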
what is allocation concealment?
- person making decision about patient eligibility is unaware of which group they are assigned to until decision about eligibility has been made
- inadequate concealment is an internal validity error!
what is selection bias?
- systematic errors in the measurement of the effect of treatment due to differences btw those who are selected and those who are not
- internal validity error!
how do we implement allocation concealment? how does inadequate allocation concealment affect results?
- note that allocation concealment can ALWAYS be done!
- safeguard allocation procedure before and until allocation is done
- use web-based (best), central call-in, independent source, envelopes, etc
- trials where AC was inadequate demonstrate, on average, a 30-40% larger treatment effect
what is simple randomization?
- like flipping a coin
- good for larger studies
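- a tiny illustrative sketch: each participant gets an independent "coin flip", so group sizes can drift apart in small trials

```python
# Simple randomization sketch: one independent coin flip per participant.
import random

rng = random.Random(0)
allocation = ["treatment" if rng.random() < 0.5 else "control" for _ in range(20)]
print(allocation.count("treatment"), allocation.count("control"))  # rarely exactly 10/10
```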
define blinding and how important it is
- study participants (including clinicians, patients etc) are unaware of the group to which patients are assigned
- importance depends on subjectivity of the outcome (eg life/death is very objective, but cause of death is more subjective; as soon as we step away from an extremely objective outcome, blinding is needed)
- cannot always be implemented
- safeguards randomization sequence AFTER allocation
how does lack of blinding affect results?
- trials where blinding was inadequate demonstrate on average 17% overestimation of treatment effect
what is a detection bias? how to prevent?
- you find something bc you are expecting it
- prevent by keeping the tests and test frequency the same in both groups!
- in terms of clinician blinding, when different testing or frequency of tests/seeing the patient occurs
- p 58
what is interviewer bias
- the interviewer probes some participants more than others depending on their group
what is expectation bias
- interviewer's knowledge of the group influences their expectation of finding an outcome (acting surprised, etc)
what is an observer-expectancy effect?
- when an experimenter subtly influences the participant, causing them to behave in line with the experimenter's expectations
what is a subject-expectancy effect?
- when a participant acts in a certain way due to their expected outcome
define placebo effect
- effect of treatment independent of its biological effect
describe the demand characteristics
- faithful participant role: trying to follow instructions exactly
- good participant role: trying to determine the research hypothesis and confirm it
- negative participant role: trying to determine the research hypothesis and disconfirm it
- apprehensive participant role: participant is concerned about the researcher's opinion of them and changes behaviour accordingly
look at blinding data analyst slide
- p 56
what do we think about measuring blinding success?
- not really necessary bc even if blinding is successful participants/experimenters may still demonstrate expectancy bias
what is contamination vs co-intervention?
- contamination: individuals within the control group obtain the treatment outside the experiment, ie one or both groups receive the other group's treatment (increases noise, decreases power); gives a diluted signal (ITT) or biased results (non-ITT)
- co-intervention: individuals within the control group obtain effective treatments other than the treatment under study (dilution of results, don't know what caused what)
- prevent w rules about timing and types of co-interventions permitted, record keeping and post hoc adjustment for imbalances
- for both prevent with standardizing protocol, same freq and test type
what is intention to treat analysis (ITT)?
- aka “as randomized” analysis
- patients analyzed within their allocated group (not according to what they received), whether or not they received, were adherent to, or completed the protocol
- minimizes t1e (preserves prognostic balance of group)
- can contribute to t2e bc of contamination
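- an illustrative pandas sketch (column names and outcomes made up) contrasting the ITT "as randomized" grouping with an "as treated" grouping when some patients cross over

```python
# ITT sketch: analyze patients by the group they were allocated to, regardless of
# what they actually received; contrast with an "as treated" grouping.
import pandas as pd

df = pd.DataFrame({
    "assigned": ["treatment", "treatment", "control", "control"],
    "received": ["treatment", "control",   "control", "treatment"],  # cross-overs / contamination
    "outcome":  [12.0, 7.0, 6.0, 11.0],
})

itt = df.groupby("assigned")["outcome"].mean()         # ITT: by allocated group
as_treated = df.groupby("received")["outcome"].mean()  # per-protocol-style comparison
print(itt, as_treated, sep="\n")
```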
what are exceptions to ITT? aka when is it less of a threat to validity to exclude patients from the analysis? and how are they removed?
- patients without the disorder, patients never eligible for participation (accidentally entered through error)
- must be removed by an independent adjudication committee blind to group allocation before randomization!
why does missing data threaten validity and does inflating n size fix the problem?
- data are rarely missing for trivial reasons
- no! - people who drop out cannot be replaced (bc you will get someone who is not the same as the people who dropped out)
- want to see less than 20% missing overall, if % missing is diff btw groups may be due to treatment!
define the 3 types of missing data
- MCAR - does not depend on observed or unobserved outcome (ie car breaks down) - can threaten precision (smaller n-size) but not validity of study!
- MAR - missingness depends on observed variables, not on the unobserved outcome itself (eg a storm keeps all patients from Toronto from coming)
- MNAR - related to the outcome (patient does not want to return bc they got better or worse) no imputation methods are appropriate for dealing w this type of data!
- note that data missing completely at random only influences precision and any other types of missing data threaten t1e AND precision
describe excluding all patients w missing data method
- easy default to most statistical packages
- increase t2e
- threatens internal validity (does not follow ITT) - this increases possibility of t1e!
- decreases precision (bc of decreasing n-size)
describe assuming worst/best-case scenario method
- assume worst for treatment/best for control (increasing t2e!)
- not appropriate for longitudinal data w missing mid-point data (bc there are points on either side of the missing data)
- may be overly conservative
- usually used when we don’t know what happened at the end of a study
describe last outcome carried forward method
- may be too conservative
- not appropriate for longitudinal w missing mid-point
- trajectory of change is ignored
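- a small pandas sketch of the LOCF idea on made-up longitudinal data: each patient's last recorded score is carried into later missing visits, ignoring any trajectory of change

```python
# Last-observation-carried-forward sketch: forward-fill within each patient.
import numpy as np
import pandas as pd

long = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "visit": [0, 1, 2, 0, 1, 2],
    "score": [10.0, 12.0, np.nan, 9.0, np.nan, np.nan],
})
long["score_locf"] = long.sort_values(["id", "visit"]).groupby("id")["score"].ffill()
print(long)
```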
describe growth curve analysis
- good for mid-point data
- requires at least 2 data points
- not appropriate for missing endpoint data
- decreases variability (ie narrows CIs), increasing the chance of t1e
describe regression methods
- allows for examination of longitudinal trends
- decreases variability (ie narrows CIs), increasing the chance of t1e
- where did the avg person score at 4 weeks for example
- added precision, reduced noise
- more missing data = worse
describe multiple imputation methods
- allows for longitudinal trend examination
- imputes the missing values more than once (several completed datasets), which adds some variability back into the data (decreasing t1e)
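- a very simplified sketch of that idea on made-up data (real MI imputes from a proper model and pools with Rubin's rules): fill the holes several times with draws that include noise, analyze each completed dataset, then pool

```python
# Multiple-imputation sketch: m imputed datasets, one analysis per dataset, pooled estimate,
# with the spread across imputations reflecting uncertainty about the missing values.
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([10.0, 12.0, 9.0, 11.0, 13.0])
n_missing, m = 3, 5                                  # 3 missing values, 5 imputed datasets

estimates = []
for _ in range(m):
    imputed = rng.normal(observed.mean(), observed.std(ddof=1), n_missing)
    completed = np.concatenate([observed, imputed])
    estimates.append(completed.mean())               # the analysis of interest, run per dataset

pooled = np.mean(estimates)
between_imputation_var = np.var(estimates, ddof=1)
print(pooled, between_imputation_var)
```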
describe mixed model analysis
- useful when measurements are not taken at fixed visit times
- includes fixed and random effects
- time recorded as “days since intervention” not by fixed visit interval
- can’t be used for endpoint data
- recommended bc it leaves all patients in the analysis without making assumptions, and it does not artificially reduce variability (ie by imputing avg values for missing data points)
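- a hedged sketch using statsmodels' mixedlm with a random intercept per patient and time entered as "days since intervention"; the data below are fabricated

```python
# Mixed-model sketch: patients with missing or irregularly timed visits simply contribute
# the visits they do have; a random intercept captures between-patient variability.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(12):
    baseline = rng.normal(20, 3)
    for days in rng.choice(60, size=4, replace=False):     # irregular visit times
        rows.append({"id": pid, "days": days,
                     "score": baseline - 0.1 * days + rng.normal(0, 1)})
df = pd.DataFrame(rows)

model = smf.mixedlm("score ~ days", data=df, groups=df["id"])   # random intercept per patient
result = model.fit()
print(result.params)                                            # fixed-effect estimates
```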
is the validity of the study threatened by missing data?
- imputation should only ever improve precision, validity should not change! if it does, this is an n-size issue (not big enough) - ie conclusion should not change!
- see p 61