Key Sources Flashcards
Petticrew, 2011
Moderator analyses carry important implications for the ‘equity’ of social intervention effects; key topic in public health
Richard Peto
Expressed misgivings about moderator analyses in trials:
o “Only one thing is worse than doing subgroup analyses – believing the results”
Rothwell, 2005; Wang and Ware, 2013
To prevent ‘cherry picking’ of results, hypotheses should be explicitly pre-specified as confirmatory or exploratory, with a rationale for each
Thompson and Higgins, 2005
Limitations of analyzing moderators in systematic reviews:
o Power still low: although total Ns may be higher, subgroups are coded at the trial level, so within-trial variability in participant characteristics is masked (see the sketch below)
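A minimal simulated sketch (Python; not from Thompson and Higgins, with all variable names and effect sizes purely illustrative) of why trial-level coding masks moderation: an individual-level interaction test sees the full within-trial spread of a participant characteristic, while a trial-level meta-regression sees only each trial's mean value.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulate 20 trials whose participants vary widely in age within each trial,
# while trial-to-trial mean age varies only a little.
rows = []
for trial in range(20):
    n = 100
    mean_age = 50 + rng.normal(0, 2)      # little between-trial spread
    age = rng.normal(mean_age, 10, n)     # wide within-trial spread
    treat = rng.integers(0, 2, n)
    # True moderation: the treatment effect grows with age.
    outcome = treat * (0.2 + 0.02 * (age - 50)) + rng.normal(0, 1, n)
    rows.append(pd.DataFrame(
        {"trial": trial, "age": age, "treat": treat, "outcome": outcome}))
data = pd.concat(rows, ignore_index=True)

# Individual-participant analysis: test the treatment-by-age interaction directly.
ipd = smf.ols("outcome ~ treat * age", data=data).fit()
print("individual-level interaction p:", ipd.pvalues["treat:age"])

# Trial-level analysis (what a systematic review typically has): one effect
# estimate and one mean covariate value per trial, so most age variation is lost.
summary = data.groupby("trial").agg(mean_age=("age", "mean"))
summary["effect"] = (data[data.treat == 1].groupby("trial")["outcome"].mean()
                     - data[data.treat == 0].groupby("trial")["outcome"].mean())
meta = smf.ols("effect ~ mean_age", data=summary).fit()
print("trial-level moderator p:       ", meta.pvalues["mean_age"])
```

Because nearly all of the age variation lies within trials, the trial-level regression typically fails to detect a moderator effect that the individual-level interaction test picks up easily.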
Lipsey, 2003
Limitations of analyzing moderators in systematic reviews:
o Moderators still confounded
Leijten et al., 2013
Examined how socioeconomic status (SES) moderator effects were confounded by other risk factors, such as problem severity, in an attempt to overcome the confounding problems common to moderator analyses
Bonell et al., 2012
We can derive mediator hypotheses from theory, or from qualitative methods; e.g., users’ views on process
The importance of theory in evaluation:
o RCTs provide relatively simplistic tests of theories: the controlled design requires few assumptions (inputs, outcomes)
o This approach to evidence generation is often orientated towards accreditation of interventions rather than tests of causal theories
o But for social interventions causal pathways may not be straightforward and allocation to interventions may be uncontrolled
o We need to know how interventions work
Moore et al., 2014
Importance of process evaluations:
o “In order for evaluations to inform policy and practice, emphasis is needed not only on whether interventions ‘worked’ but on how they were implemented, their causal mechanisms, and how effects differed from one context to another”
‘Black box’ critique:
o “…the reader is left with data on whether or not an intervention works, but little insight into what the intervention is”
Freedman, 1987
Equipoise (a key feature of RCTs): genuine uncertainty about the relative merits of the treatments being compared; without it, a trial is unethical
Drummond et al., 2005
Naming alternatives and explicitly considering them in an economic evaluation:
o …what are the alternatives?
o …what is the perspective?
o …what does the economic evaluation tell us that an ‘educated guess’ won’t?
McCord, 2003
In her examination of the negative outcomes of the Cambridge-Somerville Youth Study and a host of other crime prevention programmes ultimately shown to cause harm, Joan McCord advocates an approach to assessing social interventions that takes into account more than mere efficacy, also looking into a programme’s safety measures and possible iatrogenic effects. Her aim is to demonstrate that simply asking whether an intervention ‘works’ fails to capture crucial considerations related to propensity for harm
In large part due to its uniquely sustained efforts in keeping records on the life outcomes of its participants, the Cambridge-Somerville Youth Study occupies a foundational role in the ever-expanding literature on the subject of social interventions that harm
However, the continued impact of the Cambridge-Somerville Study cannot be attributed solely to the shocking findings of its follow-up inquiries, which determined that boys at risk of juvenile delinquency who were assigned to the treatment group died an average of five years earlier than their counterparts in the control group and were at greater risk of eventually receiving a serious mental health diagnosis
Rather, this study continues to wield such influence because it so clearly exemplifies the increasingly recognized fact that conscientious study design, sufficient funding, proper execution, and the best of intentions do not necessarily culminate in an ‘effective’ intervention, or act as sufficient deterrents against undesirable outcomes. The potential for harm must be considered from the outset, and any hypothesized ‘harm-inducing’ mechanisms scrupulously monitored throughout the duration of the intervention
Merton, 1936
Ignorance, alongside limitations in the existing state of knowledge, is a driving factor of the ‘unanticipated consequences’ of purposive social action, as analyzed by Robert K. Merton
Lorenc, 2014
Theo Lorenc (2014) proposes the following typology of harm:
o Direct harms, in which the outcomes desired are directly associated with adverse effects
o Psychological harms, in which an intervention yields negative mental health impacts
o Equity harms, in which an intervention worsens existing social inequalities
o Group and social harms, in which harm is generated by the singling out or bringing together of a certain group
o Opportunity harms, in which, by favoring a particular intervention over others, we forfeit claim to any potential benefits associated with alternative courses of action
Cook, 2000
“It should be possible to construct and justify a theory-based form of evaluation that complements experiments…It would prompt experimenters to be more thoughtful about how they conceptualise, measure, and analyse intervening processes. It would also remind them of the need to first probe whether an intervention leads to changes in each of the theoretically specified intervening processes…”
Cook and Campbell, 1979
“From Popper’s work, we recognize the necessity to proceed less by seeking to confirm theoretical predictions about causal connections than by seeking to falsify them. For Popper, the process of falsification requires putting our theories into competition with each other”
Typology of validity:
Enumerates four distinct yet complementary components of validity to be heeded in social research settings: (1) internal validity, (2) external validity, (3) construct validity, and (4) statistical conclusion validity
Martinson, 1974
In the absence of theories: null findings
o “What works? Questions and answers about prison reform” The Public Interest, 35, 22-54
o Review of rehabilitative interventions for reducing recidivism
o Widely interpreted as “nothing works” in prison rehabilitation
o Led some to criticize the investment of resources in prisoner rehabilitation
o But led others to criticize the methodological status quo and ask why rehabilitative programmes don’t work
Coryn et al., 2011
Definition #1 of theory-based evaluation:
o “Theory-driven evaluation is…any evaluation strategy or approach that explicitly integrates and uses stakeholder, social science, some combination of, or other types of theories in conceptualizing, designing, conducting, interpreting, and applying an evaluation”
Weiss, 2000
Definition #2 of theory-based evaluation:
o “It helps to specify not only the what of programme outcomes but also the how and the why. Theory-based evaluation tests the links between what programmes assume their activities are accomplishing and what actually happens at each step along the way”
Stockwell and Gruenewald, 2004
Theory-based evaluation example: Limiting the physical availability of alcohol to reduce alcohol-related harm
o “Efforts to control alcohol availability to reduce alcohol-related harms have been based on the view that ‘less is best’; i.e. the less alcohol available the better for public health and safety”
“Availability theory” – 3 related propositions:
- (1) The greater the availability of alcohol, the higher the average consumption of alcohol
- (2) The higher the average consumption, the greater the number of excessive drinkers
- (3) The greater the number of excessive drinkers, the greater the prevalence of health and social problems
Prevention might include:
- Placing restrictions on number of premises (i.e., spatial availability)
- Placing restrictions on the times at which alcohol can be sold (i.e., temporal availability)
- Placing restrictions on consumption for population groups (e.g., minimum legal drinking age)
Humphreys and Eisner, 2014
Found little evidence that the Licensing Act (2003) achieved its aims – the Act did not lead bars and nightclubs to change their trading hours a great deal
Craig et al., 2008
Updated guidance from the Medical Research Council (MRC) emphasizes the importance of conducting process evaluations within intervention trials:
“Process evaluations, which explore the way in which the intervention under study is implemented, can provide valuable insight into why an intervention fails or has unexpected consequences, or why a successful intervention works and how it can be optimised. A process evaluation nested inside a trial can be used to assess fidelity and quality of implementation, clarify causal mechanisms, and identify contextual factors associated with variation in outcomes”
“Best practice is to develop interventions systematically, using the best available evidence and appropriate theory, then to test them using a carefully phased approach, starting with a series of pilot studies targeted at each of the key uncertainties in the design, and moving on to an exploratory and then a definitive evaluation”
Moore et al., 2015
“An intervention may have limited effects either because of weaknesses in its design or because it is not properly implemented. On the other hand, positive outcomes can sometimes be achieved even when an intervention was not delivered fully as intended. Hence, to begin to enable conclusions about what works, process evaluation will usually aim to capture fidelity (whether the intervention was delivered as intended) and dose (the quantity of intervention implemented). Complex interventions usually undergo some tailoring when implemented in different contexts. Capturing what is delivered in practice, with close reference to the theory of the intervention, can enable evaluators to distinguish between adaptations to make the intervention fit different contexts and changes that undermine intervention fidelity”
“Complex interventions work by introducing mechanisms that are sufficiently suited to their context to produce change, while causes of problems targeted by interventions may differ from one context to another. Understanding context is therefore critical in interpreting the findings of a specific evaluation and generalising beyond it. Even where an intervention itself is relatively simple, its interaction with its context may still be highly complex”
Baron and Kenny, 1986
VALUE OF MEDIATOR ANALYSES
o Operating under the basic assumption that a causal relationship exists between an intervention and an observed outcome (an assumption made reasonable in the context of RCTs by randomization), we can conceptually frame a mediator as an intervening variable on the causal pathway of an intervention, responsible for shaping the relationship between stimulus and response; a sketch of the classic test follows below
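A minimal sketch of the classic Baron and Kenny three-step regression test for mediation, on simulated data (Python; the variable names, effect sizes, and statsmodels usage are illustrative assumptions, not taken from the source):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated randomized trial in which the treatment works through the mediator.
treatment = rng.integers(0, 2, n)
mediator = 0.5 * treatment + rng.normal(size=n)
outcome = 0.4 * mediator + rng.normal(size=n)
df = pd.DataFrame({"treatment": treatment,
                   "mediator": mediator,
                   "outcome": outcome})

# Step 1 (path c): the treatment predicts the outcome (total effect).
step1 = smf.ols("outcome ~ treatment", data=df).fit()

# Step 2 (path a): the treatment predicts the mediator.
step2 = smf.ols("mediator ~ treatment", data=df).fit()

# Step 3 (paths b and c'): the mediator predicts the outcome controlling for
# treatment; the treatment coefficient should shrink relative to step 1 if the
# mediator carries part of the causal pathway.
step3 = smf.ols("outcome ~ treatment + mediator", data=df).fit()

print("total effect  c :", round(step1.params["treatment"], 3))
print("path a          :", round(step2.params["treatment"], 3))
print("path b          :", round(step3.params["mediator"], 3))
print("direct effect c':", round(step3.params["treatment"], 3))
```

Mediation is suggested when paths a and b are both nonzero and the direct effect c' is smaller than the total effect c; modern practice typically supplements this logic with a formal test of the indirect effect a*b (e.g., bootstrapping).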
Kraemer, Wilson, Fairburn, and Agras, 2002
VALUE OF MEDIATOR ANALYSES
o By striving to fully appreciate “the mechanisms through which treatments operate,” we find ourselves better equipped to maximize treatment effectiveness, and simultaneously more able to reduce the monetary and human costs associated with a given treatment—“Active therapeutic components could be intensified and refined, whereas inactive or redundant elements could be discarded”