Key Sources Flashcards
Petticrew, 2011
Moderator analyses carry important implications for the ‘equity’ of social intervention effects; key topic in public health
Richard Peto
Expressed misgivings about moderator analyses in trials:
o “Only one thing is worse than doing subgroup analyses – believing the results”
Rothwell, 2005; Wang and Ware, 2013
To prevent ‘cherry picking’ of results we need explicit pre-specification of hypotheses—confirmatory or exploratory—plus rationale
Thompson and Higgins, 2005
Limitations of analyzing moderators in systematic reviews:
o Power still low: although total Ns may be higher, moderators are coded at trial level, meaning all within-trial variability in participant characteristics is masked
Lipsey, 2003
Limitations of analyzing moderators in systematic reviews:
o Moderators still confounded
Leijten et al., 2013
Examined how socioeconomic status (SES) moderator effects were confounded by other risk factors, such as problem severity, in an attempt to overcome the confounding problems common to moderator analyses
Bonell et al., 2012
We can derive mediator hypotheses from theory, or from qualitative methods; e.g., users’ views on process
The importance of theory in evaluation:
o RCTs provide relatively simplistic tests of theories – controlled design requires relatively few assumptions (inputs, outcomes)
o This approach to evidence generation is often orientated towards accreditation of interventions rather than tests of causal theories
o But for social interventions causal pathways may not be straightforward and allocation to interventions may be uncontrolled
o We need to know how interventions work
Moore et al., 2014
Importance of process evaluations:
o “In order for evaluations to inform policy and practice, emphasis is needed not only on whether interventions ‘worked’ but on how they were implemented, their causal mechanisms, and how effects differed from one context to another”
‘Black box’ critique:
o “…the reader is left with data on whether or not an intervention works, but little insight into what the intervention is”
Freedman, 1987
Equipoise (a key feature of RCTs): Genuine uncertainty about the relative merits of the treatments being compared; unethical otherwise
Drummond et al., 2005
Naming alternatives and explicitly considering them in an economic evaluation:
o …what are the alternatives?
o …what is the perspective?
o …what does the economic evaluation tell us that an ‘educated guess’ won’t?
McCord, 2003
In her examination of the negative outcomes of the Cambridge-Somerville Youth Study and a host of other crime prevention programmes ultimately shown to cause harm, Joan McCord advocates an approach to assessing social interventions that looks beyond mere efficacy to a programme's safety measures and possible iatrogenic effects. Her aim is to demonstrate that simply asking whether an intervention 'works' fails to capture crucial considerations related to its propensity for harm
In large part due to its uniquely sustained efforts in keeping records on the life outcomes of its participants, the Cambridge-Somerville Youth Study occupies a foundational role in the ever-expanding literature on the subject of social interventions that harm
However, the continued impact of the Cambridge-Somerville Study cannot be attributed solely to the shocking findings of its follow-up inquiries: boys at risk of juvenile delinquency who were assigned to the treatment group died an average of five years earlier than their counterparts in the control group, and were at greater risk of eventually receiving a serious mental health diagnosis
Rather, this study continues to wield such influence because it so clearly exemplifies an increasingly recognized fact: conscientious study design, sufficient funding, proper execution, and the best of intentions do not necessarily culminate in an 'effective' intervention, nor do they act as sufficient deterrents against undesirable outcomes. The potential for harm must be considered from the outset, and any hypothesized 'harm-inducing' mechanisms scrupulously monitored throughout the duration of the intervention
Merton, 1936
Alongside limitations in the existing state of knowledge, ignorance constitutes a driving factor of ‘unexpected consequences of conduct’ in purposive social action, as defined by Robert K. Merton
Lorenc, 2014
Theo Lorenc (2014) proposes the following typology of harm:
o Direct harms, in which the outcomes desired are directly associated with adverse effects
o Psychological harms, in which an intervention yields negative mental health impacts
o Equity harms, in which an intervention worsens existing social inequalities
o Group and social harms, in which harm is generated by the singling out or bringing together of a certain group
o Opportunity harms, in which, by favoring a particular intervention over others, we forfeit claim to any potential benefits associated with alternative courses of action
Cook, 2000
“It should be possible to construct and justify theory-based form of evaluation that complements experiments…It would prompt experimenters to be more thoughtful about how they conceptualise, measure, and analyse intervening processes. It would also remind them of the need to first probe whether an intervention leads to changes in each of the theoretically specified intervening processes…”
Cook and Campbell, 1979
“From Popper’s work, we recognize the necessity to proceed less by seeking to confirm theoretical predictions about causal connections than by seeking to falsify them. For Popper, the process of falsification requires putting our theories into competition with each other”
Typology of validity:
Enumerates four distinct yet complementary components of validity to be heeded in social research settings: (1) internal validity, (2) external validity, (3) construct validity, and (4) statistical conclusion validity
Martinson, 1974
In the absence of theories: null findings
o “What works? Questions and answers about prison reform” The Public Interest, 35, 22-54
o Review of rehabilitative interventions for reducing recidivism
o Widely interpreted as “nothing works” in prison rehabilitation
o Led some to criticize the investment of resources in prisoner rehabilitation
o But led others to criticize the methodological status quo and ask why rehabilitative programmes don’t work
Coryn et al., 2011
Definition #1 of theory-based evaluation:
o “Theory-driven evaluation is…any evaluation strategy or approach that explicitly integrates and uses stakeholder, social science, some combination of, or other types of theories in conceptualizing, designing, conducting, interpreting, and applying an evaluation”
Weiss, 2000
Definition #2 of theory-based evaluation:
o “It helps to specify not only the what of a programme outcomes but also the how and the why. Theory-based evaluation tests the links between what programmes assume their activities are accomplishing and what actually happens at each step along the way”
Stockwell and Gruenewald, 2004
Theory-based evaluation example: Limiting the physical availability of alcohol to reduce alcohol-related harm
o “Efforts to control alcohol availability to reduce alcohol-related harms have been based on the view that ‘less is best’; i.e. the less alcohol available the better for public health and safety”
“Availability theory” – 3 related propositions:
- (1) The greater the availability of alcohol, the higher the average consumption of alcohol
- (2) The higher the average consumption, the greater the number of excessive drinkers
- (3) The greater the number of excessive drinkers, the greater the prevalence of health and social problems
Prevention might include:
- Placing restrictions on number of premises (i.e., spatial availability)
- Placing restrictions on the times at which alcohol can be sold (i.e., temporal availability)
- Placing restrictions on consumption for population groups (e.g., minimum legal drinking age)
Humphreys and Eisner, 2014
Found no real effect of the Licensing Act (2003) in achieving its aims: the Act did not substantially change the trading hours of bars and nightclubs
Craig et al., 2008
“Process evaluations, which explore the way in which the intervention under study is implemented, can provide valuable insight into why an intervention fails or has unexpected consequences, or why a successful intervention works and how it can be optimised. A process evaluation nested inside a trial can be used to assess fidelity and quality of implementation, clarify causal mechanisms, and identify contextual factors associated with variation in outcomes”
Updated guidance from the Medical Research Council (MRC) emphasizes the importance of conducting process evaluations within intervention trials, stating these evaluations “can be used to assess fidelity and quality of implementation, clarify causal mechanisms and identify contextual factors associated with variation in outcomes”
“Best practice is to develop interventions systematically, using the best available evidence and appropriate theory, then to test them using a carefully phased approach, starting with a series of pilot studies targeted at each of the key uncertainties in the design, and moving on to an exploratory and then a definitive evaluation”
Moore et al., 2015
“An intervention may have limited effects either because of weaknesses in its design or because it is not properly implemented. On the other hand, positive outcomes can sometimes be achieved even when an intervention was not delivered fully as intended. Hence, to begin to enable conclusions about what works, process evaluation will usually aim to capture fidelity (whether the intervention was delivered as intended) and dose (the quantity of intervention implemented). Complex interventions usually undergo some tailoring when implemented in different contexts. Capturing what is delivered in practice, with close reference to the theory of the intervention, can enable evaluators to distinguish between adaptations to make the intervention fit different contexts and changes that undermine intervention fidelity”
“Complex interventions work by introducing mechanisms that are sufficiently suited to their context to produce change, while causes of problems targeted by interventions may differ from one context to another. Understanding context is therefore critical in interpreting the findings of a specific evaluation and generalising beyond it. Even where an intervention itself is relatively simple, its interaction with its context may still be highly complex”
Baron and Kenny, 1986
VALUE OF MEDIATOR ANALYSES
o Operating under the basic assumption that a causal relationship exists between an intervention and an observed outcome (an assumption made reasonable in RCTs by randomization), we can frame a mediator as an intervening variable on the causal pathway of an intervention, responsible for shaping the relationship between stimulus and response
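The Baron and Kenny framing can be sketched as a short sequence of regressions. A minimal illustration on simulated data; the variable names, effect sizes, and sample size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated trial in which the intervention works entirely through
# the mediator (all effect sizes here are illustrative assumptions)
treat = rng.integers(0, 2, n).astype(float)   # randomized allocation
mediator = 0.8 * treat + rng.normal(size=n)
outcome = 0.5 * mediator + rng.normal(size=n)

def coefs(y, *xs):
    """Least-squares coefficients of y on the given predictors
    (intercept included in the fit, dropped from the return value)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

c = coefs(outcome, treat)[0]       # total effect of the intervention
a = coefs(mediator, treat)[0]      # intervention -> mediator path
b, c_prime = coefs(outcome, mediator, treat)  # mediator -> outcome path,
                                              # plus the direct effect
                                              # controlling for mediator
print(f"total c={c:.2f}, a={a:.2f}, b={b:.2f}, direct c'={c_prime:.2f}")
```

With full mediation, the direct effect c' shrinks toward zero once the mediator is controlled for, while paths a and b remain substantial.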
Kraemer, Wilson, Fairburn, and Agras, 2002
VALUE OF MEDIATOR ANALYSES
o By striving to fully appreciate “the mechanisms through which treatments operate,” we find ourselves better equipped to maximize treatment effectiveness, and simultaneously more able to reduce the monetary and human costs associated with a given treatment—“Active therapeutic components could be intensified and refined, whereas inactive or redundant elements could be discarded”
Clark, 1997
VALUE OF MEDIATOR ANALYSES
o Notably, when evidence emerged to suggest that cognitive behavioral therapy (CBT) works in treating panic disorders via the eradication of catastrophic thoughts related to bodily changes—a mediating mechanism—the cognitive theory of panic gained greater empirical substantiation
Lorenc et al., 2013
VALUE OF MODERATOR ANALYSES
o A substantial body of evidence suggests that media campaigns aimed at reducing cigarette use are most effective among the socioeconomically well-off, thereby widening existing inequalities
Supplee et al., 2013
VALUE OF MODERATOR ANALYSES
o Moderator analyses also offer a way to capitalize on policymakers' growing interest in more targeted, highly tailored interventions: the initial investment in building a sound knowledge base of what works for whom is more than justified by the resulting ability to maximize efficiency and minimize risk
o The considerable influence wielded by intervention research pertaining to the topic of subgroup analysis extends to “policy decisions around programmatic aims (e.g., Upward Bound), funding decisions (e.g., Even Start), and new initiatives targeting funding towards evidence-based programs (e.g., teen pregnancy and home visitation)”
EXAMPLE: Studio Schools
What are Studio Schools (briefly)?
o New model of education in England
o Aim to contextualize learning and make it more practical
o Outcomes are engagement with education and employability of young people (14-19 years old)
Specifying the components of Studio Schools
o Core:
- Project-based learning
- Personal coaching sessions
- Work placements
- Small school environments
- Longer school day and year
o Allowable:
- Opportunities to start a business or project
- Self-study units
- Taught subject lessons
EXAMPLE: WHO ‘Parenting for Lifelong Health’ - Parenting interventions to reduce risk of child maltreatment in low and middle income countries (LMICs)
South Africa: The Sinovuyo Caring Families Programme for Parents of Children Aged 2-9 Years
To test feasibility using mixed methods:
- Dosage/exposure
- Programme fidelity
- Participant satisfaction, cultural feasibility
- Pilot RCT
Pilot program delivered to 56 parents in 4 groups
Parent interviews at participants’ home:
- Random sample (intervention, n=11; control, n=4)
- 1 hour; trained research assistants with interpreter
Facilitator focus groups at center
- 2.5 hours, conducted by Lachman in English
- Post-program (intervention, n=8; control, n=6)
Interview protocols with open-ended approach:
- Acceptability of program content, delivery methods
- Changes observed at home by parents
- Training, supervision, and logistical support
Kaminski et al., 2008
IDENTIFYING CORE COMPONENTS
Meta-regression: linking components to intervention effect sizes
o Interventions that DO include component X versus interventions that DO NOT include component X
o Strengths:
- Based on dozens of studies
- Results do not hinge on single intervention ‘brand’ or trial
- Can include many different types of components
o Limitations:
- No causality: only association between components and outcomes
- Results depend on patterns of combinations of components in existing programmes
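The meta-regression idea above can be sketched as an inverse-variance weighted regression of trial effect sizes on a component indicator. All numbers below are made up for illustration; the coefficient on the component is an association only, since trials were not randomized to include or exclude components:

```python
import numpy as np

# Illustrative trial-level data (all numbers made up): an effect size
# per trial, its variance, and whether the programme included component X
d    = np.array([0.30, 0.45, 0.50, 0.10, 0.15, 0.55, 0.05, 0.40])
var  = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.05, 0.03])
comp = np.array([1.0,  1.0,  1.0,  0.0,  0.0,  1.0,  0.0,  1.0])

# Fixed-effect meta-regression: inverse-variance weighted least squares
# of effect sizes on the component indicator
W = np.diag(1.0 / var)
X = np.column_stack([np.ones_like(d), comp])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)

# beta[1] is the mean difference in effect size associated with
# component X -- an association, not a causal component effect
print(f"mean d without X: {beta[0]:.2f}; shift with X: {beta[1]:+.2f}")
```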
Collins, 2016
IDENTIFYING CORE COMPONENTS
Factorial experiments
o “The process of identifying the intervention that provides the highest expected level of effectiveness obtainable…Within key constraints imposed by the need for efficiency, economy, and/or scalability.”
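A factorial experiment crosses every candidate component, so each component's main effect can be estimated from the full sample. A minimal simulation of a 2x2x2 design; the component names and true effect sizes are illustrative assumptions, not from Collins:

```python
from itertools import product
import random

random.seed(1)

# A 2x2x2 full factorial experiment over three candidate components;
# component names and true effect sizes are illustrative assumptions
components = ["coaching", "work_placement", "self_study"]
true_effect = {"coaching": 0.6, "work_placement": 0.4, "self_study": 0.0}

cells = list(product([0, 1], repeat=len(components)))  # 8 conditions
n_per_cell = 500

# Simulate an outcome for each participant in every condition
data = []
for cell in cells:
    mean = sum(on * true_effect[c] for on, c in zip(cell, components))
    data.extend((cell, mean + random.gauss(0, 1)) for _ in range(n_per_cell))

# Main effect of each component: mean with it on minus mean with it off,
# averaged over the levels of the other components
effects = {}
for i, c in enumerate(components):
    on  = [y for cell, y in data if cell[i] == 1]
    off = [y for cell, y in data if cell[i] == 0]
    effects[c] = sum(on) / len(on) - sum(off) / len(off)
    print(f"{c}: estimated main effect {effects[c]:+.2f}")
```

An inactive component (here `self_study`) shows a main effect near zero and would be a candidate for removal on efficiency grounds.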
Dwan et al., 2008
Critical appraisal: what limitations of moderator analyses in RCTs?
o Cherry picking – evidence that reporting bias common in main effect analyses of trials – more so when it comes to secondary analyses?
The continued prevalence of outcome reporting bias in primary (i.e., main effect) analyses of trials, highlighted by the work of Dwan and colleagues (2008), raises concerns that such 'cherry picking' of favorable results may be even more common in secondary analyses of mediators and moderators
Assmann, Pocock, Enos, and Kasten, 2000
Critical appraisal: what limitations of moderator analyses in RCTs?
o Note with apprehension the prohibitively low statistical power of a great deal of secondary analyses, finding that “[m]any [major clinical trial] reports put too much emphasis on subgroup analyses that commonly lacked statistical power”
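A back-of-envelope normal-approximation calculation illustrates why subgroup (interaction) tests are so underpowered: splitting each arm into two equal subgroups doubles the standard error of the estimate, so an interaction of the same magnitude as the main effect needs roughly four times the sample size. The numbers below are illustrative:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power(delta, se, z_crit=1.96):
    """Approximate power of a two-sided z-test for an effect delta
    whose estimator has standard error se."""
    return (1 - norm_cdf(z_crit - delta / se)) + norm_cdf(-z_crit - delta / se)

sd, n = 1.0, 253            # n per arm; chosen for ~80% main-effect power
se_main = sd * math.sqrt(2.0 / n)

# A treatment-by-subgroup interaction of the same size (0.25 SD) is a
# difference of two subgroup contrasts, each on half the sample, so the
# standard error of the estimate doubles
se_int = sd * math.sqrt(8.0 / n)

main_power = power(0.25, se_main)
int_power = power(0.25, se_int)
print(f"power: main effect {main_power:.0%}, interaction {int_power:.0%}")
```

A trial adequately powered for its main effect can thus have well under half that power for an equally sized interaction.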
Brown et al., 2013
Pooling data
o Scientific benefits of sharing, collaboration between many investigator teams = better science
o Climate now is right: big push from funders, journals, governments to share data to increase transparency, reduce fraud (NIH, Ben Goldacre, BMJ AllTrials campaign)
o Example: NIMH Collaborative Data synthesis for Adolescent Depression Trials
Pooling data from a number of trials can address the lack of statistical power common to secondary analyses, while also providing for greater generalizability across contexts
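A one-stage pooled (individual participant data) analysis can be sketched as a single regression over all trials with a fixed effect per trial. The trial sizes, effect size, and specification below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated individual participant data from three trials of the same
# intervention; sizes and the common 0.2 SD effect are illustrative
trial_sizes = [600, 800, 1000]
trial, treat, y = [], [], []
for k, n in enumerate(trial_sizes):
    t = rng.integers(0, 2, n).astype(float)
    trial.append(np.full(n, k))
    treat.append(t)
    y.append(0.2 * t + 0.5 * k + rng.normal(size=n))  # trial-level shift
trial, treat, y = map(np.concatenate, (trial, treat, y))

# One-stage pooled analysis: regress the outcome on treatment with a
# fixed effect (dummy) for each trial, respecting randomization by trial
X = np.column_stack([treat] + [(trial == k).astype(float) for k in range(3)])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"pooled treatment effect estimate: {beta[0]:.2f}")
```

The trial dummies absorb between-trial differences in outcome levels, so the pooled effect is estimated from within-trial comparisons across the full combined sample.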
Blankenship et al., 2006
What are structural interventions?
o “Structural interventions differ from many public health [and social] interventions in that they locate, often implicitly, the cause of public health [and social] problems in contextual or environmental factors that influence risk behavior […] rather than in characteristics of individuals who engage in risk behaviors”
o Structural interventions aim to change “social, economic, political or physical environments”
o This is different from approaches that focus on the individual, because the underlying assumption is that people’s context constrains their autonomy and affects how they make choices and act on those decisions
Blankenship et al., 2000
A framework when thinking about structural interventions:
o Three types of contextual factors:
- Availability
- Acceptability
- Accessibility
o Three levels at which structural interventions are targeted:
- Individual
- Organizational
- Environmental
Availability
o Focus on “behaviors, tools, equipment, materials, or settings that are necessary”
o Can address a lack of beneficial resources or an abundance of harmful resources
Acceptability
o Changing (altering) social norms
o Risk is “[…] partially determined by [society’s] values, culture and beliefs, or those of subgroups within it”
Accessibility
o Availability does not necessitate accessibility: “[…] the ability to avail oneself of [tools etc.] may be restricted by lack of resources and power”
o “[…] is a function of social, economic and political power and resources”
Examples of structural interventions for HIV:
- Comprehensive sex education with access to male and female condoms
- Syringe exchange programs
- Healthcare availability
- Stable housing
Bronfenbrenner, 1977
Microsystem
o The relationship between the individual and their proximal environment (“immediate setting”)
o “(e.g., home, school, workplace, etc.). […] The factors of place, time, physical features, activity, participant, and role constitute the elements of a setting”
Mesosystem
o The relationships between the main environments that an individual interacts/lives in
o “for an American 12-year-old, the mesosystem typically encompasses interactions among family, school, and peer group; for some children, it might also include church, camp, […] In sum, stated succinctly, a mesosystem is a system of microsystems”
Exosystem
o An extension of the mesosystem that does not contain the individual
o It is the social structures around the individual and other lower levels that influence the environment (“immediate settings”)
o “These […] encompass, among other structures, the world of work, the neighborhood, the mass media, agencies of government (local, state, and national), the distribution of goods and services, communication and transportation facilities, and informal social networks”
Macrosystem
o “A macrosystem refers to the overarching institutional patterns of the culture or subculture, such as the economic, social, educational, legal, and political systems, of which micro-, meso-, and exo-systems are the concrete manifestations”
“[…] environmental structures, and the processes taking place within and between them, must be viewed as interdependent and must be analyzed in systems terms.”
Adimora and Auerbach, 2010
SOCIAL DETERMINANTS OF HEALTH – HIV IN THE U.S.
53% of new HIV infections occur in gay men and other men who have sex with men (MSM)
Homophobia and negative attitudes towards MSM are evident in the political and legal context:
o Policies on sexual behaviours (e.g., sodomy)
o Policies on relationships (e.g., marriage)
o “These restrictions tend to marginalize and exclude gay people and drive their relationships underground. Thus, many MSM do not publicly identify (or self identify) as ‘gay,’ or seek HIV prevention and sexual health information services targeted to gay communities. Internalized homonegativity has been associated with unprotected anal intercourse, a major route of HIV transmission, particularly for gay and other MSM”
~12% of new HIV infections occur among people who inject drugs:
o Lack of access to sterile needles and syringes
o Lack of access to addiction treatment programmes:
- Until Jan 2010, ban on federal funds being used to support syringe exchange programmes
- Political and ideological factors – criminalizing drug use, ‘war on drugs’
HIV prevalence higher among people who are poor:
o Lack of access to healthcare
o Increased risk of exposure to crack cocaine
o Risk of transactional sex for drugs
o Unstable housing, homelessness = increased risky behaviours
Fraser, 2009
Developing theory – sometimes called ‘problem theory’
o “Problem theory is a portrayal of the individual and environmental factors – both risk inducing and risk suppressing (i.e. protective) – that give rise to a problem or that sustain a problem over time. We use problem theory to identify leverage points for intervention”
o Incorporates risk and protective factors
Weiss, 1995
The role of theories in policy and practice
o “In a sense, all policy is theory. A policy says: if we do A, then B (the desired outcomes) will occur. As evaluative evidence piles up confirming or disconfirming such theories, it can influence the way people think about issues, what they see as problematic, and where they choose to place their bets. The climate of opinion can veer and wiser policies and programs become possible”
Lipsey, 1993
What had gone wrong in prisoner rehabilitation research?
o Adherence to particular methods stifled creativity in developing appropriate methods
o Excessive use of black-box methods underrepresents complexity
Medical Research Council (MRC) Framework
MRC Framework, stage 1: Development of the intervention
o Identifying the evidence base – background lit on nature, prevalence, risk and protective factors, intervention effects (use/do SRs)
o Identifying or developing theory [underlying the problem]
o Modelling process and outcomes [for the intervention]
MRC Framework, stage 2: Feasibility and piloting
o Testing procedures [for the intervention and all research processes]
o Estimating likely resources, recruitment and retention
o Participant acceptability and satisfaction
o May include pre-post study, or small RCT
o Can be qualitative and/or quantitative
MRC Framework, stage 3: Evaluation
o Assessing effectiveness
o Understanding change process
o Assessing cost effectiveness
MRC Framework, stage 4: Implementation
o Dissemination
o Surveillance and monitoring
o Long term follow-up