Lec 2 Flashcards
The aim of critical appraisal is to understand…
the strengths, weaknesses, and potential for bias in research before you apply it (e.g. in practice).
What were the examples of biases covered in class? (just the names)
Mnemonic: "Dig Sand Maybe Really Play Right" — design, selection/sampling, measurement, response, performance, and reporting bias
What is design bias?
A failure to account for inherent biases in the structure of the study itself. For example, in therapy research, if a study comparing two therapeutic approaches schedules more sessions for one treatment group than for the other, this creates a structural design bias.
What is selection/sampling bias?
Using a non-representative sample. (Note that representativeness is not always the goal — for example, research on dementia interventions deliberately recruits a specific, non-representative population.)
What is measurement bias?
An inaccurate or poorly designed measure (is your scale validated?)
What is response bias?
Consciously or subconsciously responding according to what the experimenter 'wants' or 'expects to find'
What is performance bias?
When participants or researchers act differently because of their allocation to the control or experimental group
What is reporting bias?
Errors in reporting, or a bias toward publishing positive results (while negative results go unpublished)
What are three questions to ask when critically appraising evidence (venn diagram)?
Is the evidence valid?
Is the evidence applicable/relevant?
Is the evidence clinically significant / clinically important?
What are four questions you can ask to critically appraise a study?
Does the study address a clearly focused question?
Did the study use valid methods?
Are the valid results of this study important?
Are these valid, important results applicable?
When was CONSORT first published?
1996 (updated 2001)
What is a systematic review?
A systematic review summarises the results of available carefully designed healthcare studies (controlled trials) and provides a high level of evidence on the effectiveness of healthcare interventions.
Judgments about the evidence (from several trials) can inform recommendations for healthcare.
These reviews are complicated and depend largely on what clinical trials are available, how they were carried out (the quality of the trials) and the health outcomes that were measured.
What exactly is a meta-analysis (as in how does it differ from a systematic review)?
In a meta-analysis, the review authors statistically pool the numerical data on treatment effects from the included studies
What are the steps in a systematic review?
- Determine the research question (should be fairly narrow - otherwise too many papers to examine)
- Assemble the research team
- Determine if there are any registered (in process) or published systematic reviews on your topic
- Develop and register the protocol of your study
- Develop a comprehensive search strategy, informed by your inclusion and exclusion criteria
- Select studies for inclusion based on the predefined criteria
- Extract and analyse the data
- Interpret and synthesise results for publication
- Update review as required
What are two screening questions to ask when appraising a systematic review?
Did the review address a clearly focused question?
Did the authors look for the right type of papers?
Applying the results to your client incorporates:
(5 things)
- some assessment of their individual baseline risk
- judgement about whether the evidence can be extrapolated to your client
- understanding of factors that may increase the benefits or harms they experience
- asking: what are my client’s values and preferences?
- asking: can this practice be implemented in this setting?
What are the benefits of qualitative research, what does it add? (5 things)
- Can provide insights into a problem
- Provides rich data from individuals’ experiences
- Qualitative research can fill a gap or complement [quantitative studies]
- Helps to gain a rich understanding of underlying reasons, opinions, and motivations (e.g. for a treatment or effect).
- Qualitative methods may also determine patients’/clients’ experience of, and with, an intervention. If this is negative (e.g., side effects of medication, behavioural intervention too time consuming) it will impact on treatment uptake.
What is the efficacy of treatment (as opposed to effectiveness) and what kind of validity is its focus?
The efficacy of treatment is determined by a clinical trial or trials in which many variables are carefully controlled to demonstrate that the relationship between the treatment and the outcome is relatively unambiguous.
Efficacy studies emphasise the internal validity of the experimental design, e.g. by:
◦ Controlling the types of patients included in the study (e.g. limiting comorbidity)
◦ Using manuals to standardise treatment delivery
◦ Training and monitoring therapists
◦ Controlling the number of treatment sessions
◦ Random assignment to conditions and the use of blinding procedures for raters.