Final Flashcards
What is Evidence-Based Practice (EBP)?
A combination of the best available evidence, the clinical circumstances, and patient/client needs, values, and circumstances (the puzzle pieces):
1. Clinical expertise
2. Best research evidence
3. Patient values and preferences
Steps for EBP (5 in a circle)
- Ask question
- Search
- Critically appraise
- Implement
- Evaluate
Interdependent Processes (triangle):
- Research
- Evidence-based practice
- Critical appraisal
How do you use research in Practice?
Ask a clinical question - at least 2 approaches
1. Find individual articles on the topic
- Search the literature
- Select/exclude articles
- Extract data from the articles
- Critically appraise the retained articles
- Summarize results and make evidence-informed clinical decisions
2. Read articles that combine information
- Reviews - systematic and/or scoping
- Clinical practice guidelines
Literature review (3 points)
- Consist of organized summaries of evidence
- Are an efficient way to access a specific body of research
- Facilitate the exploration of similarities and differences between individual studies
Narrative, scoping, systematic chart
Compare: Question
Narrative: Often broad in scope; often >1 research question
Scoping: Often broad in scope, to explore the breadth of a topic
Systematic: Usually a single focused research question
Narrative, scoping, systematic chart
Compare: Sources and Searches
Narrative:
- Not usually specified
- Potentially biased
Scoping:
- Comprehensive
- Strategy explicitly stated
- Reproducible
Systematic:
- Comprehensive
- Strategy explicitly stated
- Reproducible
Narrative, scoping, systematic chart
Compare: Number of Independent Reviewers
Narrative: One
Scoping: At least two to do the most important steps independently (e.g. study selection); a third reviewer resolves disagreements
Systematic: At least two to do the most important steps independently (e.g. study selection); a third reviewer resolves disagreements
Narrative, scoping, systematic chart
Compare: Study Selection
Narrative:
- Not usually specified
- Potentially biased
Scoping:
- Inclusion/exclusion of study designs could be somewhat flexible
Systematic:
- Explicit inclusion and exclusion criteria
- Uniformly applied
Narrative, scoping, systematic chart
Compare: Study Appraisal
Narrative:
- Variable
Scoping:
- Critical appraisal not required, sometimes conducted
Systematic:
- Rigorous critical appraisal (risk of bias)
Narrative, scoping, systematic chart
Compare: Synthesis
Narrative:
- Qualitative summary common
Scoping:
- Qualitative synthesis
Systematic:
- Qualitative synthesis, with or without quantitative synthesis (meta-analysis)
Narrative, scoping, systematic chart
Compare: Conclusions/Inferences
Narrative:
- Conclusion reinforces the authors' thesis; sometimes evidence-based
Scoping:
- Conclusion identifies parameters or gaps in the body of knowledge; inferences based on evidence
Systematic:
- Conclusion involves consideration of the quality of studies
- Evidence-based
Narrative, scoping, systematic chart
Compare: Reporting
Narrative:
- Lack of transparency in reporting of processes
Scoping:
- Explicit reporting of processes
Systematic:
- Explicit reporting of processes
The main similarities between conducting scoping and systematic review include (1 point)
Transparent and systematic methods for search strategies and reporting
The main differences between conducting scoping and systematic reviews include (6 points)
- The nature of the question
- The nature of the collected data
- Whether or not critical appraisal is conducted
- The type of data synthesis
- Whether or not consultation is conducted
- The need for updating
Levels of Evidence (5 layers)
The commonly thought-of levels of evidence are an important resource when looking at typical therapeutic interventions:
- Background information/expert opinion
- Observational descriptive designs: cross-sectional studies, case studies
- Observational analytic designs: cohort designs, case-control studies
- Randomized controlled trials
- Systematic reviews and meta-analyses
Overview of Producing a Systematic Review (3 steps)
Many steps are involved in doing a systematic review, with an emphasis on reducing bias at each step.
- Search, identify, select, and critically appraise relevant research.
- Collect and analyse/combine individual study data.
- Minimize bias at each step of the review. Each step should be performed by two independent researchers who will discuss and evaluate the results upon completion. In the case of a disagreement, a third researcher should be consulted.
Steps for Conducting a Systematic Review: (9 steps)
- Define the research question
- Establish eligibility criteria (inclusion and exclusion criteria)
- Search articles using rigorous strategy
- Select articles based on eligibility criteria
- Data extraction
- Assess studies for risk of bias to determine methodological quality
- Synthesize the results in a qualitative way, and may or may not perform quantitative synthesis
- Interpret results and draw conclusions
- Improve and update review
Steps for Conducting a Systematic Review vs. EBP steps
- Define the research question
- Search articles using rigorous strategy
- Assess studies for risk of bias to determine methodological quality
What is Critical Appraisal?
Process of systematically examining evidence to assess its validity, results, and relevance before using it to inform a practice or policy decision
Critical appraisal tools can be broadly classified into:
1. Research design-specific
2. Generic
Research design-specific
Contain items that address methodological issues unique to a specific research design
--> This precludes comparison of the quality of different designs
Generic
Aim to enhance the ability of research consumers to synthesize evidence from a range of qualitative and/or quantitative study designs
Most frequently assessed in RCT appraisal tools - Data Analyses:
Whether appropriate statistical analysis was performed, whether a sample size justification or power calculation was provided, and whether side effects of the intervention were recorded and analysed
Most frequently assessed in RCT appraisal tools - Blinding:
Whether the participant, clinician, and assessor were blinded to the intervention
Some critical appraisal tools for RCTs
- Jadad Scale
- Cochrane Collaboration risk of bias assessment tool
- PEDro Scale
- CASP RCT checklist
--> 11 questions based on the Users' Guides
Reporting an RCT
CONSORT statement for reporting RCT
Jadad Scale
An older tool, from "Assessing the quality of reports of randomized clinical trials: Is blinding necessary?" (Control Clin Trials)
- Only 3 questions
- Easy to apply; should not take more than 10 minutes
- Ends up with a score
Cochrane Collaboration's tool for assessing risk of bias includes:
5 categories (plus "other bias"): selection bias, performance bias, detection bias, attrition bias, reporting bias, other bias
Scored: low, unclear, or high risk of bias
--> No numeric score; subjective
--> Hard to apply without a good understanding of trial methodology
Cochrane Collaboration's tool for assessing risk of bias: Selection Bias
- Random sequence generation
- Allocation concealment
Cochrane Collaboration’s tool for assessing risk of bias
Performance Bias
- Blinding of participants and personnel
Cochrane Collaboration’s tool for assessing risk of bias
Detection Bias
- Blinding of outcome assessment
Cochrane Collaboration’s tool for assessing risk of bias :
Attrition bias
- Incomplete outcome data
(participants who do not finish the intervention are not included, which causes bias)
Cochrane Collaboration’s tool for assessing risk of bias:
Reporting Bias
Selective reporting
Cochrane Collaboration's tool for assessing risk of bias: Other bias
Any other sources of bias not covered by the categories above
PEDro Scale
Scoring: 0 to 10
Interpretation (per Stroke Engine):
- High quality = PEDro score 6-10
- Fair quality = PEDro score 4-5
- Poor quality = PEDro score of 3 or less
Many people use the score - BUT it is important to note the items that are scored as absent when considering the nature of potential bias
Online training is available from the PEDro organization
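The score bands above can be sketched as a tiny classifier. This is a minimal illustration of the Stroke Engine interpretation only; the function name is hypothetical, and (as the card notes) the total score alone can mislead without looking at which items were absent.

```python
def pedro_quality(score: int) -> str:
    """Classify a PEDro total (0-10) into the Stroke Engine bands:
    high = 6-10, fair = 4-5, poor = below 4."""
    if not 0 <= score <= 10:
        raise ValueError("PEDro totals range from 0 to 10")
    if score >= 6:
        return "high quality"
    if score >= 4:
        return "fair quality"
    return "poor quality"
```

For example, a trial scoring 7/10 would be labelled "high quality", but two trials with the same total can still differ in which bias-relevant items they failed.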
PEDro Scale
11 questions
Advantages: common, easy to apply, produces a score, quick
Disadvantages: the score can be misleading
PEDro Scale:
Eligibility criteria were specified
- Relates to external validity (this item does not count toward the total score)
PEDro Scale:
Randomly Allocated
- The precise method need not be specified
- Quasi-randomization methods do not satisfy this criterion
PEDro Scale: Allocation Concealment
The practice of keeping researchers and participants unaware of the sequence, with the goal of preventing the researchers from influencing the group assignment of participants
- Proper randomization depends on the implementation of adequate allocation concealment
- Sometimes you need to read carefully to infer this from the methods section
PEDro Scale: Groups equivalent at baseline regarding the most important prognostic indicators
key outcomes/condition severity
PEDro Scale: Blinding of all subjects
PEDro Scale: Blinding of assessors measuring at least one key outcome
- NB: in trials in which key outcomes are self-reported (e.g., visual analogue scale, pain diary), the assessor is considered to be blind if the subject was blind
PEDro Scale: Measures of at least one key outcome obtained from at least 85% of allocated subjects
- Numbers allocated and assessed should be reported for at least one follow-up time point
PEDro Scale: All subjects received allocated treatment, or data for at least one key outcome was analyzed by "intention to treat"
ITT vs. per-protocol analysis
PEDro Scale: Results of between-group statistical comparison are reported for at least one key outcome
Comparison between groups is reported
PEDro Scale: Point estimates and measures of variability are reported for at least one key variable
- A point measure is a measure of the size of the treatment effect (mean difference, regression coefficient, etc.) reported with a variability measure (e.g. standard deviation)
Why is it important to analyze according to ITT?
Intention-to-treat (ITT) analysis
- Random allocation usually results in group equivalence at baseline. Intention-to-treat analyses are done to avoid the effects of crossover and dropout, which may break the random assignment to the treatment groups in a study
--> Reasons for drop-outs may be due to the intervention, especially differential drop-outs
--> Maintains sample size (statistical power) - ideal if all subjects comply and complete
- ITT analysis may result in a conservative estimate of the treatment effect due to lack of adherence to protocol
- ITT achieves the best estimate if subjects have received the allocated interventions and there has been minimal loss to follow-up
Per Protocol (PP) analysis
Only subjects who adhered to the protocol, i.e. who completed the interventions and outcome assessments, are included in the analysis
- Some risk of bias
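The ITT vs. PP contrast can be seen with a tiny worked example. The data below are entirely hypothetical (six participants, one treatment dropout); the sketch only shows how excluding a non-adherent participant can inflate the apparent between-group effect.

```python
# Hypothetical trial data (illustrative only): each record is
# (allocated_group, completed_protocol, outcome_score).
records = [
    ("treatment", True, 8), ("treatment", True, 7),
    ("treatment", False, 3),  # dropout: analyzed under ITT, excluded under PP
    ("control", True, 4), ("control", True, 5), ("control", True, 4),
]

def mean(xs):
    return sum(xs) / len(xs)

def effect(rows):
    """Between-group difference in mean outcome (treatment - control)."""
    t = [o for g, _, o in rows if g == "treatment"]
    c = [o for g, _, o in rows if g == "control"]
    return mean(t) - mean(c)

itt = effect(records)                      # all randomized subjects, as allocated
pp = effect([r for r in records if r[1]])  # completers only

# Here the per-protocol estimate is larger than the ITT estimate:
# dropping the non-adherent participant breaks randomization and
# can bias the apparent treatment effect upward.
```

With these made-up numbers, ITT gives about 1.67 points of benefit while PP gives about 3.17, matching the cards' point that ITT is the more conservative analysis.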
How would we summarize literature and incorporate critical appraisal results?
Incorporate quality - high-quality vs. low-quality studies --> these relate to confidence in the findings
Level of Evidence
Levels of evidence are not the same as critical appraisal
--> They indicate the study designs that are best suited to answer a specific research question
- Indicate the version you are using, since the table keeps changing
Levels of Evidence Pyramid
Top:
- Systematic reviews and meta-analyses of RCTs
- Randomized controlled trials
- Observational analytic designs: cohort designs
- Observational descriptive designs: cross-sectional studies, case studies
- Background information/expert opinion (bottom)
Clinical Practice Guidelines (CPGs)
Systematically developed statements to assist practitioner and patient decisions about appropriate healthcare for specific clinical circumstances. They should take into consideration patient/client values and preferences
Enhance clinical judgement
CPGs are intended to enhance, not replace, clinical judgement and expertise. It is believed that their use will improve the process and outcomes of care for patients/clients. For the healthcare system, it is believed that their use will decrease practice variation and optimize resource utilization
Guide Practice (CPG)
CPGs are used as a practice guide; they are not a "standard of care" and are not legally required as a minimum of care (except if they become part of the standard of care)
Provide Synthesis and Recommendations
Provide a synthesis of a topic and recommendations for practice. As such, they are developed based on existing systematic reviews or require that systematic reviews be conducted as an initial step. If there is not enough literature for a systematic review, individual research studies are cited. In the absence of even individual studies, expert opinion consensus may form the basis of recommendations
Are there CPG Summaries?
Some CPGs are very long and informative, as they may include all aspects of prevention, assessment, and management of a particular condition. Sometimes a quick reference guide, decision rule or prediction rule, or other summary of the guideline is available to make a guideline more accessible for use in clinical practice.
How are CPGs Developed: Preparing for CPG Development:
Involves selecting a topic or condition, determining the scope of the CPG, identifying and adapting existing CPGs, forming an interdisciplinary guideline development group, and involving consumers of the guideline
How are CPGs Developed: Systematic Reviewing of Literature
involves establishing specific clinical questions that will be addressed in the guideline, systematically searching, including or excluding identified research, and appraising research (critical appraisal).
How are CPGs Developed: Drafting the CPG
involves developing recommendations, developing an implementation strategy, consulting on the draft CPG, and writing a summary version of the CPG
How are CPGs Developed: Reviewing the CPG
requires planning for evaluating the impact, revising and updating the CPG
How will you know if CPGs are valid?
- The validity of CPGs is based on the use of systematic literature reviews in their development
- Explicit links between the recommendations for practice and scientific evidence are required
- Having international, national, or regional guideline development groups that include representatives of all key disciplines and consumers of the CPGs enhances validity
- To maintain validity and currency, there should be periodic review - and updating - of the CPG every few years
How do CPGs Help Busy Clinicians?
- They provide a resource for which the strength of evidence is already evaluated (critically appraised)
- They provide recommendations for practice that are based on evidence and endorsed by the development team
Adapting CPGs for Clinical Practice: CPGs may:
- Be adapted or created to be setting-specific
- Be integrated into documentation as an aid to documenting
- Have accompanying clinical practice aids (e.g. clinical pathways or algorithms) that help organize the sequence and timing of events of care. There may be clinical algorithms that are complex instructions for addressing a particular issue, such that decisions and consequences are conditional, branching, and/or logical
Users’ Guides for Practice Guidelines contain three key questions
- Are the results of the study valid?
- What were the results?
- Will the results help in caring for my patients?
The AGREE II instrument - what is it?
The most widely used and accepted critical appraisal tool for CPGs, as it is methodologically rigorous and validated. It is often used to compare CPGs when several exist on the same topic. In addition, it provides direction for appropriate development and dissemination of guidelines
What are the six domains of the AGREE II
- Scope and purpose
- Stakeholder involvement
- Rigour of development
- Clarity of presentation
- Applicability
- Editorial Independence
Each domain is rated on a 7-point Likert scale from 1 = strongly disagree to 7 = strongly agree
- Each of the 6 domains is scored independently and not combined into an overall score, as the domains measure different concepts
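Domain-level scoring can be sketched as follows. This assumes the scaled domain score formula from the AGREE II user manual (obtained minus minimum possible, over maximum minus minimum possible, as a percentage); the function name and example ratings are hypothetical, and, per the card above, domain scores are never combined into one overall score.

```python
def agree_ii_domain_score(ratings):
    """Scaled AGREE II domain score as a percentage.

    `ratings` is a list of lists: one inner list of 1-7 Likert ratings
    per appraiser, covering the items of a single domain.
    Assumed formula (AGREE II user manual):
        (obtained - min possible) / (max possible - min possible) * 100
    """
    n_appraisers = len(ratings)
    n_items = len(ratings[0])
    obtained = sum(sum(r) for r in ratings)
    min_possible = 1 * n_items * n_appraisers  # all items rated 1
    max_possible = 7 * n_items * n_appraisers  # all items rated 7
    return 100 * (obtained - min_possible) / (max_possible - min_possible)
```

For example, two appraisers rating a hypothetical 3-item domain [5, 6, 7] and [4, 5, 6] yield a scaled score of 75%; each of the 6 domains would be reported separately this way.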
Evidence-to-practice gaps
- Research consistently produces new findings, but findings won't change patient outcomes unless they are adopted
- Variability between recommendations and clinical practice
--> Common across different settings and clinical conditions
--> Multiple studies around the world report that, on average, only 50-60% of patients receive evidence-based recommended care
Evidence to practice delays
only an estimated 14% of research activity is translated into improvement in patient outcomes, and the translation takes an average of 17 years
All breakthrough, no follow through
- For every dollar allocated to develop breakthrough treatments, one cent is allocated to ensure that patients actually receive them
- Much of the US$240 billion/year worldwide investment in biomedical and healthcare research is wasted due to implementation failures
Evidence-to-practice gaps - PT example
Mikhail et al., 2005: Physical therapists' use of interventions with high evidence of effectiveness in the management of a hypothetical typical patient with acute LBP
- 68% of PTs used interventions with strong or moderate evidence of effectiveness
- 90% used interventions with an absence of evidence of effectiveness
CIHR knowledge translation definition
A dynamic and interactive process that includes the synthesis, dissemination, exchange and ethically sound application of knowledge to improve health, provide more effective health services and products and strengthen the healthcare system
Alternate Terms used to Describe KT
- Knowledge transfer
- Knowledge uptake
- Research utilisation
- Implementation
- Innovation
- Quality improvement
- Knowledge management
- Improvement
- Dissemination and implementation
- Knowledge to action
- Knowledge mobilisation
- Research translation
What knowledge should be translated?
Individual studies?
- Rarely provide sufficient evidence by themselves
- May be misleading due to bias in conduct or random variation in findings
High-quality, up-to-date syntheses of knowledge:
- Cochrane systematic reviews
- Evidence-based clinical practice guidelines
Knowledge-to-Action Cycle
Knowledge creation: knowledge inquiry --> knowledge synthesis --> knowledge tools/products
Action cycle: identify problem and determine the know/do gap --> identify, review, select knowledge --> adapt knowledge to local context --> assess barriers/facilitators to knowledge use --> select, tailor, implement interventions --> monitor knowledge use --> evaluate outcomes --> sustain knowledge use
1) Knowledge creation funnel
- Knowledge inquiry (primary research studies report inconsistent effects)
- Synthesis (e.g. systematic reviews)
- Knowledge tools and products (e.g. clinical guidelines)
2) Action cycle
- Identify problem (where is the gap? background work and environmental scans to understand the local population, providers, practice environment, and gaps re: evidence-based practice)
- Adapt knowledge to local context (e.g. language and cultural adaptation)
- Assess barriers/facilitators to knowledge use (e.g. lack of skills to assess, referral problems, policy problems)
- Select, tailor, implement interventions (e.g. training health providers, redesigned service delivery)
- Monitor knowledge use
- Evaluate outcomes
- Sustain knowledge use
Barriers to the uptake of research into practice:
Barriers could be categorized into:
- Structural (e.g. financial disincentives)
- Organisational (e.g. inappropriate skill mix, lack of facilities or equipment)
- Peer group (e.g. local standards of care not in line with desired practice)
- Individual (e.g. knowledge, attitudes, skills)
Barriers to the uptake of research into practice-common barriers
- Barriers vary depending on context, but common barriers include:
- Patient expectations
- Lack of knowledge and/or skills
- Lack of time
- Practitioner or practice culture, i.e. resistance to change
- Not enough definitive evidence to guide practice
- Lack of skills for searching, appraising, and interpreting
- Lack of incentives
- Relevant literature not compiled all in one place
What can change practice?
- Knowledge translation (implementation) interventions
--> Complex interventions designed to change clinical behaviour (organisational, practitioner, patient/consumer, or policy), for instance:
- Interventions targeted at healthcare organizations (e.g. organizational culture)
- Interventions targeted at healthcare workers
--> Audit and feedback; educational meetings; educational outreach
- Interventions targeted at specific types of practice, conditions, or settings
How do you keep up to date?
Read (and critically appraise) regularly:
- Journal articles, especially systematic reviews
- Evidence-based clinical practice guidelines
- Journal clubs
- Attend (evidence-based) conferences
- Not all up to you:
--> Systems need to ensure regular review of your own performance, and you must be willing to change