Final Flashcards

1
Q

What is Evidence-Based Practice (EBP)

A
involves a combination of the best available evidence, the clinical circumstances, and patient/client needs, values and circumstances
(puzzle pieces) 
1. Clinical expertise 
2. Best Research Evidence 
3. Patient Values and Preferences
2
Q

Steps for EBP (5 in a circle)

A
  1. Ask question
  2. Search
  3. Critically appraise
  4. Implement
  5. Evaluate
3
Q

Interdependent Processes: (triangle)

A

Research
evidence-based practice
Critical Appraisal
(look at slide)

4
Q

How do you use research in Practice?

Ask a clinical question - at least 2 approaches

A
  1. Find individual articles on the topic
    -Search the literature
    -select/exclude articles
    -Extract data from the articles
    -Critically appraise the retained articles
    -Summarize results and make evidence-informed clinical decisions
  2. Read articles that combine information
    -Reviews: systematic and/or scoping
    -Clinical practice guidelines
5
Q

Literature review (3 points)

A

-Consist of organized summaries of evidence
-Are an efficient way to access a specific body of research
-Facilitate the exploration of similarities and differences between individual studies

6
Q

Narrative, scoping, systematic chart

Compare: Question

A

Narrative: Often broad in scope; often >1 research question

Scoping: Often broad in scope, to explore the breadth of a topic

Systematic: Usually a single focused research question

7
Q

Narrative, scoping, systematic chart

Compare: Sources and Searches

A

Narrative:
-Not usually specified
-Potentially biased

Scoping:
-Comprehensive
-Strategy explicitly stated
-Reproducible

Systematic:
-Comprehensive
-Strategy explicitly stated
-Reproducible

8
Q

Narrative, scoping, systematic chart

Compare: Number of Independent Reviewers

A

Narrative: One

Scoping: At least two to do the most important steps independently (e.g. study selection); a third reviewer will solve disagreements

Systematic:
At least two to do the most important steps independently (e.g. study selection); a third reviewer will solve disagreements

9
Q

Narrative, scoping, systematic chart

Compare: Study Selection

A

Narrative:
Not usually specified; potentially biased

Scoping:
Inclusion/exclusion of study design could be somewhat flexible

Systematic:

  • Explicit inclusion and exclusion criteria
  • Uniformly applied
10
Q

Narrative, scoping, systematic chart

Compare: Study Appraisal

A

Narrative:
Variable

Scoping:
Critical appraisal not required, sometimes conducted

Systematic:
Rigorous critical appraisal (risk of bias)

11
Q

Narrative, scoping, systematic chart

Compare: Synthesis

A

Narrative:
Qualitative summary common

Scoping:
Qualitative synthesis

Systematic:
Qualitative synthesis +/- quantitative synthesis (meta-analysis)

12
Q

Narrative, scoping, systematic chart

Compare: Conclusions/Inferences

A

Narrative:
Conclusion reinforces authors' thesis; sometimes evidence-based

Scoping:
Conclusion is identification of parameters or gaps in body of knowledge/ inferences based on evidence

Systematic:

  • Conclusion involves consideration of quality of studies
  • Evidence-based
13
Q

Narrative, scoping, systematic chart

Compare: Reporting

A

Narrative:
Lack of transparency in reporting of processes

Scoping:
Explicit reporting of processes

Systematic:
Explicit reporting of processes

14
Q

The main similarities between conducting scoping and systematic reviews include (1 point)

A

Transparent and systematic methods for search strategies and reporting

15
Q

The main differences between conducting scoping and systematic reviews include (6 points)

A

  • the nature of the question
  • the nature of the collected data
  • whether or not critical appraisal is conducted
  • the type of data synthesis
  • whether or not consultation is conducted
  • the need for updating
16
Q

Levels of Evidence (5 layers)

A

The commonly thought-of levels of evidence are an important resource when looking at typical therapeutic interventions (lowest to highest):

  1. Background Information/Expert Opinion
  2. Observational Descriptive Designs: cross-sectional studies, case studies
  3. Observational Analytic Designs: cohort designs, case-control studies
  4. Randomized Controlled Trials
  5. Systematic Reviews and Meta-Analyses
17
Q

Overview of Producing a systematic Review (3 steps)

A

Many steps are involved in doing a systematic review, with an emphasis on reducing bias at each step.

  1. Search, identify, select, and critically appraise relevant research.
  2. Collect and analyse/combine individual study data.
  3. Minimize bias at each step of the review. Each step should be performed by two independent researchers who will discuss and evaluate the results upon completion. In the case of a disagreement, a third researcher should be consulted.
18
Q

Steps for Conducting a Systematic Review: (9 steps)

A
  1. Define the research question
  2. Establish eligibility criteria (inclusion and exclusion criteria)
  3. Search articles using rigorous strategy
  4. Select articles based on eligibility criteria
  5. Data extraction
  6. Assess studies for risk of bias to determine methodological quality
  7. Synthesize the results in a qualitative way, and may or may not perform quantitative synthesis (meta-analysis)
  8. Interpret results and draw conclusions
  9. Improve and update review
19
Q

Steps for Conducting a systematic review vs EBP steps

A
  1. Define the research question
  2. Search articles using rigorous strategy
  3. Assess studies for risk of bias to determine methodological quality
20
Q

What is Critical Appraisal

A

Process of systematically examining evidence to assess its validity, results and relevance before using it to inform a practice or policy decision

21
Q

Critical Appraisal Tools can be broadly classified into

A
  1. Research design-specific
  2. Generic

22
Q

Research design-specific

A

contain items that address methodological issues unique to a specific research design
-> This precludes comparison of the quality of different designs

23
Q

Generic

A

aims to enhance the ability of research consumers to synthesize evidence from a range of qualitative and/or quantitative study designs

24
Q

Most frequently assessed in RCT appraisal tools - Data Analyses:

A

Whether appropriate statistical analysis was performed, whether a sample size justification or power calculation was provided, and whether side effects of the intervention were recorded and analysed

25
Q

Most frequently assessed in RCT appraisal tools - Blinding:

A

Whether the participant, clinician and assessor were blinded to the intervention

26
Q

Some critical Appraisal tools for RCT

A
  • Jadad Scale
  • Cochrane Collaboration risk of bias assessment tool
  • PEDro Scale
  • CASP RCT checklist
    -> 11 questions based on the Users' Guides
27
Q

Reporting a RCT

A

CONSORT statement for reporting RCT

28
Q

Jadad Scale

A
An older tool, from: Assessing the quality of reports of randomized clinical trials: Is blinding necessary? (Control Clin Trials)
-Only 3 questions
-Easy to apply
-Should not take more than 10 minutes
-Ends with a score
29
Q

Cochrane Collaboration’s tool for assessing risk of bias Includes:

A
6 categories:
-Selection bias
-Performance bias
-Detection bias
-Attrition bias
-Reporting bias
-Other bias
Scored: low, unclear, or high risk of bias
-> No score; subjective
-> Hard to apply without a good understanding
30
Q

Cochrane Collaboration’s tool for assessing risk of bias: Selection Bias

A

-Random sequence generation
-Allocation concealment

31
Q

Cochrane Collaboration’s tool for assessing risk of bias

Performance Bias

A

-Blinding of participants and personnel

32
Q

Cochrane Collaboration’s tool for assessing risk of bias

Detection Bias

A

Blinding of outcome assessment

33
Q

Cochrane Collaboration’s tool for assessing risk of bias :

Attrition bias

A

Incomplete outcome data
(people who do not finish the intervention are not included, which causes bias)

34
Q

Cochrane Collaboration’s tool for assessing risk of bias:

Reporting Bias

A

Selective reporting

35
Q

Cochrane Collaboration’s tool for assessing risk of bias: Other bias

A

Any other sources of bias not covered by the categories above

36
Q

PEDro Scale

A
Scoring the PEDro scale: 0 to 10
-Interpreted by Stroke Engine as:
*High quality = PEDro score 6-10
*Fair quality = PEDro score 4-5
*Poor quality = PEDro score <3
Many people use the score, BUT it is important to note the items that are scored as absent when considering the nature of potential bias

Online training is available from the organization
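The quality bands above can be sketched as a small helper. This is only an illustration of the bands as quoted on the card; note the quoted bands leave a score of exactly 3 uncovered (fair starts at 4, poor is <3), so this sketch groups 3 with "poor" as an assumption:

```python
def pedro_quality(score: int) -> str:
    """Interpret a PEDro score (0-10) using the bands quoted above.

    Quoted bands: high = 6-10, fair = 4-5, poor = <3. A score of exactly 3
    is not covered by the quoted bands; grouping it with "poor" here is an
    assumption, not part of the original interpretation.
    """
    if not 0 <= score <= 10:
        raise ValueError("PEDro scores range from 0 to 10")
    if score >= 6:
        return "high"
    if score >= 4:
        return "fair"
    return "poor"

print(pedro_quality(8))  # high
print(pedro_quality(4))  # fair
```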

37
Q

PEDro Scale

A

11 questions
Advantages: common, easy to apply, yields a score, quick
Disadvantage: the score can be misleading

38
Q

PEDro Scale:

Eligibility criteria were specified

A
External validity
(this item does not count toward the final score)
39
Q

PEDro Scale:

Randomly Allocated

A
  • The precise method need not be specified
  • Quasi-randomization methods do not satisfy this criterion

40
Q

PEDro Scale: Allocation Concealment

A

The practice of keeping researchers and participants unaware of the sequence, with the goal of preventing the researchers from influencing the group assignment of participants

  • Proper randomization depends on the implementation of adequate allocation concealment
  • Sometimes you need to read carefully to infer this from the methods section
41
Q

PEDro Scale: Groups equivalent at baseline regarding the most important prognostic indicators

A

key outcomes/condition severity

42
Q

PEDro Scale : Blinding of all subjects

A
43
Q

PEDro Scale: Blinding of assessors measuring at least one key outcome

A

-NB: in trials in which key outcomes are self-reported (e.g., visual analogue scale, pain diary), the assessor is considered to be blind if the subject was blind

44
Q

PEDro Scale : Measures of at least one key outcome obtained from at least 85% of allocated subjects

A

-The numbers allocated and assessed should be reported for at least one follow-up time point

45
Q

PEDro Scale: All subjects received allocated treatment, or data for at least one key outcome was analyzed by “intention to treat”

A

ITT vs per-protocol analysis

46
Q

PEDro Scale : Results of between group statistical comparison are reported for at least one key outcome

A

Comparison between groups is reported

47
Q

PEDro Scale : Point estimates and measures of variability are reported for at least one key variable

A

-Point measure is a measure of the size of the treatment effect (mean difference, regression coefficient, etc.) with a variability measure (e.g. standard deviation)

48
Q

Why is it important to analyze according to ITT?

(Intention To Treat, ITT) analysis

A

*Random allocation usually results in group equivalence at baseline. Intention-to-treat analyses are done to avoid the effects of crossover and dropout, which may break the random assignment to the treatment groups in a study
-> Reasons for drop-outs may be due to the intervention, especially differential drop-outs
-> Maintains sample size (statistical power) - ideal if all subjects comply and complete
-ITT analysis may result in a conservative estimate of treatment effect due to lack of adherence to protocol
-ITT achieves the best estimate if subjects have received the allocated interventions and there has been minimal loss to follow-up

49
Q

Per Protocol (PP) analysis

A

Only subjects who adhered to the protocol, i.e., who completed the interventions and outcome assessments, are included in the analysis
-Some risk of bias
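The ITT/PP contrast can be made concrete with a minimal sketch. The data below are invented toy numbers (the field names and values are assumptions, not from the slides): ITT averages everyone as randomized, while per-protocol drops the non-completer, which here shifts the treatment group's apparent mean.

```python
# Hypothetical toy data: assigned group, whether the participant completed
# the protocol, and the measured outcome.
participants = [
    {"group": "treatment", "completed": True,  "outcome": 7.0},
    {"group": "treatment", "completed": False, "outcome": 3.0},  # dropout
    {"group": "control",   "completed": True,  "outcome": 4.0},
    {"group": "control",   "completed": True,  "outcome": 5.0},
]

def mean_outcome(records, group, per_protocol=False):
    """Mean outcome for a group; per-protocol keeps only completers."""
    vals = [r["outcome"] for r in records
            if r["group"] == group and (r["completed"] or not per_protocol)]
    return sum(vals) / len(vals)

# ITT analyzes everyone as randomized; PP excludes the dropout.
print(mean_outcome(participants, "treatment"))                     # 5.0
print(mean_outcome(participants, "treatment", per_protocol=True))  # 7.0
```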

50
Q

How would we summarize literature and incorporate critical appraisal results:

A

Incorporate quality - high-quality vs low-quality studies -> this relates to confidence in the findings

51
Q

Level of Evidence

A

Levels of evidence are not the same as critical appraisal

  • They indicate the study designs that are best suited to answer a specific research question
  • Indicate the version you are using, since the table keeps changing
52
Q

Levels of Evidence Pyramid

A

Top:
-Systematic Reviews and Meta-Analyses of RCTs
-Randomized Controlled Trials
-Observational Analytic Designs: cohort designs
-Observational Descriptive Designs: cross-sectional studies, case studies
Bottom:
-Background Information/Expert Opinion

53
Q

Clinical Practice Guidelines (CPGS)

A

Systematically developed statements to assist practitioner and patient decisions about appropriate healthcare for specific clinical circumstances. They should take into consideration patient/client values and preferences

54
Q

Enhance clinical judgement

A

CPGs are intended to enhance, not replace, clinical judgement and expertise. It is believed that their use will improve the process and outcomes of care for patients/clients. For the healthcare system, it is believed that their use will decrease practice variation and optimize resource utilization

55
Q

Guide Practice (CPG)

A

CPGs are used as a practice guide; they are not a “standard of care” and are not legally required as a minimum of care (except if they become part of the standard of care)

56
Q

Provide Synthesis and Recommendations

A

Provide a synthesis of a topic and recommendations for practice. As such, they are developed based on existing systematic reviews or require that systematic reviews be conducted as an initial step. If there is not enough literature for a systematic review, individual research studies are cited. In the absence of even individual studies, expert opinion consensus may form the basis of recommendations

57
Q

Are there CPG Summaries?

A

Some CPGs are very long and informative, as they may include all aspects of prevention, assessment, and management of a particular condition. Sometimes a quick reference guide, decision rule or prediction rule, or other summary of the guideline is available to make a guideline more accessible for use in clinical practice.

58
Q

How are CPGs Developed: Preparing for CPG Development:

A

Involves selecting a topic or condition, determining the scope of the CPG, identifying and adapting existing CPGs, forming an interdisciplinary guideline development group, and involving consumers of the guideline

59
Q

How are CPGs Developed:Systematic Reviewing of Literature

A

involves establishing specific clinical questions that will be addressed in the guideline, systematically searching, including or excluding identified research, and appraising research (critical appraisal).

60
Q

How are CPGs Developed: Drafting the CPG

A

involves developing recommendations, developing an implementation strategy, consulting on the draft CPG, and writing a summary version of the CPG

61
Q

How are CPGs Developed: Reviewing the CPG

A

requires planning for evaluating the impact, revising and updating the CPG

62
Q

How will you know if CPGs are valid

A
  • The validity of CPGs is based on the use of systematic literature reviews in their development
  • Explicit links between the recommendations for Practice and scientific evidence are required
  • Having international, national, or regional guideline development groups that include representatives of all key disciplines and consumers of the CPGs enhances validity
  • To maintain validity and currency, there should be periodic review - and updating - of the CPG every few years
63
Q

How do CPGs Help Busy Clinicians

A
  • They provide a resource for which the strength of evidence is already evaluated (critically appraised)
  • They provide Recommendations for Practice that are based on evidence and endorsed by the development team
64
Q

Adapting CPGs for clinical Practice: CPGS may:

A
  • Be adapted or created to be setting-specific
  • Be integrated into documentation as an aid to documenting
  • Have accompanying clinical practice aids (e.g. clinical pathways or algorithms) that help organize the sequence and timing of events of care. There may be clinical algorithms that are complex instructions for addressing a particular issue, such that decisions and consequences are conditional, branching, and/or logical
65
Q

Users’ Guides for Practice Guidelines contain three key questions

A
  1. Are the results of the study valid?
  2. What were the results?
  3. Will the results help in caring for my patients?
66
Q

The Agree II instrument what is it

A

The most widely used and accepted critical appraisal tool for CPGs, as it is methodologically rigorous and validated. It is often used to compare CPGs if several exist on the same topic. In addition, it provides direction for appropriate development and dissemination of guidelines

67
Q

What are the six domains of the AGREE II

A
  1. Scope and purpose
  2. Stakeholder involvement
  3. Rigour of development
  4. Clarity of presentation
  5. Applicability
  6. Editorial Independence

Each domain is rated on a 7-point Likert scale from 1 = strongly disagree to 7 = strongly agree
-Each of the 6 domains is scored independently and not combined into an overall score, as the domains measure different concepts

68
Q

Evidence-to-practice gaps

A
  • Research consistently produces new findings, but findings won’t change patient outcomes unless they are adopted
  • Variability between recommendations and clinical practice
    -> Common across different settings and clinical conditions
    -> Multiple studies around the world report that, on average, only 50-60% of patients receive evidence-based recommended care*
69
Q

Evidence to practice delays

A

Only an estimated 14% of research activity is translated into improvements in patient outcomes, and the translation takes an average of 17 years

70
Q

All breakthrough, no follow through

A
  • For every dollar allocated to developing breakthrough treatments, one cent is allocated to ensuring that patients actually receive them
  • Much of the US$240 billion/year worldwide investment in biomedical and healthcare research is wasted due to implementation failures
71
Q

Evidence-to-practice-gaps-PT example

A

Mikhail et al, 2005: Physical therapists’ use of interventions with high evidence of effectiveness in the management of a hypothetical typical patient with acute LBP

  • 68% of PTs used interventions with strong or moderate evidence of effectiveness
  • 90% used interventions with absence of evidence of effectiveness
72
Q

CIHR knowledge translation definition

A

A dynamic and interactive process that includes the synthesis, dissemination, exchange and ethically sound application of knowledge to improve health, provide more effective health services and products and strengthen the healthcare system

73
Q

Alternate Terms used to Describe KT

A
  • Knowledge transfer
  • Knowledge uptake
  • Research utilisation
  • Implementation
  • Innovation
  • Quality improvement
  • Knowledge management
  • Improvement
  • Dissemination and implementation
  • Knowledge to action
  • Knowledge mobilisation
  • Research translation
74
Q

What knowledge should be translated?

A

Individual studies?
  • Rarely by themselves provide sufficient evidence
  • May be misleading due to bias in conduct or random variation in findings

High-quality, up-to-date syntheses of knowledge:
  • Cochrane systematic reviews
  • Evidence-based clinical practice guidelines
75
Q

Knowledge-to-Action-Cycle

A
Knowledge creation:
-Knowledge inquiry
-Knowledge synthesis
-Knowledge tools/products

Action cycle:
-Identify problem; determine the know/do gap
-Identify, review, select knowledge
-Adapt knowledge to local context
-Assess barriers/facilitators to knowledge use
-Select, tailor, implement interventions
-Monitor knowledge use
-Evaluate outcomes
-Sustain knowledge use
76
Q

1) Knowledge creation funnel

A
  1. Knowledge Inquiry (primary research studies report inconsistent effects)
  2. Synthesis (e.g. systematic reviews)
  3. Knowledge Tools and Products (e.g. clinical guidelines)
77
Q

Action Cycle

A
  1. Identify problem (where is the gap? Background work and environmental scans to understand the local population, providers, practice environment, and gaps re: evidence-based practice)
  2. Adapt knowledge to local context (e.g. language and cultural adaptation)
  3. Assess barriers/facilitators to knowledge use (e.g. lack of skills to assess, referral problems, policy problems)
  4. Select, tailor, implement interventions (e.g. training health providers, redesigned service delivery)
  5. Monitor Knowledge Use
  6. Evaluate Outcomes
  7. Sustain Knowledge Use
78
Q

Barriers to the uptake of research into practice:

Barriers could be categorized into:

A
  • Structural (e.g. financial disincentives)
  • Organisational (e.g. inappropriate skill mix, lack of facilities or equipment)
  • Peer group (e.g. local standards of care not in line with desired practice)
  • Individual (e.g. knowledge, attitudes, skills)
79
Q

Barriers to the uptake of research into practice-common barriers

A
  • Barriers vary depending on context, but common barriers include:
  • Patient expectations
  • Lack of knowledge and/or skills
  • Lack of time
  • Practitioner or practice culture, i.e. resistance to change
  • Not enough definitive evidence to guide practice
  • Lack of skills for searching, appraising, and interpreting
  • Lack of incentives
  • Relevant literature not compiled all in one place
80
Q

What can change practice

A
  • Knowledge translation (implementation) interventions
    -> Complex interventions designed to change clinical behaviour (organisational, practitioner, patient/consumer or policy), for instance:
  • Interventions targeted at healthcare organizations (e.g. organizational culture)
  • Interventions targeted at healthcare workers
    -> Audit and feedback; educational meetings; educational outreach
  • Interventions targeted at specific types of practice, conditions or settings
81
Q

How do you keep up to date

A

Read (and critically appraise) regularly:

  • Journal articles, especially systematic reviews
  • Evidence-based clinical practice guidelines
  • Journal clubs
  • Attend (evidence-based) conferences

Not all up to you:
  -> Systems need to ensure regular review of your own performance, and you must be willing to change