L2 Evidence Based Practice and Assessment Flashcards
what is evidence-based practice
“the integration of the best research evidence with clinical expertise and patient values” (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000, p. 1)
three pillars of evidence-based practice
- scientific evidence - journals, research, etc.
- clinical expertise - experience as a clinician
- patient values - especially important in a multicultural society
what is TEKA
Total Evidence and Knowledge Approach
eminence-based practice
relying on the opinion of a medical specialist or other prominent health official when it comes to health matters, rather than relying on a careful assessment of relevant research evidence
habit-based practice
relying on doing the same thing over and over again because it has worked before, rather than examining the evidence and developing the best plan for a specific patient
convenience-based practice
relying on something because it is the most convenient way to do it, rather than examining the relevant evidence to come up with the best care plan
levels of evidence
- Levels of evidence provide a hierarchical order for a range of research designs based on their potential for bias
1. Systematic reviews
2. Critically-appraised topics (evidence syntheses)
3. Critically-appraised individual articles (article synopses)
4. Randomized controlled trials (RCTs)
5. Cohort studies
6. Case-control studies, case series/reports
7. Background information/Expert opinion
- Levels 1-3 consist of filtered information
- Levels 4-6 consist of unfiltered information
sensitivity
- of all the people who have a disease (i.e. are positive for the disease), what proportion of them actually test positive
- i.e. how many truly positive people the test will actually identify
- determined by the characteristics of the test itself; not affected by the prevalence of the disease
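A minimal sketch (not part of the original card, using made-up counts) of how sensitivity is calculated from the people who truly have the disease:

```python
# Sensitivity = true positives / (true positives + false negatives)
# i.e. of everyone who truly has the disease, what fraction does the test catch?
true_positives = 90   # diseased people who test positive (hypothetical counts)
false_negatives = 10  # diseased people the test misses

sensitivity = true_positives / (true_positives + false_negatives)
print(f"Sensitivity: {sensitivity:.2f}")  # 0.90
```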
specificity
of all the people who do not have the disease (i.e. are negative), what proportion of them actually test negative
different to positive and negative predictive values - these depend on the prevalence of the disease, not just the test itself
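A minimal sketch (hypothetical numbers only) contrasting specificity, which is fixed by the test's behaviour in disease-free people, with the predictive values, which shift as disease prevalence changes:

```python
# Specificity = true negatives / (true negatives + false positives)
true_negatives = 80   # disease-free people who test negative (hypothetical counts)
false_positives = 20  # disease-free people wrongly flagged positive
specificity = true_negatives / (true_negatives + false_positives)
print(f"Specificity: {specificity:.2f}")  # 0.80

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test used in a population with the given prevalence."""
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Same test, different prevalence: PPV drops sharply when the disease is rare.
print(predictive_values(0.90, 0.80, prevalence=0.01))
print(predictive_values(0.90, 0.80, prevalence=0.30))
```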
reliability
- the degree to which a test or tool produces similar results under consistent conditions
- i.e. the precision of the test
intra-rater reliability
the degree of agreement among repeated administrations of a diagnostic test performed by a single rater
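A minimal sketch (hypothetical measurements and an assumed metric, not from the original card) of one simple way intra-rater reliability can be examined: correlating repeated measurements taken by the same clinician. In practice an intraclass correlation coefficient is usually preferred.

```python
from statistics import correlation  # Python 3.10+

# The same clinician measures knee flexion (degrees) on two occasions (hypothetical data).
session_1 = [110, 95, 130, 120, 105, 88, 125, 140]
session_2 = [112, 97, 128, 118, 108, 85, 127, 138]

# Test-retest index: correlation between the repeated measurements by one rater.
print(f"Test-retest correlation: {correlation(session_1, session_2):.2f}")
```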
inter-rater reliability
the degree of agreement among independent observers who assess the same phenomenon
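A minimal sketch (hypothetical ratings, with Cohen's kappa assumed as the metric) of one common way inter-rater agreement is quantified: the raw agreement between two raters is corrected for the agreement expected by chance.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters: observed agreement corrected for chance agreement."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement: probability both raters independently pick the same category.
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
    return (observed - expected) / (1 - expected)

# Two clinicians classifying the same 10 patients (hypothetical data).
rater_1 = ["positive", "positive", "negative", "negative", "positive",
           "negative", "negative", "positive", "negative", "negative"]
rater_2 = ["positive", "negative", "negative", "negative", "positive",
           "negative", "positive", "positive", "negative", "negative"]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.58
```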
validity
the degree to which a test or tool measures what it is intended to measure
- i.e. the accuracy of the test