Week 12: Evaluating Research Reports Flashcards
Evidence-Based Practice: health care providers
- incorporate research findings into clinical judgement and treatment decisions
- evaluate research reports
- determine if findings provide sufficient evidence to support clinical practices
EBP: critical appraisal
- determine scientific merit of research report
- applicability to clinical decision making
When evaluating any research report, what do you examine to determine its value?
- validity of design and analysis
- meaningfulness of findings
- relevance of results to practice
- it is not enough to answer yes or no; you need the rationale and implications of each answer to evaluate the study’s value
Research validity: four types (explanatory designs)
- statistical conclusion validity
- internal validity
- construct validity
- external validity
Statistical conclusion validity
- is there a relationship between the independent and dependent variables
- appropriate use of statistical procedures for analyzing data
internal validity
is there evidence of a causal relationship between IV and DV
construct validity
to what constructs can results be generalized
external validity
can the results be generalized to other persons, settings or times
Threats to statistical conclusion validity: statistical power
- statistical power: the ability to document real relationships between the IV and DV; low power means a real relationship may go undetected, and power is strongly affected by sample size (see the simulation sketch after this card)
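A minimal simulation sketch of this idea (assuming Python with numpy/scipy; the effect size and sample sizes are illustrative): with a real 0.5 SD effect, small samples frequently fail to reach significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect=0.5, alpha=0.05, reps=2000):
    """Fraction of simulated two-group t-tests that detect a real effect."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n_per_group)     # control group
        b = rng.normal(effect, 1.0, n_per_group)  # treated group (true effect exists)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print(estimated_power(10))   # small sample: low power, real effect often missed
print(estimated_power(100))  # larger sample: much higher power
```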
threats to statistical conclusion validity: violated assumptions
- violated assumptions: most statistical tests are based on assumptions; if those assumptions are not met, inferences may be erroneous (see the sketch after this card)
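A hedged sketch of what checking assumptions can look like in practice (assuming Python with scipy; the data are fabricated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0, 1, 30)    # roughly normal sample
b = rng.exponential(1, 30)  # skewed sample: violates the t-test normality assumption

print(stats.shapiro(a).pvalue, stats.shapiro(b).pvalue)  # Shapiro-Wilk normality check
print(stats.levene(a, b).pvalue)                         # Levene equal-variance check
# If assumptions fail, a nonparametric alternative may give safer inferences:
print(stats.mannwhitneyu(a, b).pvalue)
```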
Threats to statistical conclusion validity: reliability and variance
- statistical conclusions threatened by extraneous factors that increase variability within data
- unreliable measurement
- failure to standardize protocol
- environmental interference
- heterogeneity of subjects
failure to use intention-to-treat analysis
- randomization to groups should be maintained
- analyze according to original group assignment (see the sketch after this card)
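A minimal pandas sketch of the contrast (a hypothetical trial table; column names and values are illustrative, not from the lecture):

```python
import pandas as pd

trial = pd.DataFrame({
    "assigned": ["drug", "drug", "drug", "placebo", "placebo", "placebo"],
    "received": ["drug", "placebo", "drug", "placebo", "placebo", "drug"],
    "outcome":  [1, 0, 1, 0, 1, 1],
})

# Intention to treat: analyze by ORIGINAL random assignment (preserves randomization).
print(trial.groupby("assigned")["outcome"].mean())

# Per-protocol, for contrast: analyze by treatment actually received;
# this breaks randomization and can bias the group comparison.
print(trial.groupby("received")["outcome"].mean())
```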
Evaluation process - introduction (what should the introduction have/do)
- establish the problem being investigated and its importance
- demonstrate that researchers have thoroughly synthesized literature
- provide rationale for pursuing this research
- contribution of this research
- purpose, aims, or hypotheses
Evaluation process: methods
- participants (target population & accessible population)
- recruitment
- inclusion/exclusion criteria
- sample size: authors should describe how it was estimated (related to power)
evaluation process: design (methods)
- appropriate to answer research question
- control for potential confounding variables
- rationale for choices of interventions and measurements
- adequacy of time frame of study
evaluation process: data collection (methods)
- operational definition of variables
- measurement reliability (assessed within the study/based on prior research)
- measurement validity: prior research
- data collection described clearly
Evaluation process: results
- participants complete protocol as originally designed
- all participants accounted for in final measurements
- results address research question
- effect size
- results support or refute the proposed hypothesis
evaluation process: discussion
- interpret results and alternative explanations considered for findings
- address each research question
- relationship to other research
- limitations
- clinical importance of findings (clinical vs. statistical significance; there is a difference)
evaluation process: final step
- findings of sufficient strength to inform patient management
- study participants similar to the patient
- acceptability to the patient
- feasibility of intervention (clinical setting, resources available)
External validity considerations
(intervention studies)
participant description should specify
- recruitment methods
- inclusion and exclusion criteria
- sample size estimates
Internal validity considerations: design
(intervention studies)
study description should indicate
- type of design
- number of groups
- number and levels of the IV
- dependent variables, frequency of measures
- randomization of order, if repeated measures
- equal treatment for groups except for experimental intervention
Internal validity considerations: data collection
(intervention studies)
study descriptions should provide
- operational definitions (intervention and measurement procedures)
- replication
- measurement tool reliability and validity
- groups treated equally except for intervention
- bias control = blinding
Statistical conclusion validity considerations: data analysis
(intervention studies)
- confidence intervals
- confirmation that groups were similar at baseline on relevant characteristics (and how any differences were handled)
- adherence to study protocol
- attrition (participants who do not complete the study): a flowchart should account for differential attrition between groups
- intention to treat
Meaningfulness of results (intervention studies)
- state whether hypotheses were supported or not
- provide reasons why hypotheses were or were not supported
- confidence intervals and effect sizes should be reported (see the sketch after this card)
- data should reflect the amount of change or difference
- if differences are not significant, address the potential for Type II error
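A sketch of reporting an effect size (Cohen's d) and an approximate 95% CI for a mean difference (assuming Python with numpy/scipy; the data are invented for illustration):

```python
import numpy as np
from scipy import stats

control = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.3])
treated = np.array([13.2, 13.9, 12.8, 14.1, 13.5, 13.0])

diff = treated.mean() - control.mean()
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
d = diff / pooled_sd                                  # Cohen's d (effect size)

se = np.sqrt(control.var(ddof=1) / len(control) + treated.var(ddof=1) / len(treated))
t_crit = stats.t.ppf(0.975, df=len(control) + len(treated) - 2)
ci = (diff - t_crit * se, diff + t_crit * se)         # approximate 95% CI for the difference

print(f"difference = {diff:.2f}, d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```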
external validity considerations
(diagnostic studies)
participants
- sample representative of patients to whom the test would apply
- reflect the full range of the condition (variance); often purposive sampling
- inclusion and exclusion criteria specified
Internal validity considerations: design (diagnostic studies)
- meaningfulness based on criterion validity
- index test compared with reference standard
- validity of reference standard must be documented
internal validity considerations: data collection (diagnostic studies)
- both index test and reference standard given to all subjects
- methods of measurement well documented
- explanation of bias control: testers blinded to subjects’ true diagnosis
statistical conclusion validity considerations (diagnostic studies)
- data analysis:
- sensitivity, specificity, predictive values, likelihood ratios (see the sketch after this card)
- indication of the test’s ability to determine posttest probabilities
- Confidence intervals
- flow diagram: how attrition affected ratios
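A sketch computing these statistics from a hypothetical 2x2 table (the counts against the reference standard and the pretest probability are assumed for illustration), including a posttest probability via the positive likelihood ratio:

```python
tp, fp, fn, tn = 90, 30, 10, 170           # hypothetical counts vs. the reference standard

sensitivity = tp / (tp + fn)               # true positive rate
specificity = tn / (tn + fp)               # true negative rate
ppv = tp / (tp + fp)                       # positive predictive value
npv = tn / (tn + fn)                       # negative predictive value
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio

# Posttest probability: pretest probability -> odds, multiply by LR+, back to probability.
pretest_p = 0.30                           # assumed pretest probability
pretest_odds = pretest_p / (1 - pretest_p)
posttest_odds = pretest_odds * lr_pos
posttest_p = posttest_odds / (1 + posttest_odds)

print(f"Sens={sensitivity:.2f}  Spec={specificity:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
print(f"LR+={lr_pos:.2f}  LR-={lr_neg:.2f}  posttest probability={posttest_p:.2f}")
```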