Exam 2 Flashcards
What is validity?
Truthfulness, meaningfulness, usefulness, and/or accuracy of study results
External vs Internal Validity
External Validity: generalizability of results
Internal Validity: controlled by the study design (blinding, instrumentation, attrition)
What is face validity?
Does a specific measure actually measure what it is designed to measure
What is content validity?
Does the measure represent all aspects of the construct (does it take all relevant components into account)
What is concurrent validity?
Comparing your measure to the gold standard (both administered at the same time)
What is predictive validity?
Can it be used to predict a future score/outcome
What is construct validity?
How well the measure captures a defined entity (theoretical construct)
What is convergent validity?
Examines the degree to which the operationalization is similar to other operationalizations it should be similar to (e.g., one Head Start program compared to other Head Start programs)
What is discriminant validity?
Examines the degree to which one thing differs from things it should differ from (e.g., one Head Start program compared to non-Head Start programs)
How is validity typically measured?
Correlations, -1 to 1
What analyses are used for which data types?
Interval and Ratio (Continuous) - Pearson
Ordinal - Spearman Rank
Nominal (Dichotomous) - Phi
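The three correlation types above can be sketched in plain Python. This is an illustrative sketch, not from the cards: Pearson r for continuous data and the phi coefficient for a dichotomous 2x2 table (Spearman rank is just Pearson applied to the ranks of the data). The example data are made up.

```python
import math

def pearson_r(x, y):
    """Pearson correlation for interval/ratio (continuous) data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def phi_coefficient(a, b, c, d):
    """Phi for dichotomous (nominal) data, from the four 2x2 cell counts."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Perfectly linear continuous data -> r = 1.0
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))   # 1.0

# Hypothetical 2x2 table: a=20, b=5, c=10, d=15
print(round(phi_coefficient(20, 5, 10, 15), 3))        # 0.408
```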
What is Reliability?
Consistency of a specific measure
Ability to produce consistent repeated measures of a test
What are the two components of reliability
True Component
Error Component (variety of sources)
What type of data is required for Reliability Measures?
Continuous - Ratio or Interval
What are the breakdown scores for ICC (Reliability)?
Good: > .75
Moderate: .51 - .75
Poor: ≤ .50
What type of data is agreement?
Categorical (Nominal)
Kappa statistic takes out the chance aspect
What are the Kappa score breakdowns?
Almost Perfect: .81 - 1.0
Substantial: .61 - .80
Moderate: .41 - .60
Fair: .21 - .40
Slight: .01 - .20
Poor (equal to chance): < 0
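How kappa "takes out the chance aspect" can be shown directly: kappa = (observed agreement − chance agreement) / (1 − chance agreement). A minimal sketch with a made-up two-rater agreement table:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table (rows = rater 1, cols = rater 2)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n              # observed agreement
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    # Chance agreement: product of each category's marginal proportions
    p_e = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: two raters, two categories, agree on 35 of 50 cases
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 3))  # 0.4 -> "Moderate" on the breakdown above
```

Raw agreement here is 70%, but half of that was expected by chance alone, so kappa lands at .40.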
What is the Minimal Detectable Change?
Smallest amount of change an instrument can accurately measure
Changes must exceed MDC to be beyond measurement error
Does not provide clinical meaningfulness
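The cards do not give the MDC formula; a commonly used one (stated here as an assumption) derives MDC at 95% confidence from the SEM: MDC95 = 1.96 × SEM × √2.

```python
import math

def mdc95(sem):
    # 1.96 covers the 95% confidence level; sqrt(2) accounts for error in
    # two measurements (test and retest). Common formula, not from the cards.
    return 1.96 * sem * math.sqrt(2)

# Hypothetical SEM of 2.0 points on an outcome scale
print(round(mdc95(2.0), 2))  # 5.54
```

A change smaller than about 5.5 points on this scale could be measurement error rather than real change.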
What is the Minimal Clinically Important Difference?
Smallest difference that clinicians and patients would care about
Identify change in health status measure associated with improvement that is meaningful
Compares two measures (Pain: VAS, clinician-derived measure: ROM)
What is Ceiling Effect?
Instrument does not register a further increase in score for higher scoring individuals
What is Floor Effect?
Instrument does not register a further decrease in score for lower scoring individuals
What variables are Statistically significant?
p-values
Precision of estimation/confidence intervals
Type 1 and Type 2 errors
Power
What variables are Clinically significant?
Size of the difference
Does change exceed MCID
Effect Size measurements
Specificity, sensitivity, LR, NNT, RR, ARR
p-values
Risk of Type 1 error
Does not indicate importance or clinical relevance
What are type 1 errors?
Reject the null hypothesis when it is actually true
Conclude a difference exists, but it doesn’t actually exist
Rare (False Positive)
What are Type 2 errors?
Do not reject the null hypothesis when it is actually false
No significant difference detected, but a difference exists
More Common (False Negative)
What factors impact statistical power?
Significance level (α)
Effect size (differences between measures and variance)
Sample size
How does sample size impact power?
Larger sample increases the ability to detect smaller differences between groups
What is effect size?
Determines magnitude of treatment effect (meaningfulness of results)
Allows normalized comparison of results (removes units from outcomes)
Accounts for variation across samples
What is Cohen’s d?
Most common way to express effect size
Usually positive; a negative value indicates a decrease (e.g., pain)
What are the breakdowns for effect size scores?
Large: .80
Moderate: .50
Small: .20
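One common formulation of Cohen's d (a sketch with made-up numbers, not from the cards) divides the difference in group means by a pooled standard deviation:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d using a pooled SD (one common formulation)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical groups: means 10 vs 8, both SDs 2, 20 per group
print(cohens_d(10, 8, 2, 2, 20, 20))  # 1.0 -> "large" by the thresholds above
```

Because the difference is expressed in SD units, the result is unit-free and comparable across outcome measures.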
How are SEM, MDC, and MCID related?
SEM and MDC provide context, but MCID provides meaning
What errors influence statistical power?
Type II Errors
Sample size and variance
Sensitivity =
a/(a + c)
SnOut
Specificity =
d/(b + d)
SpIn
+LR =
Sensitivity/(1 - Specificity)
-LR =
(1 - Sensitivity)/Specificity
PPV =
a/(a + b)
NPV =
d/(c + d)
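The formulas above all come from one 2x2 table, with the standard cell convention a = true positives, b = false positives, c = false negatives, d = true negatives. A sketch with a made-up table:

```python
def diagnostic_stats(a, b, c, d):
    """a = true pos, b = false pos, c = false neg, d = true neg."""
    sn = a / (a + c)              # sensitivity
    sp = d / (b + d)              # specificity
    return {
        "Sn": sn,
        "Sp": sp,
        "+LR": sn / (1 - sp),
        "-LR": (1 - sn) / sp,
        "PPV": a / (a + b),
        "NPV": d / (c + d),
    }

# Hypothetical table: 45 TP, 15 FP, 5 FN, 35 TN
stats = diagnostic_stats(45, 15, 5, 35)
for name, value in stats.items():
    print(f"{name}: {value:.2f}")
# Sn 0.90, Sp 0.70, +LR 3.00, -LR 0.14, PPV 0.75, NPV 0.88
```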
What are Likelihood Ratios?
Incorporate sensitivity and specificity
Provide a direct estimate of how much a test result will change the odds of having that condition
What are the LR breakdowns?
Strong (conclusive): +LR >10/-LR < .1
Moderate (important): +LR 5 - 10/-LR .1 - .2
What is the order of the Diagnostic Process?
Pre-test probability (prevalence)
Patient History (develop working hypothesis)
Select specific tests to confirm/refute
Post-test probability (treatment threshold: likelihood patient has that disorder)
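The step from pre-test to post-test probability works through odds: convert the pre-test probability to odds, multiply by the likelihood ratio, then convert back. A sketch with made-up numbers:

```python
def post_test_probability(pre_test_p, lr):
    """Shift a pre-test probability by a likelihood ratio, via odds."""
    pre_odds = pre_test_p / (1 - pre_test_p)  # probability -> odds
    post_odds = pre_odds * lr                 # LR multiplies the odds
    return post_odds / (1 + post_odds)        # odds -> probability

# Hypothetical: 30% pre-test probability (prevalence), positive test with +LR = 3
print(round(post_test_probability(0.30, 3), 4))  # 0.5625
```

A moderate +LR of 3 moves a 30% pre-test probability up to about 56%, which may or may not cross the treatment threshold.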
What is the PEDro scale?
Allows quantification of the quality of a research study
10-point scale (11 items; the first item is not scored)
Designed for clinical trials, but may be used for other types of studies
Disease-oriented vs patient-oriented outcome measures
Disease-oriented: Physiology of illness (ROM)
Patient-oriented: direct patient interest, patient-oriented evidence that matters (POEMs), functional aspects of the loss of ROM
Clinician-derived vs patient self-report outcome measures
Clinician-derived: almost always disease-oriented (MMT, ROM)
Patient self-report: general or global health, survey patient fills out
What is a disability?
Inability/Limitation in performing socially defined roles/tasks expected of an individual within a sociocultural and physical environment due to functional limitations
What is the paradigm shift in measuring outcomes moving toward?
Not only measuring impairments, but also quantifying changes in: functional limitations, disability, and QOL
What are the Objective Outcome Measures?
ROM, MMT, Limb Girth, Blood Count
What are the Subjective Outcome Measures?
Self-Report Questionnaires (by patient or clinician)
These focus on functional limitations, disability, or QOL
Disparity between patient and clinician-reported (clinicians rate higher)
What are Global Health Measures?
Lean towards indicators of disability and QOL
Good at tracking patients with chronic diseases (limited ability for active populations: ceiling effect)
Examples: SF-36 and SF-12, Global Rating of Change, Sickness Impact Profile
Pros and Cons of SF-36
Low scores indicative of greater disability
Pros:
You can give it to everyone
Helps you refer to other health professions
Cons:
Not specific
Some questions may be embarrassing to answer
Patients may answer how they think the PT wants them to
Region-Specific Health Questionnaires
Scales to specifically look at body-region of interest
Focus more on functional limitations and disability
Good at tracking recovery from specific pathologies
Better utility for active populations
Examples: FAAM, DASH, LEFS
Can use it at baseline, during, and after for tracking
Oswestry Low Back Disability Index
Measure patient’s impairment and QOL in relation to LBP
10 questions
Higher scores = greater disability
Disabilities of the Arm, Shoulder, and Hands (DASH)
30 item, self-report to measure physical function and symptoms of musculoskeletal disorders of upper limb
Higher score = greater disability
Lower Extremity Functional Scale (LEFS)
Intended for use on adults with lower extremity conditions
20 items, 5-point scale
Higher scores = better function
Dimension Specific Health Questionnaires
More focus on psychosocial
Assess specific physical or emotional phenomena (pain, anxiety, depression)
Needs to be valid for population
Examples: Beck depression index, Pain disability Index, McGill Pain Questionnaire
Single Item Outcome Measures
Single Assessment Numeric Evaluation (SANE): rate current level of function for ADLs compared to prior level of function
Unidimensional
Too vague: not anchored directly to particular injury
What are the limitations of General Health Outcome Tools?
Not population specific
Physically active populations may always be seen as being “relatively” healthy compared to the whole population
How to pick the best outcome measure?
Measure should match the purpose
Able to discriminate among patients (validity)
Capacity to assess change over time (reliability, MDC, MCID)
What is measured value?
True Value + Error
Smaller error provides better indication of true value
Increased confidence in measure value
What is the Standard Error of Measurement?
Absolute reliability (typical error of an individual score)
Quantifies consistency in the same units as the measure
Provides insight into meaningful changes
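The cards do not give the SEM formula; a commonly used one (stated here as an assumption) ties it to the sample SD and a reliability coefficient such as the ICC: SEM = SD × √(1 − reliability).

```python
import math

def sem(sd, reliability):
    # SEM = SD * sqrt(1 - reliability); reliability is typically an ICC.
    # Common formula, stated as an assumption (not from the cards).
    return sd * math.sqrt(1 - reliability)

# Hypothetical scale: SD of 5 points, test-retest ICC of 0.84
print(round(sem(5.0, 0.84), 2))  # 2.0
```

Note how the pieces connect: higher reliability shrinks the SEM, a smaller SEM shrinks the MDC, and only the MCID tells you whether a change that clears the MDC actually matters.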
What are the 2 components of MCID?
Anchor-based: Linked to external anchor (global rating of change), dependent on recall and subject to bias
Distribution-based: based on statistical characteristics of the sample population (change beyond chance); doesn’t account for patient perspective
Strengths of MCID
Threshold to detect change
Accounts for patient perspective
Set treatment goals (clinical)
Determine sample size (research)
Demonstrate treatment effectiveness
Limitations of MCID
Not universal fixed attribute (threshold without ranges)
Calculation methods vary (produce range of results)
Not transferable across patient populations (MCID ranges may vary)
What is a single subject design?
Prospective, extended baseline, controlled conditions
What is a case report (clinical case report)?
Description of clinical practice, non-experimental
Prospective or Retrospective (may be easier since you know more about them, highlights importance of documentation and outcome measures)
What is a case study?
Qualitative design, experimental
What is the hierarchy of Clinical Research Design?
Meta-Analysis
Systematic Review
RCT
Cohort Study
Outcomes Studies
Case Control Study
Cross Sectional Study
Case Series
Case Report
What are the purposes of a Case Report?
Share Clinical Experiences
Illustrate EBP
Develop Hypotheses for Research
Build Problem-Solving Skills
Test Theory
Persuade and Motivate
Help Develop Practice Guidelines and Pathways
What are the impacts of Case Reports?
Change clinical practice
Highlight unique patient presentation/diagnosis (special tests, imaging, metrics related to diagnostic accuracy [Sn, Sp, LRs])
Framework for treatment
Suggest areas for further research
What are the limitations to sensitivity and specificity?
Not clinically intuitive
Don’t change probability
What control measurement techniques are used in Case Reports?
Clinical
Functional
Patient-reported Function
What is the generalizability of Case Reports?
Most applicable to patient care (used in patient care, brings in the aspect of patient experience, not a controlled environment)
Least rigorous approach (sacrifice internal validity from confounding factors and smaller details)
Single person not representative of population (everyone is different, so RCTs and case reports can’t be applied in both ways to each person)
What is the quality of Case Reports?
Quality guidelines don’t exist
Many journals have suggested guidelines
ICF model may provide some structure
What is the format of a Case Report?
Intro (review of relevant literature, purpose statement)
Methods/Case Description
Results
Discussion and Conclusion
What makes up the intro portion of a case report?
Relevant literature review
Patient condition
Rationale for intervention and outcome measures
Indicate knowledge gap
Purpose Statement
Convince reader the topic is important
What makes up the Methods/Case Description portion of a case report?
Describe the patient:
Demographics
Past Medical History
Unique Presentation/Diagnosis
Examination Data:
Include reliability and validity
Rationale for selection
Describe clinical decision-making process (evaluation):
Treatment Approach
What makes up the results portion of a case report?
Describe the outcomes (each follow up point)
Consider table, graph, flow chart (chronological order)
Provide the facts (interpretation of findings reserved for discussion)
What makes up the discussion and conclusion portion of a case report?
Provide context to the results (meaning and application, related to MDC and MCID)
Compare and contrast to existing literature (relate to intro section)
Make specific recommendation to advance clinical practice (future research)
Clinical relevance
What should you be careful about in the discussion and conclusion section?
Don’t overgeneralize results
Can’t determine cause and effect (usually implied) since there is no control group
Does case report data have to be done consecutively?
No, one person can be from January - March, then another from July - September
Why is it important to track clinical outcomes?
Identify patterns
Insurance companies pay for outcomes
Treatments that work better/worse for future reference
Determine clinician effectiveness
What is Fee-for-Service?
Clinicians are paid based on the volume of services, not value
What is the Merit-based incentive payment system?
Performance-based payment adjusted to Medicare payment
Aggregate score across 4 categories determines payment adjustment (Quality, cost, advancing care info, improvement)
Unfortunately, many things not in your control
What is a Risk Adjustment?
Accounts for variables that influence outcomes (age, acuity, comorbidities, medication use)
Set reasonable goals
Helps predict number of visits required to achieve predicted outcome
What’s the purpose of benchmark data?
Effectiveness
Efficiency
Patient satisfaction
What is Evidence-based practice?
Integration of the best research evidence with clinical experience and patient values to make clinical decisions
Why do we need EBP?
Most clinical practices are handed down without ample scientific evidence demonstrating clinical efficacy
Need to be accountable for the care we render
EBP = accountability (patients, employers, insurance)
Link theory to practice (we don’t know if treating hamstring strains with US followed by flexibility exercises makes a difference in their outcome)
Why is EBP important?
Patient care improved
Third-party reimbursement
Development of knowledge base
Development and maintenance of respect within the healthcare community
What is evidence based practice not?
Cookbook/blueprint for practice
Conspiracy to discount what clinicians have been previously taught
Shouldn’t replace clinical judgment
Shouldn’t restrict practice
What are the Core dimensions of expert practice in PT?
Knowledge
Clinical Reasoning
Movement
Virtues
Clinical practice is:
Challenging
Complex and uncertain
Constantly changing
Patient centered
Demands innovation and creativity
Perfect venue for the development of clinical reasoning abilities
What is clinical reasoning?
The sum of the thinking and decision-making processes associated with clinical practice
What is critical thinking?
One cognitive component of clinical reasoning where you analyze the evidence that exists in the literature, but it doesn’t encompass the contextual factors (patient and environmental factors from ICF framework) that are important in the reasoning process