Study Group - Evaluation & Research Flashcards
Reliability
Whether results can be measured consistently (can be reproduced under similar circumstances)
Validity
Accuracy of measurement (do results represent what should be measured)
Rigor
Confidence that findings/results of evaluation are a true representation of what occurred as result of program
Random Assignment
Process of determining on random basis who does & does not receive health program/intervention
Random Selection
Random identification from intended population of those who will be in program and/or evaluation
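The two terms are easy to confuse. A minimal Python sketch (population size, sample sizes, and names are all hypothetical) showing random selection into the evaluation followed by random assignment to groups:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical sampling frame for the intended population
population = [f"person_{i}" for i in range(200)]

# Random selection: who, from the intended population, is in the evaluation
selected = random.sample(population, k=40)

# Random assignment: who, among those selected, receives the program
random.shuffle(selected)
program_group = selected[:20]   # receives the intervention
control_group = selected[20:]   # does not receive the intervention
```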
CBPR (Community-Based Participatory Research)
Research in which evaluators collaborate with community members
- Improves likelihood of success & stronger impact with target population
Measurement vs Classification
Measurement - process of sorting & assigning #s to people in quantitative evaluations
Classification - assigning people into set of categories in qualitative evaluations
Causal Inference
Intellectual discipline that considers assumptions, study designs, & estimation strategies
- Allows researchers to draw causal conclusions from data
Informal Interviewing
Open-ended conversation with goal of understanding program from respondent’s perspective
- Continues until no new information is gathered & there is full understanding of the program
Triangulation
Examines changes or lessons learned from different points of view or in different ways
Quality Assurance
Using minimum acceptable requirements for processes & standards for outputs
Nonresponse Bias
- Lack of responses
- Failure to provide data
- May be due to attrition
Response Bias
Intentional or unconscious way individuals select responses
Dissemination
Spreading information widely
- Research findings take an average of 17 years to be widely implemented in practice
Implementation Science
Identifies factors, processes, & methods that increase the likelihood that evidence-based interventions are adopted & used to sustain improvements in population health
Translational Research
Studying & understanding progression of “bench-to-bedside-to-population”
- How scientific discoveries lead to efficacy & effectiveness studies, which lead to dissemination into practice
Meta-Analysis
Quantitative technique for combining results from multiple, different evaluations on same topic
- Can indicate whether findings hold across variations in populations, settings, programs, & outcomes
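One common pooling approach is fixed-effect inverse-variance weighting; a minimal sketch with fabricated effect sizes and standard errors:

```python
# Effect sizes and standard errors from four hypothetical evaluations
effects = [0.30, 0.45, 0.12, 0.50]
ses     = [0.10, 0.15, 0.08, 0.20]

# Inverse-variance weights: more precise studies count for more
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```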
Measurement Reliability
Extent to which same measure gives same results on repeated applications (concerns random error)
Interrater Reliability
Correlation between observations made by different observers at same point in time
Intrarater Reliability
Correlation between observations made by same observer at different points in time
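Both are often reported as a simple correlation; a sketch using SciPy with hypothetical ratings (the same code computes intrarater reliability if the two lists come from one observer at two points in time):

```python
from scipy.stats import pearsonr

# Hypothetical scores two observers gave the same 8 participants
rater_a = [4, 5, 3, 4, 2, 5, 4, 3]
rater_b = [4, 4, 3, 5, 2, 5, 3, 3]

r, _ = pearsonr(rater_a, rater_b)
print(f"interrater correlation r = {r:.2f}")
```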
Nonrandom Error
Measure is systematically higher or lower than true score
IRB (Institutional Review Board)
Group of individuals that reviews potential research proposals involving human subjects/participants
- Approval must be granted prior to beginning data collection
Clinical Significance
Likelihood that intervention will have noticeable benefit to participants
Statistical Significance
Likelihood of getting result by chance
- p < 0.05 is the conventional threshold
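A sketch of a standard test against the 0.05 threshold, with fabricated group scores (note that a significant p-value alone does not establish clinical significance):

```python
from scipy.stats import ttest_ind

# Hypothetical post-program scores for intervention vs. control
intervention = [72, 75, 78, 74, 79, 76, 73, 77]
control      = [70, 71, 69, 73, 72, 68, 74, 70]

t_stat, p = ttest_ind(intervention, control)
print(f"p = {p:.4f}; significant at 0.05: {p < 0.05}")
```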
Continuous Quality Improvement (CQI)
Tool to reduce costs while improving quality of services
- enhances organizational effectiveness
Evaluation Plan Framework
- Organize evaluation process
- Procedures for managing & monitoring evaluation
- Identify what to evaluate
- Formulate questions to be answered
- Timeframe for evaluation
- Plan for evaluating implementation objectives (process)
- Plan for evaluating impact objectives
- Targeted outcomes (outcome objectives)
Steps of Effective Evaluation
- Defining research population
- Identifying stakeholders & collaborators
- Defining evaluation objective
- Selecting research design that meets evaluation objective
- Selecting variables for measurement
- Selecting sampling procedure
- Implementing research plan
- Analyzing data
- Communicating findings
CDC evaluation standards
Utility, Feasibility, Propriety, Accuracy
What is utility (evaluation standard)?
Ensure information needs of intended users are satisfied
What is feasibility (evaluation standard)?
Conduct evaluations that are VIABLE & REASONABLE
What is propriety (evaluation standard)?
Behave legally, ethically, & with regard for welfare of participants of program and those affected by program
What is Accuracy (evaluation standard)?
Provide accurate information for determining merits of program
How can evaluators have less bias in their data collection?
Use evaluation questions that allow for more than 1 answer
What are performance measures?
Indicators of process, output, or outcomes that have been developed for use as standardized indicators by health programs, initiatives, practitioners or organizations
What should performance measures be aligned with?
Objectives
Evaluations should be __________________
Useful, feasible, ethical, accurate, & accountable
Data collection must be _____________ by decision makers & stakeholders
Relevant
How is evaluation used in needs assessment?
- Evaluating primary, secondary data, observations, & interviews
- Evaluating literature
How is evaluation used in program implementation?
Evaluating progress of program based on health indicators
Why is process evaluation important?
- Understanding internal & external forces that can impact activities of program
- Maintain and/or improve quality & standards of program performance and delivery
- May serve as documentation of what program provided & how well those provisions succeeded
Attainment Evaluation Model
Uses evaluation standards & instruments on elements that yield objectives & goals of program
Decision-Making Evaluation Model
- Uses instruments that focus on elements that yield context, input, processes, & products to use when making decisions
- Evaluates criteria that are used for making administrative decisions in the program
Goal-Free Evaluation Model
Instruments provide all outcomes (including unintentional positive/negative outcomes)
Systems-Analysis Evaluation Model
Uses instruments that serve to quantify program’s effects
What should evaluator consider when choosing evaluation design?
- Causality
- Bias
- Retrospective vs Prospective
- Time span
- Finances
- Current political climate
- # of participants
- Type of data being collected
- Data analysis & skills
- Access to group to use for comparative purposes
- Possibility to distinguish b/w exposed & unexposed to program intervention
- Type of outcome being evaluated (unbound vs bound)
What are the different types of evaluation designs?
- one group posttest only
- one group pre- & posttest
- Comparison group posttest only
- two group pre- & posttest
- one group time series
- Multi-group time series
- two group retrospective (case control)
- two group prospective (cohort)
- two group pre- & posttest with random assignment (RCT)
Process Evaluation
- Any combination of measures that occurs as program is implemented
- Ensures or improves quality of performance or delivery
- Assesses how much intervention was provided (dose), to whom, when, & by whom
Impact Evaluation
- Focuses on ultimate goal, product, or policy
- Often measured in terms of HEALTH STATUS, MORBIDITY, & MORTALITY
Outcome Evaluation
- Short term, immediate, & observable effects of program leading to desired outcomes
- What changed about public health problem?
Summative Evaluation
Evaluation occurs after program has ended
- designed to produce data on program’s efficacy or effectiveness during implementation
- Provides data on extent of achievement of goals regarding learning experience
Formative Evaluation
Conducted before program begins
- designed to produce data & information used to improve program during developmental phase
- Documents appropriateness & feasibility of program implementation
- Ensures fidelity of program
Effectiveness
Degree to which program succeeds in producing desired result
Clinical Effectiveness
Improving health of individual patients through medical care services
Population Effectiveness
Improving health of populations & communities through medical and/or non-medical services
Efficiency
How well program and/or intervention converts inputs into positive results
fewer inputs + higher outputs = MORE EFFICIENT
Production Efficiency
Combining inputs to produce services at lowest cost
Allocative efficiency
Combining inputs to produce maximum health improvements given available resources
Efficacy
Maximum potential effect under ideal circumstances
Measurement tools must be ____________
Valid & Reliable
Procedural Equity
Maximizing fairness in distribution of services across groups
Substantive Equity
Minimizing disparities in distribution of health across groups or different populations
What should be tested/assessed for when considering using existing data collection instruments? Why?
Reading level (whether using or adapting instrument) to ensure validity of responses
What specific readability tools are there to help with this?
SMOG & Flesch-Kincaid
What does SMOG stand for?
Simple Measure of Gobbledygook
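Both tools reduce to published formulas; a minimal sketch assuming the word, sentence, and polysyllable counts have already been tallied:

```python
import math

def smog_grade(polysyllables: int, sentences: int) -> float:
    # McLaughlin's SMOG formula: grade level from count of 3+ syllable words
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Flesch-Kincaid grade-level formula
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```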
What are the advantages to using existing data collection instruments?
- Previously tested for reliability & validity
- Direct comparison measures
- Reduced cost
- User familiarity
What is a disadvantage to using existing data collection tools?
Potential for unreliable measures with different population demographics & situations
What does most appropriate data collection instrument depend on?
- Intent of program
- Intent of evaluation
- Information being acquired
What should HES consider when using existing data collection instruments?
- If item is appropriate for intended purpose
- If language is appropriate for population
- Whether test has been performed using sample from intended audience
What should HES do when only using part of data collection instrument to maintain validity?
- Aspects of questions should be retained
- Give credit for using item/collection tool
Why would HES make modifications to content, format, or presentation of question, questionnaire, or instrument?
- Adapting to data needs
- To have results that are more versatile & useful
Nominal/Dichotomous measurement & give an example
Cannot be ordered hierarchically but are mutually exclusive
- Male/Female
- Yes/No
Ordinal measurement & give an example
Provides information based on order, sequence, or rank
- scale from strongly disagree to strongly agree
Interval measurement & give an example
Common unit of measurement with no true zero
- Temperature
Ratio measurement & give an example
Common unit of measurement between each score & a true zero
- height, weight, age, etc.
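A hypothetical survey record showing one field at each level of measurement:

```python
respondent = {
    "smoker": "no",       # nominal: mutually exclusive categories, no order
    "agreement": 4,       # ordinal: 1-5 rank; gaps between ranks aren't equal
    "temp_f": 98.6,       # interval: equal units but no true zero
    "age_years": 34,      # ratio: equal units and a true zero,
}                         #   so 34 is meaningfully twice 17
```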
Ways to Measure Reliability
- Test-Retest
- Internal Reliability/Consistency
- Split-Half Method
Test-Retest
Same measurement administered at 2 points in time
Internal Reliability/Consistency
Consistency across multiple/all items instrument is meant to measure
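Internal consistency is commonly reported as Cronbach's alpha; a sketch of the standard formula with hypothetical item scores:

```python
import statistics

def cronbach_alpha(items: list[list[float]]) -> float:
    # items[i][j] = respondent j's score on item i
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

# Three hypothetical items answered by five respondents
print(round(cronbach_alpha([[3, 4, 2, 5, 4],
                            [4, 4, 3, 5, 3],
                            [3, 5, 2, 4, 4]]), 2))
```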
Split-Half Method
- 2 parallel forms administered at same point in time
- Correlation calculated b/w them
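A sketch of the split-half computation, with the Spearman-Brown correction typically applied to estimate full-instrument reliability (scores are hypothetical):

```python
from scipy.stats import pearsonr

# Hypothetical scores for 6 respondents on the two parallel halves
half_1 = [10, 14, 9, 12, 15, 11]
half_2 = [11, 13, 10, 12, 14, 10]

r_half, _ = pearsonr(half_1, half_2)
# Spearman-Brown: full-length reliability from the half-test correlation
reliability = (2 * r_half) / (1 + r_half)
print(f"split-half reliability = {reliability:.2f}")
```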
Internal Validity
Degree program caused change that was measured
- Were changes in participants due to program or by chance?
External Validity
Generalizability of results beyond participants
- Would results be the same with different target population?