Study Group - Evaluation & Research Flashcards

1
Q

Reliability

A

Whether results can be measured consistently (can be reproduced under similar circumstances)

2
Q

Validity

A

Accuracy of measurement (do results represent what should be measured)

3
Q

Rigor

A

Confidence findings/results of evaluation are true representation of what occurred as result of program

4
Q

Random Assignment

A

Process of determining on random basis who does & does not receive health program/intervention

5
Q

Random Selection

A

Random identification from intended population of those who will be in program and/or evaluation

6
Q

CBPR

A

Community-Based Participatory Research; research in which evaluators collaborate with community members

  • Improves likelihood of success & stronger impact with target population
7
Q

Measurement vs Classification

A

Measurement - process of sorting & assigning #s to people in quantitative evaluations

Classification - assigning people into set of categories in qualitative evaluations

8
Q

Causal Inference

A

Intellectual discipline that considers assumptions, study designs, & estimation strategies

  • Allows researchers to draw conclusions based on data
9
Q

Informal Interviewing

A

Open-ended conversation with goal of understanding program from respondent’s perspective

  • Continues until no new information is gathered & there is full understanding of the program
10
Q

Triangulation

A

Examines changes or lessons learned from different points of view or in different ways

11
Q

Quality Assurance

A

Using minimum acceptable requirements for processes & standards for outputs

12
Q

Nonresponse Bias

A
  • Lack of responses
  • Failure of providing data
  • May be due to attrition
13
Q

Response Bias

A

Intentional or unconscious way individuals select responses

14
Q

Dissemination

A

Spreading information widely

  • new research findings take an average of 17 years to be widely implemented in practice
15
Q

Implementation Science

A

Identifies factors, processes, & methods that increase likelihood of evidence-based interventions to be adopted & used to sustain improvement in population health

16
Q

Translational Research

A

Studying & understanding progression of “bench-to-bedside-to-population”

  • How scientific discoveries lead to efficacy & effectiveness studies, which lead to dissemination into practice
17
Q

Meta-Analysis

A

Quantitative technique for combining results from multiple, different evaluations on same topic

  • Could provide information as to whether findings hold across variations in populations, settings, programs & outcomes
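
A minimal Python sketch (with made-up effect sizes) of the fixed-effect, inverse-variance pooling commonly used to combine study results in a meta-analysis:

```python
import math

# Hypothetical effect sizes (e.g., mean differences) and standard errors
# from three evaluations of the same program -- illustrative values only
effects = [0.30, 0.45, 0.25]
std_errors = [0.10, 0.15, 0.12]

# Inverse-variance weights: more precise studies get more weight
weights = [1 / se**2 for se in std_errors]

pooled_effect = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled_effect:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: {pooled_effect - 1.96 * pooled_se:.3f} to {pooled_effect + 1.96 * pooled_se:.3f}")
```
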
18
Q

Measurement Reliability

A

Degree to which the same measure gives the same results on repeated applications (i.e., freedom from random error)

19
Q

Interrater Reliability

A

Correlation between different observers at same point in time

20
Q

Intrarater Reliability

A

Correlation between observations made by same observer at different points in time
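
A minimal sketch of checking interrater reliability as a correlation between two observers' ratings (hypothetical scores; Pearson's r is one simple option, while Cohen's kappa or an ICC is often preferred for categorical or multi-rater data):

```python
from scipy.stats import pearsonr

# Hypothetical ratings of the same 8 program sessions by two observers
rater_a = [4, 5, 3, 4, 2, 5, 4, 3]
rater_b = [4, 4, 3, 5, 2, 5, 3, 3]

r, p_value = pearsonr(rater_a, rater_b)
print(f"Interrater correlation r = {r:.2f} (p = {p_value:.3f})")

# For intrarater reliability, correlate the SAME observer's ratings
# from two different points in time instead.
```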

21
Q

Nonrandom Error

A

Measure is systematically higher or lower than true score

22
Q

IRB

A

Group of individuals that review potential research proposals that involve human subjects/participants

  • Approval must be granted prior to beginning data collection

Institutional Review Board

23
Q

Clinical Significance

A

Likelihood that the intervention will have a noticeable benefit to participants

24
Q

Statistical Significance

A

Likelihood that a result would occur by chance

  • p < 0.05 is the commonly used threshold
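
A minimal sketch of checking statistical significance with a two-sample t-test, using the conventional 0.05 alpha level (hypothetical scores):

```python
from scipy.stats import ttest_ind

# Hypothetical post-program knowledge scores -- illustrative values only
intervention = [78, 85, 90, 72, 88, 95, 81, 79]
control = [70, 74, 68, 77, 72, 80, 69, 75]

t_stat, p_value = ttest_ind(intervention, control)
alpha = 0.05  # conventional threshold
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Statistically significant" if p_value < alpha else "Not statistically significant")
```
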
25
Q

Continuous Quality Improvement (CQI)

A

Tool to reduce costs while improving quality of services

  • enhances organizational effectiveness
26
Q

Evaluation Plan Framework

A
  1. Organize evaluation process
  2. Procedures for managing & monitoring evaluation
  3. Identify what to evaluate
  4. Formulate questions to be answered
  5. Timeframe for evaluation
  6. Plan for evaluating implementation objectives (process)
  7. Plan for evaluating impact objectives
  8. Targeted outcomes (outcome objectives)
27
Q

Steps of Effective Evaluation

A
  1. Defining research population
  2. Identifying stakeholders & collaborators
  3. Defining evaluation objective
  4. Selecting research design that meets evaluation objective
  5. Selecting variables for measurement
  6. Selecting sampling procedure
  7. Implementing research plan
  8. Analyzing data
  9. Communicating findings
28
Q

CDC evaluation standards

A

Utility, Feasibility, Propriety, Accuracy

29
Q

What is utility (evaluation standard)?

A

Ensure information needs of intended users are satisfied

30
Q

What is feasibility (evaluation standard)?

A

Conduct evaluations that are VIABLE & REASONABLE

31
Q

What is propriety (evaluation standard)?

A

Behave legally, ethically, & with regard for welfare of participants of program and those affected by program

32
Q

What is Accuracy (evaluation standard)?

A

Provide accurate information for determining merits of program

33
Q

How can evaluators have less bias in their data collection?

A

use evaluation questions that allow for more than 1 answer

34
Q

What are performance measures?

A

Indicators of process, output, or outcomes that have been developed for use as standardized indicators by health programs, initiatives, practitioners or organizations

35
Q

What should performance measures be aligned with?

A

Objectives

36
Q

Evaluations should be __________________

A

Useful, feasible, ethical, accurate, & accountable

37
Q

Data collection must be deemed _____________ by decision makers & stakeholders

A

Relevant

38
Q

How is evaluation used in needs assessment?

A
  • Evaluating primary & secondary data, observations, & interviews
  • Evaluating literature
39
Q

How is evaluation used in program implementation?

A

Evaluating progress of program based on health indicators

40
Q

Why is process evaluation important?

A
  1. Understanding internal & external forces that can impact activities of program
  2. Maintain and/or improve quality & standards of program performance and delivery
  3. May serve as documentation of provisions & success of those provisions of program
41
Q

Attainment Evaluation Model

A

Uses evaluation standards & instruments focused on elements that reflect the objectives & goals of the program

42
Q

Decision-Making Evaluation Model

A
  • Uses instruments that focus on elements that yield context, input, processes, & products to use when making decisions
  • Evaluates criteria that are used for making administrative decisions in the program
43
Q

Goal-Free Evaluation Model

A

Instruments provide all outcomes (including unintentional positive/negative outcomes)

44
Q

Systems-Analysis Evaluation Model

A

Uses instruments that serve to quantify program’s effects

45
Q

What should evaluator consider when choosing evaluation design?

A
  1. Causality
  2. Bias
  3. Retrospective vs Prospective
  4. Time span
  5. Finances
  6. Current political climate
  7. # of participants
  8. Type of data being collected
  9. Data analysis & skills
  10. Access to group to use for comparative purposes
  11. Possibility to distinguish b/w exposed & unexposed to program intervention
  12. Type of outcome being evaluated (unbound vs bound)
46
Q

What are the different types of evaluation designs?

A
  1. one group posttest only
  2. one group pre- & posttest
  3. Comparison group posttest only
  4. two group pre- & posttest
  5. one group time series
  6. Multi-group time series
  7. two group retrospective (case control)
  8. two group prospective (cohort)
  9. two group pre- & posttest with random assignment (RCT)
47
Q

Process Evaluation

A
  • Any combination of measures that occurs as program is implemented
  • Ensures or improves quality of performance or delivery
  • Assesses how much intervention was provided (dose), to whom, when, & by whom
48
Q

Impact Evaluation

A
  • Focuses on ultimate goal, product, or policy
  • Often measured in terms of HEALTH STATUS, MORBIDITY, & MORTALITY
49
Q

Outcome Evaluation

A
  • Short term, immediate, & observable effects of program leading to desired outcomes
  • What changed about public health problem?
50
Q

Summative Evaluation

A

Evaluation occurs after program has ended

  • designed to produce data on program’s efficacy or effectiveness during implementation
  • Provides data on extent of achievement of goals regarding learning experience
51
Q

Formative Evaluation

A

Conducted before program begins

  • designed to produce data & information used to improve program during developmental phase
  • Documents appropriateness & feasibility of program implementation
  • ensure fidelity of program
52
Q

Effectiveness

A

Degree of how successful program is in producing desired result

53
Q

Clinical Effectiveness

A

Improving health of individual patients through medical care services

54
Q

Population Effectiveness

A

Improving health of populations & communities through medical and/or non-medical services

55
Q

Efficiency

A

How well program and/or intervention can produce positive results

fewer inputs + higher outputs = MORE EFFICIENT

56
Q

Production Efficiency

A

Combining inputs to produce services at lowest cost

57
Q

Allocative efficiency

A

Combining inputs to produce maximum health improvements given available resources

58
Q

Efficacy

A

Maximum potential effect under ideal circumstances

59
Q

Measurement tools must be ____________

A

Valid & Reliable

60
Q

Procedural Equity

A

Maximizing fairness in distribution of services across groups

61
Q

Substantive Equity

A

Minimizing disparities in distribution of health across groups or different populations

62
Q

What should be tested/assessed for when considering using existing data collection instruments? Why?

A

Literacy/reading level (whether using or adapting the instrument) to ensure validity of responses

63
Q

What specific readability tools are there to help with this?

A

SMOG & Flesch-Kincaid

64
Q

What does SMOG stand for?

A

Simple Measure of Gobbledygook
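
A rough Python sketch of the published SMOG grade formula, grade ≈ 1.043 * sqrt(polysyllables * 30 / sentences) + 3.1291; the syllable counter below is a crude vowel-group heuristic rather than a validated tool:

```python
import math
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    # Published SMOG formula (normally applied to a 30-sentence sample)
    return 1.043 * math.sqrt(polysyllables * (30 / len(sentences))) + 3.1291

sample = ("The program encourages participants to monitor their blood pressure. "
          "Educational materials explain hypertension in plain language.")
print(f"Approximate SMOG grade: {smog_grade(sample):.1f}")
```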

65
Q

What are the advantages to using existing data collection instruments?

A
  • Previously tested for reliability & validity
  • Direct comparison measures
  • Reduced cost
  • User familiarity
66
Q

What is a disadvantage to using existing data collection tools?

A

Potential for unreliable measures with different population demographics & situations

67
Q

What does most appropriate data collection instrument depend on?

A
  • Intent of program
  • Intent of evaluation
  • Information being acquired
68
Q

What should HES consider when using existing data collection instruments?

A
  • If item is appropriate for intended purpose
  • If language is appropriate for population
  • Whether test has been performed using sample from intended audience
69
Q

What should HES do when only using part of data collection instrument to maintain validity?

A
  • Aspects of questions should be retained
  • Give credit for using item/collection tool
70
Q

Why would HES make modifications to content, format, or presentation of question, questionnaire, or instrument?

A
  • Adapting to data needs
  • To have results that are more versatile & useful
71
Q

Nominal/Dichotomous measurement & give an example

A

Cannot be ordered hierarchically but are mutually exclusive

  • Male/Female
  • Yes/No
72
Q

Ordinal measurement & give an example

A

Provides information based on order, sequence, or rank

  • scale from strongly disagree to strongly agree
73
Q

Interval measurement & give an example

A

Common unit of measurement with no true zero

  • Temperature
74
Q

Ratio measurement & give an example

A

Common measurement between each score & have true zero

  • height, weight, age, etc.
75
Q

Ways to Measure Reliability

A
  1. Test-Retest
  2. Internal Reliability/Consistency
  3. Split-Half Method
76
Q

Test-Retest

A

Same measurement administered at 2 points in time

77
Q

Internal Reliability/Consistency

A

Consistency across the multiple/all items the instrument is meant to measure

78
Q

Split-Half Method

A
  • 2 parallel forms administered at same point in time
  • Correlation calculated b/w them
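
A minimal sketch of the split-half method: correlate scores on two halves of the instrument, then apply the Spearman-Brown correction to estimate full-length reliability (hypothetical item scores):

```python
from scipy.stats import pearsonr

# Hypothetical totals for 6 respondents on the odd-numbered and
# even-numbered items of the same questionnaire
odd_half = [12, 15, 9, 14, 11, 16]
even_half = [13, 14, 10, 15, 10, 17]

r_half, _ = pearsonr(odd_half, even_half)

# Spearman-Brown prophecy formula: estimated reliability of the full-length test
r_full = (2 * r_half) / (1 + r_half)
print(f"Half-test correlation: {r_half:.2f}, estimated full-test reliability: {r_full:.2f}")
```
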
79
Q

Internal Validity

A

Degree program caused change that was measured

  • Were changes in participants due to program or by chance?
80
Q

External Validity

A

Generalizability of results beyond participants

  • Would results be the same with different target population?
81
Q

Threats to Construct Validity

A
  1. Inadequate explanation of constructs
  2. Construct confounding
  3. Mono-operation bias
  4. Mono-method bias
  5. Confounding constructs with levels of constructs
82
Q

What is the difference between qualitative & quantitative data?

A

Qualitative - describes what is occurring or why it is occurring (non-numerically)

Quantitative - Numerical data that describes what is happening

83
Q

In what processes can quantitative & qualitative data be useful?

A
  • Program planning
  • Implementation
  • Evaluation
84
Q

Considerations for data collection implementation

A
  1. Find reliable, trustworthy, & skilled people to collect, enter, analyze, & manage data
  2. Define roles, responsibilities, & skills needed
  3. Monitor data collection
  4. Maintain integrity of data collected
  5. Ensure protocols address quality control measures
85
Q

Standards/Steps of Evaluation

A
  1. Engage stakeholders
  2. Describe program
  3. Focus evaluation design
  4. Gather credible evidence
  5. Justify conclusions
  6. Ensure use & share lessons
86
Q

Unbounded vs Bounded Outcomes

A

Unbounded - outcome has the possibility of existing before/after the program

Bounded - outcome can only occur once, at a particular time or triggered by a specific event

87
Q

Hawthorne Effect

A

When people in the intervention group change their behavior because they know they are being observed

  • Threat to Internal Validity
88
Q

Social Desirability Effect

A

Bias that occurs when people answer questions in a way they think will make them seem favorable to others

  • Threat to internal validity
89
Q

Expectancy Effect

A

Occurs when researcher’s expectations influence results

90
Q

Placebo Effect

A

Individual’s health improves after taking fake treatment

  • In control group and they think they are receiving intervention
  • Threat to Internal Validity
91
Q

Face Validity

A

Common acceptance or belief that measure actually measures what it is supposed to measure

  • Expert decides if scale “appears” to measure construct
92
Q

Content Validity

A

Assesses whether test is representative of all aspects of construct

93
Q

Convergent Validity

A

Measures of the same concept correlate with each other

  • type of construct validity
94
Q

Divergent Validity

A

Measure of a construct does not correlate with other measures it should not be related to

95
Q

Criterion Validity

A

Measure correlates with outcome

96
Q

Predictive Validity

A

Assesses degree measure predicts criterion measure assessed at later time

97
Q

Concurrent Validity

A

Assesses degree measure correlates with an already validated measure

  • Type of Criterion Validity
  • constructs may be same or different
  • Related constructs
98
Q

Construct Validity

A

Whether specific measure of concept is associated with 1+ other measures in a way that is consistent with theoretically derived hypotheses

  • How accurately inferences about specific features of program reflect constructs
  • Underlying theory is correct

99
Q

Discriminant Validity

A

Measures of different concepts show little or no correlation with each other

  • type of construct validity
100
Q

Threats to Internal Validity

A
  1. Ambiguous Temporal Precedence
  2. History
  3. Maturation
  4. Testing
  5. Instrumentation
  6. Regression Artifacts
  7. Selection
  8. Attrition
  9. Expectancy threat
  10. Hawthorne effect
  11. Social desirability
  12. Placebo effect
101
Q

Ambiguous Temporal Precedence

A

Lack of clarity about whether the treatment occurred before the outcome (unclear which variable caused which)

102
Q

History (threat to internal validity)

A

Before-after changes are due to other factors in the environment rather than the program

103
Q

What is maturation?

A

before-after changes due to changes occurring inside people rather than program

104
Q

What is testing in regard to threat to internal validity?

A

Before-after changes due to giving pretest rather than program

105
Q

What is instrumentation in regard to threat to internal validity?

A

Before-after changes due to changes in the instrument or those administering the instrument rather than the program

106
Q

What is regression artifacts in regard to internal validity?

A

If subjects are selected on basis of their extreme score, before-after changes may be affected partly by extreme scores naturally shifting toward mean

107
Q

what is selection in regard to internal validity?

A

Difference between program & another group due to differences in people in the groups rather than the program

108
Q

What is attrition?

A

differences between program and another group due to loss of people from either or both groups rather than the program

109
Q

What is inadequate explanation of constructs in regard to construct validity?

A

Failure to adequately explicate construct may lead to incorrect inferences

110
Q

What is construct confounding in regard to construct validity?

A

Failure to define all constructs may result in incomplete construct inferences or confusion among constructs

111
Q

What is mono-operation bias?

A

Inferences are complicated when a single operationalization of a construct both underrepresents the construct of interest & measures irrelevant constructs

112
Q

What is mono-method bias?

A

When all measures of a construct use the same method, so the method itself becomes part of the construct

113
Q

What are the field procedures for collecting data?

A
  • Protocols for scheduling initial contacts with respondents
  • Introducing instrument to respondent
  • Keeping track of individuals contacted
  • Follow up with non-respondents (when appropriate)
114
Q

How data is managed is dependent on what?

A
  • Type of data
  • How data is collected
  • How data is used throughout project lifecycle
115
Q

How does effective data management help evaluator/researcher?

A
  • Organization of data
  • Ability to access data
  • Analysis of data
  • Ensures quality of research
  • Supports published results
116
Q

Data Management Plan

A
  1. Procedures for transferring data from instruments to data analysis software
  2. Scoring guide to tell researcher/evaluation team how to code variables
117
Q

Data Analysis Plan

A
  • How data will be scored & coded
  • How missing data will be managed
  • How outliers will be handled
  • Data screening
118
Q

What does data screening (found in data analysis plan) allow/tell evaluator/researcher?

A
  • Assesses accuracy of data entry
  • How outliers & missing values will be handled
  • If statistical assumptions are met
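
A minimal pandas sketch of this kind of data screening: checking for missing values, out-of-range entries, and outliers (hypothetical variable names, values, and cutoffs):

```python
import pandas as pd
from scipy.stats import zscore

# Hypothetical evaluation dataset -- column names and values are illustrative
df = pd.DataFrame({
    "age": [34, 41, 29, 55, None, 38],
    "pre_score": [62, 70, 58, 95, 66, 61],
    "post_score": [75, 82, 60, 97, 70, 250],  # 250 looks like an entry error
})

print(df.isna().sum())             # missing values per column
print(df[df["post_score"] > 100])  # out-of-range entries (scale max assumed to be 100)

# Standardized scores help flag potential outliers for review
complete = df.dropna()
print(complete.assign(post_z=zscore(complete["post_score"])))
```
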
119
Q

What does descriptive data describe?

A

Data that answers a question

120
Q

What is descriptive data used for?

A

To reduce a large quantity of data to a few elemental measurements that describe the data distribution

121
Q

What measurements are used in descriptive data?

A

Frequency, mean, median, mode
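
A minimal sketch of these descriptive measurements using Python's standard library (hypothetical responses):

```python
from collections import Counter
from statistics import mean, median, mode

# Hypothetical self-rated health scores (1-5) from program participants
scores = [3, 4, 4, 2, 5, 4, 3, 5, 4, 3]

print("Frequency:", Counter(scores))
print("Mean:", mean(scores))
print("Median:", median(scores))
print("Mode:", mode(scores))
```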

122
Q

Bivariable Analysis

A

Determines whether variables in database are correlated with each other

  • Compares 2+ groups to see whether a characteristic is similar/different
  • Find out whether program outcomes are significantly different between 2 groups OR 1 group over time (impact evaluation)
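
A minimal sketch of bivariable comparisons between two groups, using a t-test for a continuous outcome and a chi-square test for a categorical one (hypothetical data):

```python
from scipy.stats import chi2_contingency, ttest_ind

# Continuous outcome: hypothetical weekly activity minutes after the program
program_group = [150, 180, 200, 140, 170, 190]
comparison_group = [120, 130, 150, 110, 140, 125]
t, p = ttest_ind(program_group, comparison_group)
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

# Categorical outcome: counts of participants meeting a behavior goal
#               met goal  did not meet
contingency = [[40, 60],   # program group
               [25, 75]]   # comparison group
chi2, p, dof, expected = chi2_contingency(contingency)
print(f"chi-square: chi2 = {chi2:.2f}, p = {p:.4f}")
```
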
123
Q

Multivariable Analysis

A

Estimates size & direction of program’s effect in randomized & non-randomized study designs with treatment and control group
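
A minimal sketch of a multivariable model that estimates a program's effect while adjusting for a covariate, using statsmodels' formula interface (hypothetical variable names and data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical evaluation data: outcome score, treatment indicator, covariate
df = pd.DataFrame({
    "score": [70, 75, 80, 85, 60, 65, 72, 78, 68, 88],
    "treatment": [1, 1, 1, 1, 0, 0, 0, 0, 0, 1],
    "age": [30, 45, 28, 50, 33, 47, 29, 52, 41, 38],
})

# The treatment coefficient estimates the size & direction of the program's
# effect, adjusted for age
model = smf.ols("score ~ treatment + age", data=df).fit()
print(model.summary())
```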

124
Q

Mediation Analysis

A

Identification of pathway between health promotion program, its impact on hypothesized psychosocial mediators, & its effects on behavioral outcomes

125
Q

Cost-Effectiveness Analysis (CEA)

A

Determines differences between 2 programs based on what it costs for delivery of the programs

  • Relationship b/w program cost (input) & impact (output)
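
A minimal arithmetic sketch of the incremental cost-effectiveness ratio at the heart of CEA (hypothetical program costs and outcomes):

```python
# Hypothetical totals for two smoking-cessation programs -- illustrative only
cost_a, quitters_a = 50_000, 120   # Program A
cost_b, quitters_b = 30_000, 80    # Program B (comparator)

# Incremental cost-effectiveness ratio: extra cost per additional quitter
icer = (cost_a - cost_b) / (quitters_a - quitters_b)
print(f"ICER: ${icer:,.0f} per additional quitter")  # $500 per additional quitter
```
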
126
Q

How does cost-effectiveness analysis help decision makers?

A

Allocate limited resources while still achieving desired health benefits

127
Q

What are the cost expenditure categories in CEA?

A
  1. Developmental
  2. Production
  3. Implementation
  4. Evaluation
128
Q

SWOT Analysis

A

Assesses internal & external environment

129
Q

Steps for Conducting CEA

A
  1. Define problem & objectives
  2. Identify alternatives
  3. Describe production relationships
  4. Define perspective/viewpoint of CEA
  5. Identify, measure, & value cost
  6. Identify & measure effectiveness
  7. Discount future costs & effectiveness
  8. Conduct sensitivity analysis
  9. Address equity issues
  10. Use CEA results in decision making
130
Q

What analyses provide information on program cost effectiveness?

A
  1. Cost-benefit Analysis (CBA)
  2. Cost-minimization Analysis (CMA)
  3. Cost-utility Analysis (CUA)
  4. Sensitivity Analysis
131
Q

_____________ & _____________ aid in program cost effectiveness (in addition to types of analyses).

A
  1. Cost effectiveness ratio
  2. Value threshold
132
Q

Program costs

A
  • Direct costs
  • Intervention costs
  • Indirect costs
  • Nonintervention costs
  • Cost savings vs future costs as result of program/implementation
133
Q

What are direct costs of programs?

A

All goods, services, & other resources used to deliver intervention

134
Q

What are indirect costs of programs?

A

Lost or impaired ability to work or engage in leisure activities as a direct result of intervention

135
Q

What are intervention costs of programs?

A

All resources used in delivery of intervention

136
Q

What are nonintervention costs of programs?

A

Resource costs incurred because of the intervention but not part of delivering the intervention itself

137
Q

Cost savings vs Future costs of programs?

A

Cost Savings - savings that occur from prevention or alleviation of disease

Future costs - costs of disease unrelated to intervention

138
Q

Program Benefits

A
  • Tangible benefits
  • Intangible benefits
  • Economic
  • Personal health
  • Social
139
Q

What are tangible benefits?

A

Benefits that are quantifiable & measurable

140
Q

What are intangible benefits?

A

Non-monetary, subjective, or difficult to measure gains attributable to program intervention

141
Q

What are problematic outliers?

A

Outliers not representative of population

142
Q

What are beneficial outliers?

A

Outliers that are representative of population

143
Q

What are multivariate outliers?

A

Unusual combinations of scores on different variables

144
Q

What is missing data?

A

Observations that were intended to be made but were not

145
Q

What should be used to guide data analysis?

A
  • Research/evaluation questions
  • Level of measurement of data
  • Whether it is for research or evaluation
146
Q

What can correlates (relationships b/w variables where one is affected/dependent on another) be derived from?

A
  • Reach & effectiveness
  • Size of effect
147
Q

What is included in evidence-based practice approach?

A
  • Combination of best available scientific evidence & data
  • Program planning frameworks
  • Community is engaged
  • Programmatic evaluation
  • Disseminated results
148
Q

Why must evidence be interpreted in regard to health programs?

A

Allows for determination of significance & drawing relevant inferences to plan future programs/interventions

149
Q

Findings & scientific evidence from an evidence-based approach need to be incorporated into what areas of health programming?

A
  • Decision making
  • Policy development
  • Program implementation
150
Q

What do performance measures require?

A
  1. Object
  2. Standard - accepted level of performance expected
  3. Indicator - determines whether performance standard is achieved
  4. Measure - quantitative representation of capacity, process, or outcome
151
Q

What are delimitations?

A

Parameters or boundaries placed on study by researchers that help manage scope of study

152
Q

What are limitations?

A

Boundaries placed on study by factors or people other than researcher

153
Q

What are confounding variables?

A

Extraneous variables or factors outside scope of intervention that can impact results

154
Q

What types of systematic errors can occur with findings from evaluation/research?

A
  1. Sampling
  2. Design
  3. Implementation
  4. Analysis
155
Q

What are the research/evaluation errors HES should be able to identify?

A
  1. Sampling errors
  2. Lack of precision
  3. Variability of measurement
  4. Selection bias
  5. Instrumental bias
  6. Internal threats to validity
156
Q

What is a type II error?

A

Inferring program has no impact when it does (occurs when sample size is too small)
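
A minimal sketch of the sample-size side of Type II error, using statsmodels' power calculation (hypothetical effect size; 80% power and alpha = 0.05 are common conventions):

```python
from statsmodels.stats.power import TTestIndPower

# Participants needed PER GROUP to detect a medium effect (d = 0.5) with
# 80% power at alpha = 0.05; smaller samples raise the risk of a Type II error
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"About {n_per_group:.0f} participants per group")  # roughly 64
```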

157
Q

What is a type III error?

A

Rejecting program as ineffective when program was never implemented as intended or technology flaws undermined program effectiveness

158
Q

What is a type IV error?

A
  • Evaluation is conducted for sake of evaluation
  • Questions are asked about program that no one cares about
  • Answers are of no interest to decision makers
159
Q

What is a type V error?

A
  • Reporting intervention has statistically significant effect but effect is too small
  • No significance to decision makers
160
Q

What can cause lesser approach (compared to most rigorous available) to be used in either research or evaluation?

A
  1. Ethics
  2. Cost
  3. Politics
  4. Availability of resources
161
Q

Types of Evidence (most to least rigorous)

A
  1. Systematic reviews & meta-analysis
  2. Scientific literature
  3. Public health surveillance data
  4. Program evaluations
  5. Reports from community members & other stakeholders (e.g. needs assessment)
162
Q

Types of Designs (most to least rigorous)

A
  1. Systematic reviews
  2. RCT
  3. Cohort
  4. Case-control
  5. Case series, case reports
  6. Editorials, expert opinion
163
Q

What are things to monitor/evaluate to ensure efficiency & effectiveness?

A
  1. Simplicity
  2. Flexibility
  3. Acceptability
  4. Sensitivity (proportion of cases detected)
  5. Predictive value positive
  6. Representativeness
  7. Timeliness
  8. Stability
164
Q

What is Conceptual Use?

A
  • Evaluations produce new information about what goes on in the program through answers to questions raised about a program
  • Reveals insights about program (what they think of the program, understanding the importance of program) in addressing underlying problem
165
Q

What is Instrumental Use?

A

Decision makers change program (expand to other sites, terminate, change how it is implemented) based on answers to evaluation questions

166
Q

What is Persuasive Use?

A

Evaluation results used to support or criticize program

167
Q

What is process use?

A

Engagement of designing & conducting evaluation that may lead to better understanding & new ways of thinking about the program

168
Q

What is external use?

A

Benefits decision makers & administrators not connected with program by considering program in different setting & how to change similar program that is not performing well

169
Q

Attributes for Evaluation Recommendations

A
  • Defensible
  • Timely
  • Realistic
  • Targeted
  • Simple
  • Specific
170
Q

Guidelines for Developing Recommendations

A
  1. Invest time
  2. Start early
  3. Consider all issues as fair game
  4. Cast wide net
  5. Work closely with decision makers & program staff
  6. Decide whether recommendations should be as general or specific as possible
  7. Consider program context
  8. Consider program closure
  9. Describe expected benefits/costs
  10. Decide whether change should be incremental vs fundamental
  11. Avoid recommending another evaluation
171
Q

Implementation Documentation

A

Collecting data specified in process objectives carried out to demonstrate extent of program implementation to FUNDERS

172
Q

Implementation Assessment

A

Ongoing, nearly real-time activity of collecting data for purpose of making timely corrections or modifications to implementation through changes to elements of process theory

AKA Program or Process Monitoring

173
Q

What does implementation assessment provide?

A
  • Managerial guidance & oversight
  • Informs decision making to which aspects of organizational or service utilization plan are ineffective in accomplishing process objectives
174
Q

Implementation Evaluation

A

Comprehensive, retrospective determination of extent to which program was delivered as designed & whether variations may have held significant implications for effects of program

  • AKA Process evaluation
175
Q

What is a purpose statement?

A
  • Tool to identify what is to be learned from evaluation and/or research
  • Serves to focus & steer collection & analysis of data
176
Q

What are evaluation questions designed to do?

A
  1. Designate boundaries for evaluation
  2. Determine what areas of program are the focus
177
Q

What are evaluation indicators of program?

A

Information or statistics that provide evidence of progress toward outcomes

178
Q

What are different types of evaluation indicators?

A
  1. Baseline
  2. Target
179
Q

What are baseline indicators?

A

Value of indicator prior to implementation

180
Q

What are target indicators?

A

Expected value of indicator at a specific point in time

181
Q

Where are evaluation indicators created from?

A

Logic Model

182
Q

What are the characteristics of indicators to ensure credibility?

A
  1. Clearly linked to intervention outcome
  2. Presented in specific, measurable terms
  3. Appropriate for population being served
  4. Feasible given data collection, resources, & skills
  5. Valid & reliable to stakeholders
183
Q

What should HES/researcher consider when choosing method for data collection?

A
  • Specifically target most important elements of study
  • Clearly prove or disprove hypothesis
  • Appropriate to scale of study
  • Do not cost too much or require too much time
184
Q

HIPAA

A

Federal regulations for protection of privacy of participant data

185
Q

What does HIPAA protect?

A

All information in health records, billing, & conversations among individuals & healthcare providers

186
Q

Security Rule

A
  • Establishes rules for safeguarding information
  • Requires IRB
  • Guidance provided by Code of Federal Regulations
187
Q

How long is data stored according to security rule?

A

5-10 years

188
Q

Ethical Standards of Participant Data

A
  1. Respect for autonomy
  2. Social justice
  3. Promotion of good & avoidance of harm
  4. Have evaluation/research plan that protects privacy of participants
  5. Participant data must be stored, utilized, & disclosed ensuring protection of participant privacy
189
Q

What needs to be considered when proposing possible explanations of findings?

A
  • Standards
  • Analysis & Synthesis
  • Interpretation
  • Judgements
  • Recommendations
190
Q

When are evaluation findings justified?

A

When they are linked to evidence gathered & judged against agreed upon values or standards set by stakeholders

191
Q

Why is conducting meta-analysis important when synthesizing data?

A

Combination of results to answer research hypotheses

192
Q

Advantages of Meta-Analysis

A
  1. Ability to tell if results are more varied than expected
  2. Derived statistical testing of overall factors/effect size in related studies
  3. Potential generalization to population of studies
  4. Ability to control & use moderators to explain variations between studies
193
Q

What are 2 approaches to meta-analysis?

A

Vote-Counting & Classic (or Glassian) meta-analysis

194
Q

What is vote counting?

A

Defines findings as significantly positive/negative OR nonsignificant

195
Q

What is Classic or Glassian Meta-Analysis?

A
  • Defines questions to be examined
  • Collects studies
  • Codes study features & outcomes
  • Analyzes relations b/w study features & outcomes
196
Q

What data should be demonstrated/included when using it for policy analysis?

A
  • Burden of health of public
  • Priority over other issues
  • Pertinence at local level
  • Interventional benefits
  • Personalization of issue by using stories about how lives are impacted
  • Estimated intervention costs
197
Q

Limitations in Comparing Evaluation Results

A
  1. Examine & analyze data to look for patterns, recurring themes, similarities/differences
  2. Address patterns or lack of patterns that justify/don’t justify answers to evaluation questions
  3. Possible reasons for deviations in established patterns
  4. Study how patterns are supported/negated by previous studies or evaluations
198
Q

What are the parameters for addressing utilization of findings?

A
  • Study design
  • Preparation
  • Possible feedback
  • Follow through
  • Information distribution
  • Any possible further uses of information
199
Q

What aspects are included in an effective evaluation report?

A
  • Timely provision
  • Effective & detailed summary of how stakeholders were involved
  • List of strengths & limitations/weaknesses of findings
200
Q

what can Process Use provide?

A

Collaboration of different perspectives among interest groups