Exam 1 Flashcards
Evaluation Chapters 1-3
Program evaluation
Application of social research methods to systematically investigate the effectiveness of social intervention programs in ways that are adapted to their political and organizational environments and are designed to inform social action to improve social conditions
Main tasks of program evaluation
Assess the effectiveness of social programs
Identify the factors that drive or undermine their effectiveness
Confirmation bias
The tendency to see things in ways that favor preexisting beliefs
Relativity of program effects
With rare exceptions, at least some program participants will show improvement on the outcomes the program targets. But that does not necessarily mean these gains were caused by participation in the program; improvement for at least some individuals is quite likely to have occurred anyway in the natural course of events, even without the help of the program
The two arms of evaluation
Description of program performance
Standards or criteria for judgement of program performance
Five domains of Evaluation
Need for the program
Program theory and design
Program process
Program impact
Program efficiency
Needs Assessment
First step in planning a new program
Used to systematically describe and diagnose social needs
Needs assessment may also be appropriate to examine whether an established program is responsive to the current needs of its target population and provide guidance for improvement.
Also looks at the extent of an issue.
Assessment of Program Theory and Design
Must reflect valid assumptions about the nature of the problem
Must represent a feasible approach
Often in the form of a logic model
Assessment of Program Process
Evaluates the fidelity and quality of a program's implementation.
May be done as a freestanding evaluation of the activities and operations of the program.
Program monitoring
When the process evaluation is an ongoing function that occurs regularly, it will usually be referred to as program monitoring.
May also include information about the status of program participants on targeted outcomes after they have completed the program; this is known as outcome monitoring.
Effectiveness of the Program: Impact Evaluation
Gauged by the change it produces in outcomes (EG: new behavior or mindset)
Asks whether the desired outcomes were actually attained.
A program's effects depend in large part on whether it adequately operationalizes and implements an effective theory
Cost Analysis and Efficiency Assessment
Cost analysis
Efficiency assessment
Cost-benefit or cost-effectiveness analysis
Asks whether the program could achieve its results at a lower cost
Implementation failure
When the effects are null or weak because the program activities assumed necessary to bring about the desired improvements did not actually occur as intended
Theory failure
When the program conceptualization and design are not capable of generating the desired outcomes no matter how well implemented
Evaluation Sponsor
Person who commissions the evaluation (Jeanna Somm)
Stakeholders
Individuals, groups, or organizations with significant interest in program (Carlisle community members and those involved with the program). All those potentially affected.
Formative Evaluation
Intended to improve a program
Summative Evaluation
Intended to make a summary judgment of a program's performance, usually to determine whether the program should be:
Discontinued
Changed
Continued
disseminated/expanded
Types of stakeholder-evaluator relationships
Independent evaluation
Participatory or collaborative evaluation
Empowerment evaluation
Independent evaluation
The evaluator conducts the evaluation independently of program stakeholders
Participatory or collaborative evaluation
The evaluation sponsor, program staff, and other stakeholders work more closely with the evaluator to develop questions and methods of evaluation. Also allows for feedback.
Empowerment evaluation
Initially participatory, with the intention of teaching stakeholders to conduct evaluations themselves
Cost-benefit and cost-efficiency
Efficiency assessments may take the form of cost-benefit analysis or cost-effectiveness analysis, asking, respectively, whether a program produces sufficient benefits in relation to its costs and whether other interventions or delivery systems can produce the benefits at a lower cost.
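The distinction above can be sketched with a bit of arithmetic. This is a hypothetical illustration; all figures are invented for the example.

```python
# Hypothetical program figures (invented for illustration)
program_cost = 200_000.0        # total program cost in dollars
monetized_benefits = 320_000.0  # dollar value assigned to outcomes
outcome_units = 400             # e.g., participants reaching the target outcome

# Cost-benefit analysis: do the benefits exceed the costs?
benefit_cost_ratio = monetized_benefits / program_cost  # 1.6

# Cost-effectiveness analysis: cost per unit of outcome,
# which can be compared across alternative programs
cost_per_outcome = program_cost / outcome_units  # 500.0 dollars per unit

print(benefit_cost_ratio, cost_per_outcome)
```

A ratio above 1 suggests benefits outweigh costs; the cost-per-outcome figure only becomes informative when compared against an alternative program's figure.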
Incidence
Number of new instances of a particular problem in a specified area or context during a specified time
Prevalence
Total number of existing cases in that area at a specified time
Rate
The occurrence or existence of a particular condition expressed as a proportion of units in the relevant population (e.g., deaths per 1,000 adults)
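The three measures above can be sketched with invented numbers (a hypothetical population; no real data):

```python
# Hypothetical figures for illustration only
population = 50_000        # adults in the area
new_cases_this_year = 120  # incidence: new instances during the period
existing_cases = 900       # prevalence: all current cases at a point in time

# Rate: occurrences expressed per 1,000 units of the relevant population
incidence_rate = new_cases_this_year / population * 1_000   # 2.4 per 1,000
prevalence_rate = existing_cases / population * 1_000       # 18.0 per 1,000

print(incidence_rate, prevalence_rate)
```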
Probability sampling
A sampling method in which every unit has a known probability of selection, so characteristics of the sample can be used to estimate the characteristics of the full population
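A minimal simulation of this idea, using a made-up population of households flagged as in need or not:

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical population: 10,000 households, flagged 1 if in need
# True proportion in need = 2,500 / 10,000 = 0.25
population = [1] * 2_500 + [0] * 7_500
random.shuffle(population)

# Simple random sample: a probability sample where every unit
# has an equal, known chance of selection
sample = random.sample(population, 500)
estimate = sum(sample) / len(sample)

print(estimate)  # close to the true proportion of 0.25
```

With a probability sample of this size, the estimate will typically fall within a few percentage points of the true value, and that margin of error can itself be quantified.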
A social indicator
A periodic measurement designed to track the course of a social condition over time
Forecasting Needs
Can estimate the magnitude of a social problem in the future
Forecasting future trends can be risky, as unforeseen factors can alter outcomes
population at risk
Those persons or units with a significant probability of experiencing or developing the condition to which the program is designed to respond. Thus, the population at risk in birth control programs is usually defined as women of childbearing age.
Population in need
Those identified through direct assessments of their condition. For instance, reliable and valid literacy tests can be used to identify functionally illiterate persons, who constitute the population in need for adult literacy programs.
Why Qualitative Methods for Describing Needs
Useful for obtaining detailed, textured knowledge of the specific needs in question, and for specifying how to appropriately address those needs for the target population.
Can range from interviews of a few persons to elaborate and detailed ethnographic research.
What Qualitative Methods for Describing Needs
Focus groups.
Snowball sampling.
Types of program theories
Articulated program theory
Implicit program theory
Articulated Program Theory
Specific
Based on behavior or behavior change theory
Social science based
(CASEL SEL)
Implicit Program Theory
When the underlying assumptions about how program services and practices are presumed to accomplish their purposes have not been fully articulated and recorded (SPY SEL)
Evaluability assessments
Can this program be evaluated?
What is the program doing, and what is it trying to accomplish?
Are its goals and objectives well defined?
Say a program wants to develop grit. But what is grit? How do you develop it? How do you measure it?
Must also assess stakeholders: Do they want an evaluation? Will they actually use the data?
Three Ways to Describe a Program
Impact theory
Service utilization plan
Organization plan
Impact theory (what we are doing for SPY)
Explains a causal theory
Mechanisms of change
EG: SPY programming prevents summer slide
Impact Theory Failure
Occurs when the assumed mechanisms of change are false
EG: Scared Straight
Does the research literature support the program theory? SPY is supported, but Scared Straight is not.
Service utilization plan
Service audit
Specifies how much of which services are needed to activate the mechanisms of change
Expressed from the point of view of the service target
What will happen to a student who joins SPY?
Organization plan
What resources are needed on the front line and behind the scenes?
What kind of staff? Training? Experience?
EG: SPY gives SEL training
Expressed from the viewpoint of management
Eliciting Program Theory
Define program boundaries
Specify goals and objectives
List program functions, components and activities
Develop a flow chart that links logic of what the program does to what the program hopes to accomplish
Assessing Program Theory
Does program match needs assessment?
Is the program logical?
Does program match what we know from research?
Black box evaluation
If program process theory is also poorly specified, it will not even be possible to adequately describe the nature of the program that produced, or failed to produce, the outcomes of interest.