exam 1 Flashcards
functions of government (5)
- protect their sovereignty
- preserve order
- provide services for their citizens
- socialize their citizens (especially their younger citizens) to be supportive of the system
- collect taxes from their citizens
3 reasons for studying public policy
- scientific understanding: better understand how the world works and the impacts public policy can have on people/society
- professional advice: apply public policy knowledge in practical settings
- policy recommendations: help inform the people who are actually making the policy choices
Policy Process Model (6)
- problem identification: defining issues
- agenda setting: getting problems seriously considered by policymakers
- policy formulation: proposed policy actions (or inaction) to address problems
- policy legitimation: providing legal force to decisions
- policy implementation: putting the policy into action
- policy evaluation: assessment of policy or program
Weiss’ definition of policy evaluation
the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy
outcome (summative) evaluation
concerned with the end results of the program
process (formative) evaluation
focused not on the end results but on the program's practices and procedures
Covert Purposes of Evaluations (4)
- postponement
- ducking responsibility
- window dressing
- public relations
postponement
the initiator or client may be trying to delay a decision on a program
ducking responsibility
the client may be trying to have the evaluation make the decision for them
window dressing
the client may be trying to disguise their decision with the evaluation
public relations
the client may be trying to gain support for the program through the evaluation
4 Unfavorable Conditions
- program is unclear and unstable
- participants are unsure about the purpose of the program
- initiators are trying to eyewash or whitewash the program
- evaluation has a lack of resources
program is unclear and unstable (2)
- there doesn’t seem to be much adherence to the goals
- since it’s unclear what the program actually is, it might be unclear what the evaluation is or what it means
participants are unsure about the purpose of the program
a process evaluation might be warranted to try to figure out what's going on with the program
initiators are trying to eyewash or whitewash the program (3)
- eyewash: attempting to justify a program by selecting certain aspects of an evaluation to look good
- whitewash: trying to cover up by avoiding any objective appraisal
- might not have the necessary information to complete an evaluation properly
evaluation has a lack of resources
not just talking about money; time and people are also necessary
Whorely’s 3 Criteria of Evaluability Assessment
- the program should operate as intended
- it should be relatively stable
- it should seem to achieve positive results
ethics of evaluators (5 guiding principles)
- systematic inquiry
- competence
- integrity/honesty
- respect for people (treatment of people)
- responsibilities for general and public welfare
systematic inquiry
evaluators conduct systematic, data-based inquiries
competence
evaluators provide a competent performance for stakeholders
integrity/honesty
evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process
respect for people
evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders
responsibilities for general and public welfare
evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation
barriers to ethical analysis (4)
- technocratic ethos: relies on the things that can be measured
- too many designs: the chosen design frames the policy research at the expense of looking at other areas
- advocacy vs. analysis: your analysis and advocacy become blurred within the evaluation
- disturbing: don’t want to examine the results of the program
Weiss’ 4 I’s of Stakeholders
- ideology: each stakeholder's values, principles, and political beliefs shape their position on the program
- interests: each stakeholder has their own self-interest in the course of action
- information: stakeholders have different knowledge from their own experience/understanding of different reports
- institution: decisions are made in an organizational context
importance of program knowledge (5)
- to develop a good sense of the issues
- to formulate questions
- to understand and interpret the data
- to make sound recommendations
- for reporting
3 main steps of planning an evaluation (Posavac)
- identify the program and its stakeholders: who are the program personnel, sponsors, and who is being served by the program?
- become familiar with information needs: who wants the evaluation?
- plan the evaluation: examine the literature, plan the methodology, and present a written proposal
Whorley’s 5 categories of evaluation questions
- program process
- program outcome
- attributing outcomes to the program
- links between processes and outcomes
- explanations
program process questions (3)
- questions that are trying to understand what is going on in the program
- aligned to the program’s design
- might be more open-ended
program outcome questions (3)
- questions focused on the impact of the program
- designed around the client's situation
- these questions normally come from the program’s goals or various stakeholders
attributing outcomes to the program questions (2)
- questions about whether the changes observed are actually due to the program
- trying to understand the extent to which the program was responsible for changes
links between processes and outcomes questions
questions focused on what processes or features of a program are related to different outcomes
explanations questions
questions designed to help understand why the program achieved its results
sources of information for data collection (7)
- informal interviews
- observations
- formal interviews
- written questionnaires
- program records
- data from other institutions
- other sources (focus groups, testing, more documents)
4 general levels of measurement
- nominal (special case dichotomous)
- ordinal
- interval
- ratio (the four levels are contrasted in the sketch below)
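As an aside (my illustration, not from the course material), a minimal Python sketch can contrast the four levels by showing which summary statistics each one supports; the variables are hypothetical.

```python
from statistics import mode, median, mean

# Hypothetical survey variables, one per level of measurement
party = ["Dem", "Rep", "Ind", "Dem"]       # nominal: unordered categories (dichotomous = 2-category special case)
satisfaction = [1, 3, 2, 3]                # ordinal: ordered, but gaps are not equal
temp_f = [68.0, 72.5, 71.0, 69.5]          # interval: equal gaps, no true zero
income = [42_000, 55_000, 61_000, 38_000]  # ratio: true zero, so ratios are meaningful

print(mode(party))            # nominal permits only the mode
print(median(satisfaction))   # ordinal adds the median
print(mean(temp_f))           # interval adds the mean
print(income[1] / income[0])  # ratio makes "1.3x as much" meaningful
```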
program outcome measures in terms of effects on (4)
- people served: what are the changes in attitudes, values, behavior, and skills
- agencies: is there a change in the agency or institution
- a larger system: is there a change in the network of agencies or community
- the public: changes in the public’s views, attitudes, and perceptions
2 components of public interest
- there is the instrumental achievement of a particular set of objectives that represent widely shared interests
- there is the adherence to a set of procedures that, if followed in selecting and pursuing objectives by society, yields an acceptable outcome for the group
instrumental rationality
program evaluation steadfastly acts on the basis that rationally connecting objectives, means, and outcomes can improve the outcomes and, consequently, the public interest
Emison’s key points for the Client (5)
- know what your client’s interests are
- know what success is
- have a principal who can do something
- put the evaluation in a management context
- do the right thing
Emison’s 4 C’s
- client
- content
- control
- communication
client (3)
- the client provides the purpose for the conduct of the evaluation
- the key to success is understanding our client
- understanding the client’s interests is essential to having a program evaluation that goes beyond study to action
content (2)
- governs the substantive nature of the evaluation
- understand how the important features of the program work
control (2)
- successful program evaluations are managed well
- controlling the analysis from the outset can ensure that a product will emerge for the client to consider
communication
we must communicate the evaluation in a manner that makes it easy for the client to understand the content, agree with the conclusions, and direct that actions be taken
Emison’s key points for Content (5)
- build the analysis on facts
- align your evidence and your conclusions
- simplicity always trumps elegance
- don’t let the illusion of the perfect drive out the reality of the good
- never underestimate the power of accurate description
validity
how well does the indicator measure the concept; allows the evaluator to have more confidence in what is being measured
reliability
if you were to repeatedly measure this concept, would you get the same results?
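A minimal test-retest sketch (hypothetical scores, my illustration): administering the same instrument twice and correlating the two waves is one common way to check reliability; values near 1.0 suggest the measure yields consistent results.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores for six participants measured twice with the same instrument
wave_1 = [12, 15, 9, 20, 14, 17]
wave_2 = [13, 14, 10, 19, 15, 16]

# Test-retest reliability: Pearson correlation between the two waves
r = correlation(wave_1, wave_2)
print(f"test-retest r = {r:.2f}")
```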
characteristics of measurement (7)
- validity
- reliability
- direction
- variance/sensitivity to differences
- currency/salience
- access
- bias of the data collection
possible stakeholders and their motivations in an evaluation report (4)
- legislature: influence program direction, gain knowledge of the program, lay the basis for long-term change
- political executive: influence program direction, justify prior decisions, draw attention to the program
- program manager: improve program operations, justify prior decisions, draw attention to the program
- program client: influence program direction, lay the basis for long-term change
program theory (3)
- can be seen as preceding and then evolving and expanding into the Theory of Change, which is more relational and holistic
- it emerged from the need to better understand a program's rationale and, more importantly, the chain of causality that leads to its outcome(s)
- the assumption is that there is a logic that leads to the achievement(s) and that understanding this logic is paramount to understanding the success and failure of the program
main characteristics of theories of change (4)
- logical thinking and critical reflection
- flexibility and openness
- innovation and potential improvement in programs
- performance management
advantages of a theory of change (5)
- ownership
- relevancy
- focus
- value for money
- measurement
TOC advantage: ownership (2)
- the TOC provides a unique moment of stakeholder participation where all those who have a stake in the program can meaningfully contribute to it from the design and conceptualization stage
- increasing ownership increases commitment and collective synergies and hence the program’s overall chances of success
TOC advantage: relevancy
planning with others and with an ear firmly to the ground is hugely helpful in ensuring that the program meets the needs of its targets in context and will, therefore, be relevant
TOC advantage: focus (2)
- a TOC planning process begins with a definition of the desired change, which means that everything thereafter is defined and decided in reference to that change
- the desired change is the focus, and the focus determines the means
TOC advantage: value for money (2)
- programs that do not think through the elements a TOC 'forces' a planning process to work through can deliver less value for money
- a lack of focus on the ultimate change may lead to the implementation of multiple activities that in the end are a distraction and do not contribute to generating impact
TOC advantage: measurement (2)
- it helps evaluators ensure that they are measuring the right activities and that they have developed appropriate research tools
- articulating a theory of change at the outset and gaining agreement on it by all stakeholders reduces, but does not eliminate, the problems associated with causal attribution of impact
key elements of an evaluation plan (10)
- introduction and background to the program
- a summary of relevant, previous evaluations: their findings and the methodologies they employed
- evaluation questions
- overall evaluation design
- methods
- ethics
- timescales
- main outputs
- project management
- the evaluator(s)
good evaluation questions must be: (3)
- reasonable and appropriate
- answerable
- contain the criteria for program performance
principles related to participants’ rights (3)
- voluntary participation
- do no harm
- confidentiality and anonymity
participants’ rights: voluntary participation (2)
- participants must willingly participate in the evaluations, namely in the workshops, interviews, focus groups, and all the other situations by which data and information are to be collected
- the option not to participate should be made clear and available to them as an equally valid and respected option
participants’ rights: do no harm (2)
- participants, contributors, and evaluation stakeholders more broadly will incur no harm if and when they decide to participate
- this principle needs to be mainstreamed throughout and must influence the overall evaluation rationale and process, from the selection of methods to their actual implementation
participants’ rights: confidentiality and anonymity (2)
- regardless of the information they provide, they will not be identified as the source
- no statements or other type of information that may identify participants should be shared with others or publicly displayed by the evaluator
types of research design (4)
- informal study design (self-evaluation & expert judgement)
- formal study design
- quasi-experimental designs (one-group design, extensions of the one-group design, comparison group studies)
- experimental design
8 threats to internal validity
- history: events could happen between measurements
- maturation: participants change and age
- testing: taking a pretest can itself change later responses; participants may also react to being measured or observed (the Hawthorne effect)
- instrumentation: changes in measurement (different observers may observe in different ways)
- regression: groups selected for extreme scores tend to drift back toward the mean when remeasured; with only one measurement you cannot tell (see the simulation after this list)
- selection: bias introduced by how the groups are selected
- mortality: loss of participants along the way
- interaction of these factors
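The regression threat is easy to see in a small simulation (my illustration, hypothetical numbers): participants selected for extreme pretest scores drift back toward the mean on retest even when no treatment is applied.

```python
import random

random.seed(1)

# True ability is fixed; each observed score = ability + measurement noise
abilities = [random.gauss(50, 5) for _ in range(10_000)]
pretest = [a + random.gauss(0, 10) for a in abilities]
posttest = [a + random.gauss(0, 10) for a in abilities]  # no treatment at all

# Select the lowest pretest scorers, as a remedial program might
extreme = [i for i, score in enumerate(pretest) if score < 35]

mean_pre = sum(pretest[i] for i in extreme) / len(extreme)
mean_post = sum(posttest[i] for i in extreme) / len(extreme)
print(f"extreme group pretest mean:  {mean_pre:.1f}")   # well below 50
print(f"extreme group posttest mean: {mean_post:.1f}")  # closer to 50, with no treatment
```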
history of evaluation research: pre WWII (3)
- 1912: comparison between autopsies and medical diagnoses
- 1933: 8 year study - outcomes of students in traditional vs. progressive schools
- studies done by academics and interest groups
history of evaluation research: post WWII
a slew of social scientists had been hired by the government under FDR after the Great Depression; the government began to participate in evaluation research
history of evaluation research: war on poverty (3)
- LBJ - avalanche of social programs
- large-scale government-funded evaluations
- federal government funding of social programs, requiring systematic evaluation to know how money is being spent (ex: Elementary and Secondary Education Act (ESEA))
history of evaluation research: development of the field (3)
- late 60s-early 70s: evaluation emerges as its own field; government officials make use of cost-benefit analysis
- by end of 70s: at federal level, evaluation is pretty commonplace, basically every cabinet office has its own evaluation office
- institutionalization: university research centers devoted to evaluation; for-profit enterprises
history of evaluation research: Reagan administration (2)
- cuts funding to social services → fewer evaluations
- evaluations sponsored by the federal government focused on cost reduction and eliminating services
history of evaluation research: Clinton administration (2)
- renewed emphasis on evaluation → focus on effectiveness returned
- conservatives emphasize efficiency and ability to reduce size of government/services; liberals emphasize effectiveness and improving situations
significance of the Government Performance and Results Act (1993) to evaluation (4)
- Clinton administration: requires that federal agencies have performance measurements/targets (a requirement that has since trickled down to state agencies); promotes accountability for programs and their use of resources
- many are able to make a career out of being an evaluator
- internationalization: more international organizations established to formulate ethics of evaluation research
- diversifying: more diverse evaluators, non-profit organizations, lack of training
Emison’s purposes of evaluation (2)
- to advance the public interest
- advance achievement of public objectives while observing appropriate procedures within a democratic society
traditional role of evaluators (6)
- objective outsider
- fact-seeking
- valid report on program
- carefully avoid bias
- methodology focus (absolute neutrality is not possible)
- recommendations based on data analysis, not from a stakeholder perspective
different roles of an evaluator
objective outsider (detached, traditional) vs. co-investigator (participative)
empowerment evaluator (2)
- late 80s-90s, stakeholders are in charge of the evaluation
- evaluators offering help from the sidelines
collaborative evaluator (4)
- most common
- critical friend (joint venture between practitioner and evaluator)
- evaluator brings research skills to the table while program personnel bring knowledge of the program
- evaluator doesn’t make recommendations but encourages practitioners to reflect on the data
stakeholder evaluator (2)
- convenes various stakeholders, structured engagement with stakeholders
- evaluator is in charge of the study, but is seeking stakeholder input for recommendations
exogenous factors and ethics per Fox, Grimm, & Caldeira (3)
- literacy level
- power relations
- intercultural communication
considerations in developing program process measures (8)
areas to consider:
1. types of programs offered
2. characteristics of the staff and clients
3. frequency of service
4. duration of service
5. intensity
6. size of group receiving service
7. stability of the service offered
8. quality of the service
considerations in developing program input measures (7)
raw materials of the program include:
1. budget
2. nature of the staff
3. location
4. plan of activities
5. methods of service
6. purposes
7. client eligibility standards
reasons experimental design is considered the hallmark of scientific research (2)
- avoids many issues of internal validity
- allows the researcher to establish a strong causal relationship (a random-assignment sketch follows below)
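A minimal random-assignment sketch (hypothetical participant IDs, my illustration): shuffling and splitting the enrollee list means the treatment and control groups differ only by chance, which is what lets an experiment rule out most threats to internal validity.

```python
import random

random.seed(42)  # fixed seed so the assignment is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical enrollees
random.shuffle(participants)

# First half to treatment, second half to control
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

print("treatment:", treatment)
print("control:  ", control)
```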
problems with randomization of assignment of groups (4)
- refusal to participate
- nonattendance
- attrition: participants may drop out of the program (including through death)
- outside interference
external validity (2)
- whether results from the evaluation can be generalized to other situations
- more formally, it is the validity with which we can infer that a causal relationship which we observe during the evaluation can be generalized across different types of persons, settings, and times
internal validity (2)
- whether the evaluation can demonstrate plausibly a causal relationship between the treatment and the outcome
- in other words, is the relationship between an independent and dependent variable a causal relationship?
importance of formality and rigor in evaluative information (4)
- evaluation is a formal and rigorous process (separates it from informal evaluation)
- informal evaluations by the people running the program tend to be overly optimistic
- bringing rationality to policymaking
- market situations: in markets, performance is evaluated by the market itself; public programs lack that feedback, so formal evaluation fills the gap