Exam 1 Flashcards

1
Q

functions of government (5)

A
  1. protect their sovereignty
  2. preserve order
  3. provide services for their citizens
  4. socialize their citizens (especially their younger citizens) to be supportive of the system
  5. collect taxes from their citizens
2
Q

3 reasons for studying public policy

A
  1. scientific understanding: better understand how the world works and the impacts public policy can have on people/society
  2. professional advice: have a practical application of the knowledge in public policy
  3. policy recommendations: help inform the people who are actually making the policy choices
3
Q

Policy Process Model (6)

A
  1. problem identification: defining issues
  2. agenda setting: getting problems seriously considered by policymakers
  3. policy formulation: proposed policy actions (inactions) to address problems
  4. policy legitimation: providing legal force to decisions
  5. policy implementation: putting the policy into action
  6. policy evaluation: assessment of policy or program
4
Q

Weiss’ definition of policy evaluation

A

the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy

5
Q

outcome (summative) evaluation

A

concerned with the end results of the program

6
Q

process (formative) evaluation

A

focused not on the end results but the program in practice and procedure

7
Q

Covert Purposes of Evaluations (4)

A
  1. postponement
  2. ducking responsibility
  3. window dressing
  4. public relations
8
Q

postponement

A

the initiator or client may be trying to delay a decision on a program

9
Q

ducking responsibility

A

the client may be trying to have the evaluation make their decision

10
Q

window dressing

A

the client may be trying to disguise their decision with the evaluation

11
Q

public relations

A

the client may be trying to gain support for the program through the evaluation

12
Q

4 Unfavorable Conditions

A
  1. program is unclear and unstable
  2. participants are unsure about the purpose of the program
  3. initiators are trying to eyewash or whitewash the program
  4. evaluation has a lack of resources
13
Q

program is unclear and unstable (2)

A
  1. there doesn’t seem to be much adherence to the goals
  2. since it’s unclear what the program actually is, it might be unclear what the evaluation is or what it means
14
Q

participants are unsure about the purpose of the program

A

a process evaluation might be warranted to try and figure out what’s going on with the program

15
Q

initiators are trying to eyewash or whitewash the program (3)

A
  1. eyewash: attempting to justify a program by selecting only the aspects of an evaluation that look good
  2. whitewash: trying to cover up by avoiding any objective appraisal
  3. might not have the necessary information to complete an evaluation properly
16
Q

evaluation has a lack of resources

A

not just talking about money; time and people are also necessary

17
Q

Wholey’s 3 Criteria of Evaluability Assessment

A
  1. the program should operate as intended
  2. it should be relatively stable
  3. it should seem to achieve positive results
18
Q

ethics of evaluators (5 guiding principles)

A
  1. systematic inquiry
  2. competence
  3. integrity/honesty
  4. respect for people (treatment of people)
  5. responsibilities for general and public welfare
19
Q

systematic inquiry

A

evaluators conduct systematic, data-based inquiries

20
Q

competence

A

evaluators provide a competent performance for stakeholders

21
Q

integrity/honesty

A

evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process

22
Q

respect for people

A

evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders

23
Q

responsibilities for general and public welfare

A

evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation

24
Q

barriers to ethical analysis (4)

A
  1. technocratic ethos: relies on the things that can be measured
  2. too many designs: frames the policy research at the expense of looking at other areas
  3. advocacy vs. analysis: your analysis and advocacy become blurred within the evaluation
  4. disturbing: don’t want to examine the results of the program
25
Q

Weiss’ 4 I’s of Stakeholders

A
  1. ideology: stakeholders bring their own values, principles, and political philosophies to decisions
  2. interests: each stakeholder has their own self-interest in the course of action
  3. information: stakeholders have different knowledge from their own experience/understanding of different reports
  4. institution: decisions are made in an organizational context
26
Q

importance of program knowledge (5)

A
  1. to develop a good sense of the issues
  2. to formulate questions
  3. to understand and interpret the data
  4. to make sound recommendations
  5. for reporting
27
Q

3 main steps of planning an evaluation (Posavac)

A
  1. identify the program and its stakeholders: who are the program personnel, sponsors, and who is being served by the program?
  2. become familiar with information needs: who wants the evaluation?
  3. plan the evaluation: examine the literature, plan the methodology, and present a written proposal
28
Q

Wholey’s 5 categories of evaluation questions

A
  1. program process
  2. program outcome
  3. attributing outcomes to the program
  4. links between processes and outcomes
  5. explanations
29
Q

program process questions (3)

A
  1. questions that are trying to understand what is going on in the program
  2. aligned to the program’s design
  3. might be more open
30
Q

program outcome questions (3)

A
  1. questions focused on the impact of the program
  2. based on the client’s situation
  3. these questions normally come from the program’s goals or various stakeholders
31
Q

attributing outcomes to the program questions (2)

A
  1. questions aimed at showing that the changes observed are due to the program
  2. trying to understand the extent to which the program was responsible for changes
32
Q

links between processes and outcomes questions

A

questions focused on what processes or features of a program are related to different outcomes

33
Q

explanations questions

A

questions designed to help understand why the program achieved its results

34
Q

sources of information of data collection (7)

A
  1. informal interviews
  2. observations
  3. formal interviews
  4. written questionnaires
  5. program records
  6. data from other institutions
  7. other sources (focus groups, testing, more documents)
35
Q

4 general levels of measurement

A
  1. nominal (special case: dichotomous)
  2. ordinal
  3. interval
  4. ratio
36
Q

program outcome measures in terms of effects on (4)

A
  1. people served: what are the changes in attitudes, values, behavior, and skills
  2. agencies: is there a change in the agency or institution
  3. a larger system: is there a change in the network of agencies or community
  4. the public: changes in the public’s views, attitudes, and perceptions
37
Q

2 components of public interest

A
  1. there is the instrumental achievement of a particular set of objectives that represent widely shared interests
  2. there is the adherence to a set of procedures that, if followed in selecting and pursuing objectives by society, yields an acceptable outcome for the group
38
Q

instrumental rationality

A

program evaluation steadfastly acts on the basis that rationally connecting objectives, means, and outcomes can improve the outcomes and, consequently, the public interest

39
Q

Emison’s key points for the Client (5)

A
  1. know what your client’s interests are
  2. know what success is
  3. have a principal who can do something
  4. put the evaluation in a management context
  5. do the right thing
40
Q

Emison’s 4 C’s

A
  1. client
  2. content
  3. control
  4. communication
41
Q

client (3)

A
  1. the client provides the purpose for the conduct of the evaluation
  2. the key to success is understanding our client
  3. understanding the client’s interests is essential to having a program evaluation that goes beyond study to action
42
Q

content (2)

A
  1. governs the substantive nature of the evaluation
  2. understand how the important features of the program work
43
Q

control (2)

A
  1. successful program evaluations are managed well
  2. controlling the analysis from the outset can ensure that a product will emerge for the client to consider
44
Q

communication

A

we must communicate the evaluation in a manner that makes it easy for the client to understand the content, agree with the conclusions, and direct that actions be taken

45
Q

Emison’s key points for Content (5)

A
  1. build the analysis on facts
  2. align your evidence and your conclusions
  3. simplicity always trumps elegance
  4. don’t let the illusion of the perfect drive out the reality of the good
  5. never underestimate the power of accurate description
46
Q

validity

A

how well the indicator measures the concept; validity allows the evaluator to have more confidence in what is being measured

47
Q

reliability

A

if you were to repeatedly measure this concept, would you get the same results?

48
Q

characteristics of measurement (7)

A
  1. validity
  2. reliability
  3. direction
  4. variance/sensitivity to differences
  5. currency/salience
  6. access
  7. bias of the data collection
49
Q

possible stakeholders and their motivations in an evaluation report

A
  1. legislature - influence program direction, gain knowledge of program, lay basis for long-term change
  2. political executive - influence program direction, justify prior decision, draw attention to program
  3. program manager - improve operational program, justify prior decision, draw attention to program
  4. program client - influence program direction, lay basis for long-term change
50
Q

program theory (3)

A
  1. can be seen as preceding and then evolving and expanding into the Theory of Change, which is more relational and holistic
  2. it emerged from the need to better understand programs’ rationale and, more importantly, the chain of causality that leads to their outcome(s)
  3. the assumption is that there is a logic that leads to the achievement(s) and that understanding this logic is paramount to understanding the success and failure of the program
51
Q

main characteristics of theories of change (4)

A
  1. logical thinking and critical reflection
  2. flexibility and openness
  3. innovation and potential improvement in programs
  4. performance management
52
Q

advantages of a theory of change (5)

A
  1. ownership
  2. relevancy
  3. focus
  4. value for money
  5. measurement
53
Q

TOC advantage: ownership (2)

A
  1. the TOC provides a unique moment of stakeholder participation where all those who have a stake in the program can meaningfully contribute to it from the design and conceptualization stage
  2. increasing ownership increases commitment and collective synergies and hence the program’s overall chances of success
54
Q

TOC advantage: relevancy

A

planning with others and with an ear firmly to the ground is hugely helpful in ensuring that the program meets the needs of its targets in context and will, therefore, be relevant

55
Q

TOC advantage: focus (2)

A
  1. a TOC planning process begins with a definition of the desired change, which means that everything thereafter will be defined and decided with reference to this change
  2. the desired change is the focus, and the focus determines the means
56
Q

TOC advantage: value for money (2)

A
  1. programs that do not think through the elements a TOC ‘forces’ a planning process to work through can deliver less value for money
  2. a lack of focus on the ultimate change may lead to implementing multiple activities that are, in the end, a distraction and do not contribute to generating impact
57
Q

TOC advantage: measurement (2)

A
  1. it helps evaluators ensure that they are measuring the right activities and that they have developed appropriate research tools
  2. articulating a theory of change at the outset and gaining agreement on it by all stakeholders reduces, but does not eliminate, the problems associated with causal attribution of impact
58
Q

key elements of an evaluation plan (10)

A
  1. introduction and background to the program
  2. a summary of relevant, previous evaluations: their findings and the methodologies they employed
  3. evaluation questions
  4. overall evaluation design
  5. methods
  6. ethics
  7. timescales
  8. main outputs
  9. project management
  10. the evaluator(s)
59
Q

good evaluation questions must be: (3)

A
  1. reasonable and appropriate
  2. answerable
  3. contain the criteria for program performance
60
Q

principles related to participants’ rights (3)

A
  1. voluntary participation
  2. do no harm
  3. confidentiality and anonymity
61
Q

participants’ rights: voluntary participation (2)

A
  1. participants must willingly participate in the evaluations, namely in the workshops, interviews, focus groups, and all the other situations by which data and information are to be collected
  2. the option not to participate should be made clear and available to them as an equally valid and respected option
62
Q

participants’ rights: do no harm (2)

A
  1. participants, contributors, and evaluation stakeholders more broadly will incur no harm if and when they decide to participate
  2. this principle needs to be mainstreamed throughout and influence the overall evaluation rationale and process - from the selection of methods to their actual implementation
63
Q

participants’ rights: confidentiality and anonymity (2)

A
  1. regardless of the information they provide, they will not be identified as the source
  2. no statements or other type of information that may identify participants should be shared with others or publicly displayed by the evaluator
64
Q

types of research design (4)

A
  1. informal study design (self-evaluation & expert judgement)
  2. formal study design
  3. quasi-experimental designs (one-group design, extensions of the one-group design, comparison group studies)
  4. experimental design
65
Q

8 threats to internal validity

A
  1. history: events could happen between measurements
  2. maturation: participants change and age
  3. testing: Hawthorne effect
  4. instrumentation: changes in measurement (different observers may observe in different ways)
  5. regression: may be measuring at an extreme position (wouldn’t know with only one measurement)
  6. selection: bias introduced by how groups are selected
  7. mortality: loss of participants along the way
  8. interaction of these factors
66
Q

history of evaluation research: pre WWII (3)

A
  1. 1912: comparison between autopsies and medical diagnoses
  2. 1933: Eight-Year Study - outcomes of students in traditional vs. progressive schools
  3. studies done by academics and interest groups
67
Q

history of evaluation research: post WWII

A

a slew of social scientists hired by the government under FDR after the Great Depression; government began to participate in evaluation research

68
Q

history of evaluation research: war on poverty (3)

A
  1. LBJ - avalanche of social programs
  2. large-scale government-funded evaluations
  3. federal government funding of social programs, requiring systematic evaluation to know how money is being spent (ex: Elementary Secondary Education Act (ESEA))
69
Q

history of evaluation research: development of the field (3)

A
  1. late 60s-early 70s: evaluation as its own field, government officials, use of cost benefit analysis
  2. by the end of the 70s: at the federal level, evaluation is commonplace; nearly every cabinet office has its own evaluation office
  3. institutionalization: university research centers devoted to evaluation; for-profit enterprises
70
Q

history of evaluation research: Reagan administration (2)

A
  1. cuts funding to social services -> fewer evaluations
  2. evaluations sponsored by the federal government focused on cost reduction and eliminating services
71
Q

history of evaluation research: Clinton administration (2)

A
  1. renewed emphasis on evaluation -> focus on effectiveness returned
  2. conservatives emphasize efficiency and ability to reduce size of government/services; liberals emphasize effectiveness and improving situations
72
Q

significance of the Government Performance and Results Act (1993) to evaluation (4)

A
  1. Clinton administration - requires that federal agencies have performance measurements/targets (has now trickled down to state agencies); accountability for programs and the use of resources
  2. many are able to make a career out of being an evaluator
  3. internationalization: more international organizations established to formulate ethics of evaluation research
  4. diversifying: more diverse evaluators, non-profit organizations, lack of training
73
Q

Emison’s purposes of evaluation (2)

A
  1. to advance the public interest
  2. advance achievement of public objectives while observing appropriate procedures within a democratic society
74
Q

traditional role of evaluators (6)

A
  1. objective outsider
  2. fact-seeking
  3. valid report on program
  4. carefully avoid bias
  5. methodology focus (absolute neutrality is not possible)
  6. recommendations based on data analysis, not from a stakeholder perspective
75
Q

different roles of an evaluator

A

objective outsider (detached, traditional) vs. co-investigator (participative)

76
Q

empowerment evaluator (2)

A
  1. late 80s-90s, stakeholders are in charge of the evaluation
  2. evaluators offering help from the sidelines
77
Q

collaborative evaluator (4)

A
  1. most common
  2. critical friend (joint venture between practitioner and evaluator)
  3. evaluator brings research skills to the table while program personnel bring knowledge of the program
  4. evaluator doesn’t make recommendations but encourages practitioners to reflect on the data
78
Q

stakeholder evaluator (2)

A
  1. convenes various stakeholders, structured engagement with stakeholders
  2. evaluator is in charge of the study, but is seeking stakeholder input for recommendations
79
Q

exogenous factors and ethics per Fox, Grimm, & Caldeira (3)

A
  1. literacy level
  2. power relations
  3. intercultural communication
80
Q

considerations in developing program process measures (8)

A

areas to consider:
1. types of programs offered
2. characteristics of the staff and clients
3. frequency of service
4. duration of service
5. intensity
6. size of group receiving service
7. stability of the service offered
8. quality of the service

81
Q

considerations in developing program input measures (7)

A

raw materials of the program include:
1. budget
2. nature of the staff
3. location
4. plan of activities
5. methods of service
6. purposes
7. client eligibility standards

82
Q

reasons experimental design is considered the hallmark of scientific research (2)

A
  1. avoids many issues of internal validity
  2. allows the researcher to establish a strong causal relationship
83
Q

problems with randomization of assignment of groups (4)

A
  1. refusal to participate
  2. nonattendance
  3. attrition: participants may leave the program or die
  4. outside interference
84
Q

external validity (2)

A
  1. whether results from the evaluation can be generalized to other situations
  2. more formally, it is the validity with which we can infer that a causal relationship which we observe during the evaluation can be generalized across different types of persons, settings, and times
85
Q

internal validity (2)

A
  1. whether the evaluation can demonstrate plausibly a causal relationship between the treatment and the outcome
  2. in other words, is the relationship between an independent and dependent variable a causal relationship?
86
Q

importance of formality and rigor in evaluative information (4)

A
  1. evaluation is a formal and rigorous process (separates it from informal evaluation)
  2. informal evaluations by the people running the program tend to be overly optimistic
  3. bringing rationality to policymaking
  4. market situations: performance is evaluated by the market