Midterm Flashcards

1
Q

Difference between Program Theory and Logic Model

A

In simple terms, a logic model is a picture of your theory—a drawing that shows how one thing leads to the next, like a flow chart. A logic model uses short phrases to represent things that you explain in more detail in the program theory. While a logic model can just use an arrow to show that one thing leads to the next, your program theory needs to lay out the evidence to show why you believe one thing will lead to the next. A logic model is one commonly used tool for illustrating an underlying program theory.

2
Q

Program Evaluation

A

The process of systematically gathering empirical data and contextual information about an intervention program—specifically, answers to what, who, how, whether, and why questions that will assist in assessing a program’s planning, implementation, and/or effectiveness

3
Q

Formative vs. Summative Evaluation

A

Formative evaluation is evaluation designed, done, and intended to support the process of improvement; it is normally commissioned or done by, and delivered to, someone who can make that improvement. Summative evaluation is the rest of evaluation: in terms of intentions, it is evaluation done for, or by, any observers or decision makers (in contrast to developers) who need evaluative conclusions for any reasons besides development.

4
Q

Four Basic Evaluation Types

A
  1. Constructive Process Evaluation
  2. Conclusive Process Evaluation
  3. Constructive Outcome Evaluation
  4. Conclusive Outcome Evaluation
5
Q

Constructive Process Evaluation

A

provides information about the relative strengths and weaknesses of the program’s structure or implementation processes, with the purpose of program improvement. Does not provide an overall assessment of the success or failure of program implementation. (ex: identify which program elements are best)

6
Q

Conclusive Process Evaluation

A

conducted to judge the merits of the implementation process; attempts to judge whether the implementation of a program is a success or failure, appropriate or inappropriate. (ex: are services being provided to the target population?)

7
Q

Constructive Outcome Evaluation

A

identifies the relative strengths and weaknesses of program elements in terms of how they may affect program outcomes. Can improve the degree to which a program achieves its goals, but does not make a judgement about overall program effectiveness.

8
Q

Conclusive Outcome Evaluation

A

provides an overall judgement of a program in terms of its merit or worth. Synonymous with summative evaluation. (ex: determine whether changes in outcomes can be causally linked to the program intervention.)

9
Q

Scientific vs Stakeholder Credibility

A

Scientific credibility reflects the extent to which an evaluation was governed by scientific principles. Stakeholder credibility is the extent to which stakeholders believe the evaluation’s design gives serious consideration to their views, concerns, and needs. Evaluation is both a science and an art, so it must strike a balance between scientific and stakeholder credibility.

10
Q

Two requirements for evaluation evidence

A

Two requirements: 1) scientific requirement: evaluative evidence must be credible. 2) stakeholder requirement: evidence must respond to the stakeholders’ views, needs, and practices, so as to be useful; stakeholders are consumers of evaluation.

11
Q

Integrated evaluation perspective

A

This is a synthesis of the scientific and responsiveness requirements that does not prioritize one over the other; it urges evaluators to develop evaluation theories and approaches that integrate stakeholders’ views and practices (acknowledging the dynamic nature of an intervention program in its community) with scientific principles and methods, so as to enhance the usefulness of evaluation.

12
Q

Science vs Art of evaluation

A

To be competent, an evaluator must master both the science of evaluation (tools) and the art of evaluation (communication, etc.).

13
Q

Four Stages of Program Lifecycle

A
  1. Program Planning
  2. Initial Implementation
  3. Mature Implementation
  4. Outcome Stage
14
Q

Program Planning

A

First stage of Program Lifecycle: Developing a plan that will serve as a foundation for organizing and implementing a program at some future date

15
Q

Initial Implementation

A

2nd Stage of Program Lifecycle: When a program plan is put into action; the program is often highly fluid and unstable at this time. Here, stakeholders need timely feedback on implementation problems and their causes

16
Q

Mature Implementation

A

3rd Stage of Program Lifecycle: Begins when implementation of the program has settled into routine activities; rules and policies are well-established.

17
Q

Outcome Stage

A

Final (4th) Stage of Program Lifecycle: Following program maturity, stakeholders inside and outside the program want to know whether the program is achieving its goals.

18
Q

Evaluation Approach

A

Constitutes a systematic set of concrete procedures and principles that guide the design and conduct of an evaluation. It determines the evaluation’s focus and affects the research methods applied to collect and analyze data as well as interpretation of data. (ex: needs assessment, program theory / logic models, pilot-testing, commentary or advisory meeting, etc.)

19
Q

Evaluation Strategy

A

The general path that the evaluator and stakeholders take, or orientation they have, in order to fulfill a given evaluation approach’s purpose. (ex: background information provision, development facilitation, troubleshooting, partnership, merit assessment, performance assessment, performance monitoring, etc.)

20
Q

Descriptive Assumptions

A

Within the action/change model framework, descriptive assumptions concern the causal processes underlying the social problem a program is trying to address. Assumptions about the causal processes through which an intervention or treatment is supposed to work are crucial for any program, because its effectiveness depends on their truthfulness.

21
Q

Prescriptive Assumptions

A

Prescribe those components and activities that the program designers and other key stakeholders see as necessary to a program’s success; they direct the design of the intervention program by identifying what activities and components are necessary for the program. This is part of the action model.

22
Q

Components of the Change Model

A

3 Components: Goals & Outcomes, Determinants, and Intervention or Treatment.

  • Goals and outcomes: Goals reflect the desire to fulfill unmet needs; a program’s existence is justified through the meeting of its goals, which are usually articulated in very general language. Outcomes are concrete, measurable aspects of goals.
  • Determinants: the leverage mechanism or cause of a problem that provides the basis of the treatment or intervention developed to meet a need. Once the program activates the determinant, its goals will soon be achieved.
  • Intervention or treatment: any activity of a program that is aimed directly at changing a determinant; it is the agent of change within a program.
23
Q

Components of the Action Model

A

6 components: intervention and service delivery protocols, implementing organizations, program implementers, associate organizations / community partners, ecological context, and target population.

24
Q

Define Action Model

A

An action model is a systematic plan for arranging staff, resources, settings, and support organizations in order to reach a target population and deliver intervention services

25
Q

Intervention Protocol

A

Component of Action Model: a curriculum or prospectus stating the exact nature, content, and activities of an intervention; details of orienting perspective and operating procedures.

26
Q

Service Delivery Protocol

A

Component of Action Model: refers to the particular steps to be taken in order to deliver the intervention in the field; it covers client-processing procedures, the division of labor in service delivery, and communication channels.

27
Q

Implementing Organizations

A

Component of the Action Model: the organization responsible for allocating resources, coordinating activities, recruiting, training, and supervising implementers and other staff.

28
Q

Program Implementers

A

Component of the Action Model: organization or people responsible for delivering services to clients.

29
Q

Associate Organizations/Community Partners

A

Component of the Action Model: programs often may benefit from, or even require, cooperation or collaboration between their implementing organization and other organizations; without this relationship, implementation of such programs may be hindered.

30
Q

Ecological Context

A

Component of Action Model: the portion of the environment that directly interacts with the program. Micro-level contextual support comprises the social, psychological, and material supports that clients need in order to continue participating in intervention programs; macro-level support comprises community norms, cultures, and political and economic processes.

31
Q

Program Scope vs. Action Plan

A

Both are part of the program plan.

  • Program scope includes stakeholders’ reasons for selecting goals and target population; it states the scope of the intervention to be used and explains how it is expected to lead to achievement of the goal. It is the change model plus boundary questions, which include the problem and target audience.
  • Action plan: details how the intervention will be implemented, given the goals and target population.
32
Q

Conceptual Framework of an Action Plan

A

Used when evaluators need to help stakeholders conceptualize all of the various activities within a meaningful scheme so that implementation can be successfully managed. A conceptualization of a generic action plan is useful in developing any number of specific, situational action plans.

33
Q

Difference between intervention and service delivery protocol

A

Intervention protocol requires that the services to be provided by the program be specified in detail; it is a description of the content, curriculum, intensity, and duration of the intervention services or activities to be provided to the target group. Service delivery protocol describes the procedures involved in, and the setting for, the delivery of services.

34
Q

Pilot Testing

A

Determines field feasibility with a small-scale field trial of an action plan, conducted rapidly to assess and improve the implementation of a full-scale intervention.

35
Q

Formative research vs. Formative evaluation

A

Formative research provides background information to further stakeholders’ design of a program, whereas formative evaluation is a development-oriented evaluation application used to troubleshoot problems once a program is formally implemented.

36
Q

Four types of Formative Evaluation

A
  • On-site observation and checking: Evaluators themselves participate in a program, or else observe the implementation process, to identify major implementation problems (if any) and probe causes.
  • Focus Group meeting: Interactive strategy for gaining knowledge of the perceptions, experiences, and beliefs of a small group of people about a topic or experience with which they are familiar.
  • Intensive interviews
  • Comprehensive scanning: rapidly identifies major implementation problems and otherwise scans for opportunities to enhance a large program by using several data collection methods.
37
Q

Two reasons why Formative Evaluation should never be used to describe a program’s quality

A

1) Formative evaluation aims at providing a “quick fix” mandated by the needs of stakeholders when programmatic problems surface. When conditions are so changeable, it is difficult to make meaningful value judgements; a stable pattern of implementation usually must emerge before quality or merit can be judged. 2) To provide feedback quickly, formative evaluation often must apply research methods elastically, altering certain “prefabricated” methods to suit the circumstances. This elasticity in the application of research methods means that the methodology would be difficult to defend if the results were used to rate a program’s merit.

38
Q

Congruency

A

The alignment between stakeholders’ intentions for the program (their program plan) and program implementation; congruency between program plan and program implementation is widely understood to signify a high-quality implementation.

39
Q

Reinvention

A

Change is a necessary part of the adoption of any program, and its occurrence is absolutely necessary to preserve program effectiveness. Thus, discrepancy between a program plan and the observable implementation of the plan is desirable and should be encouraged.

40
Q

Four fundamental issues of program evaluation

A

Theory: how and why evaluation is done, and what its purpose is.
Method: the tools used (ex: surveys, interviews, quantitative measurement).
Practice: program evaluation as something that occurs in the real world and is affected by politics; it has stakeholders, clients, and a context in which it operates.
Profession: a distinguishable group of licensed professionals, associations, etc. These things imply that there is an accepted, standardized body of knowledge; however, program evaluation has considerable disagreement about theories and how it should be practiced.

41
Q

Branches of Evaluation Tree

A

3 Branches: Use, Methods, and Valuing.

Use: You need to go through the process from beginning to end to make sure that the results of the evaluation actually get used; work with clients/stakeholders to answer their questions so it is more likely the results will be used.
Methods: Collecting and analyzing data. Initially, program evaluation was defined by the quantitative, experimental methodology of the physical sciences; this was summative evaluation, and anything else meant not being a true evaluator.
Valuing: The scientific method, when applied, should show causation or causal linkage. It gives you a binary answer (did this work or not?) but doesn’t say whether the result is good or bad. The evaluator must decide what is good and what is bad.

42
Q

Six headings for logic models

A

Inputs, Activities, Participation, Short-term Outcomes, Medium-term Outcomes, Long-term Outcomes.

Inputs: Program investments (staff, volunteers, time, money, research base, materials, equipment, technology, partners).

Activities: What is done (train, teach, deliver services, develop products and resources, network with others, build partnerships, assess, facilitate, work with the media).

Participation: Who is reached (participants, clients, customers, agencies, decision makers, policy makers).

Outcomes: Short-term (learning: changes in awareness, knowledge, attitudes, skills, opinions, aspirations, motivation, behavioral intent); Medium-term (action: changes in behavior, decision-making, policies, social action); Long-term (conditions: changes in social (well-being), health, economic, civic, and environmental conditions).

43
Q

Difference between traditional evaluation and responsive evaluation

A

Whereas traditional evaluation draws legitimacy from scientific rigor, responsive evaluation draws legitimacy from endorsements by a majority of important stakeholders.
Traditional evaluation: characterized by its emphasis on scientific methods. Reliability and validity of the collected data are key, and the main criterion for a quality evaluation is methodological rigor; it requires the evaluator to be objective, neutral, and outcome-focused.
Responsive evaluation: an alternative to traditional evaluation that is less objective and more tailored to the needs of those running the program; it “sacrifices some precision in measurement, hopefully to increase the usefulness of the findings to persons in and around the program”.

44
Q

How evaluators’ roles and responsibilities differ across the 4 program life cycle stages

A

During the program development stage, evaluators need to be flexible and nimble, quickly designing and collecting data that will be immediately useful to the next design-stage activity.
The impact assessment phase requires a carefully developed evaluation design, with detailed planning for maintaining its integrity and for sustaining the intended activities of program staff. This is most often done externally to the organization, by professionals with strong research skills along with research support staff experienced in the many detailed activities being tested.
Delivery phase: evaluators work continuously to produce the needed monitoring data or, often, train and coach local program staff to collect and analyze their own feedback.

45
Q

Basic Monitoring

A

the process of developing and analyzing data to count and/or identify specific program activities and operations. Concerned with answering simple questions about program activities (like who, what, where).

46
Q

Process Evaluation

A

involves developing and analyzing data to assess program processes and procedures, especially determining the connections between various program activities.

47
Q

Outcome Evaluation

A

involves developing and analyzing data to assess program impact and effectiveness. Can’t be done without monitoring and process evaluation beforehand.

48
Q

Difference between program logic and program theory

A

Program logic is used to identify and describe the way in which a program fits together, usually in a simple sequence of inputs, activities, outputs, and outcomes. Program theory goes a step further and attempts to build an explanatory account of how the program works, with whom, and under what circumstances. Thus, program theory might be seen as an elaborated program logic model, where the emphasis is on causal explanation using the idea of mechanisms that are at work.

49
Q

Mechanism

A

underlying entities, processes, or structures that operate in particular contexts to generate outcomes of interest. Mechanisms are often unobservable or “hidden,” are sensitive to variations in context, and generate outcomes. It is possible to make a plausible case for the existence of underlying mechanisms by referring to observable effects that can only be explained as products of underlying mechanisms.

50
Q

Definition of a Need

A

A need is the gap between what is (the current state, situation, or condition) and what should be (the preferred or desired state, situation, or condition).
The gap between what is and what should be must be measurable, which means that the two conditions must also be measurable.

51
Q

Key questions guiding all phases of a needs assessment

A

What is the current situation for clientele, employees, and administrators?
What should the situation be?
If there is a gap between what is and what should be, then:
What are the consequences for not changing the current situation?
What is causing the gap?
What are some possible solution strategies to reduce the gap?

52
Q

Differences among a problem, need, and underlying cause

A

Problems are systemic issues that require more than one lens to solve.
Needs refer to some aspect of a problem that can be immediately addressed, while that may not be possible for other aspects of the problem. Needs are systemic, evidence-based phenomena linked to all or part of the problem. They suggest that identification of the need-related aspects of a problem helps specify the conditions and actions necessary to address the focus of the problem.
Underlying causes partially reflect barriers and other resistant factors that prevent needs from being met. Therefore, identified needs are likely to be met only if the underlying causes are understood and confronted.
Example: Problem–unemployment; need–a job; underlying causes–lack of training in a skilled job area and lack of hope and motivation.

53
Q

Difference between program theory and implementation theory

A

Program theory illuminates the set of cause-and-effect relationships that provide the rationale for the nature of the treatment. Program theory should be used by evaluators to guide the development of measuring instruments to assess what program was delivered.
Implementation process theory discusses variables governing the delivery mechanism itself. It helps illuminate why a program is or is not being delivered accurately and examines potential solutions for increasing the extent of program delivery.

54
Q

Major hypothesis about extent of implementation and organizational environment

A

A major hypothesis is that the extent of implementation is likely to be higher when there is a greater degree of congruence between the nature of the program or innovation being attempted and the characteristics of the organizational environments involved.

55
Q

Five ways to measure fidelity of implementation

A

(1) Adherence to the program; (2) dose (the amount of the program delivered); (3) quality of the program delivery; (4) participant responsiveness; and (5) program differentiation.