ITED 8600 - Program Evaluation Flashcards
CH 1 = How does evaluation serve society? Why is it important?
- Society and the problems it confronts become increasingly complex (p. 3)
- Make intelligent choices … Good information about the relative effectiveness (p. 4)
CH 1 = What is the difference between formal and informal evaluation?
Research > Conclusions
Evaluation > Judgements (internal validity and external validity)
Evaluation (Patton, 2000, p. 7) = appraise, analyze, assess, critique, examine, grade, inspect, judge, rate, rank, review, score, study, test
Identification, clarification, and application of defensible criteria to determine an evaluation object's value (worth or merit) in relation to those criteria. (p. 5)
Judged by their:
- accuracy (evaluation) (p. 7)
- utility (service of ____)
- feasibility (extent … realistic, diplomatic)
- propriety (evaluation … Legal and Ethical)
- preparation (research = depth in a single field) / (evaluator = respond to needs of clients and stakeholders … interdisciplinary and broad education)
FORMAL VS. INFORMAL = (p. 8)
Informal ~ faulty or wise / absence of breadth and depth / does not occur in a vacuum
Formal ~ methodical / presence of depth and breadth
CH 1 = What are some purposes for evaluation? What roles can an evaluator play? Give some examples from your experience with evaluation.
BASIC = render judgements about the value of whatever is being evaluated / determine the worth or value of whatever is being evaluated (pp. 10-11)
Mark, Henry, and Julnes (1999) = 4 purposes:
- assess merit and worth
- oversight and compliance
- program and organizational improvement
- knowledge development
ROLE OF EVALUATOR =
- trusted person
- bring organizational learning / installing a learning environment
- stimulates dialogue
- undertake many activities
USES AND OBJECTS OF EVALUATION: (p. 14-15)
- the “evaluated” and the “evaluatee”
CH 1 = What are the major differences between formative and summative evaluation?
FORMATIVE: provide information for program development / merit or worth for part of a program (PROCESS EVALUATION)
SUMMATIVE: information to serve decisions … program adoption, expansion, continuation … program's overall worth or merit (OUTCOME EVALUATION)
AUDIENCES:
Formative = people in a position to make changes … teacher, manager / use formative evaluation EARLY
Summative = policy makers / administrators (budgetary and legislative decisions)
DIFFERENCES: (p. 20) ~ comparison chart
CH 1 = What is an example of an issue an evaluator might address in a needs assessment, a process evaluation, and an outcome evaluation?
NEEDS ASSESSMENT: does a need exist / recommendations
PROCESS MONITORING: how program is delivered / qualifications / delivery environment
OUTCOME: describe, explore, determine changes
CHART COMPARISON: (p. 22 ~ FIGURE 1.3)
CH 1 = Under what circumstances might an external evaluator be preferable to an internal evaluator?
Program employees vs. outsiders: DIFFERENCE
There are ADVANTAGES and DISADVANTAGES to both models.
Too close … Biases … Too far away (distance) (p. 23)
Power, autonomy, protection ~ evaluation is hindered.
Combination of roles: CHART (p. 24) … Overlap
CH 2 = How did the early stages of evaluation influence practice today?
ORIGINS AND CURRENT TRENDS IN MODERN PROGRAM EVALUATION
30 / transdiscipline = law, education, accounting, sociology, political science, psychology
I. 30 / The History and Influence of Evaluation in Society
A. Early Forms of Formal Evaluation
B. Program Evaluation: 1800-1940
1. 32 / accreditation = educational evaluation; statewide testing (standardized / norm-referenced); achievement tests; personality and interest profiles (COMMERCIAL enterprise)
2. 33 / criterion-referenced testing = judge performance against a fixed standard; (alternative) to NRT (below)
3. 33 / norm-referenced testing = judge performance relative to a norm group (see the sketch below)
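A minimal Python sketch of the two scoring logics above. The 70-point cutoff, the 65/10 norm mean and SD, and the sample score are all invented for illustration, not from the text:

```python
# Hedged sketch: criterion-referenced vs. norm-referenced scoring.
# All numbers (cutoff, norm mean/SD, sample score) are hypothetical.
from statistics import NormalDist

def criterion_referenced(score: float, cutoff: float = 70.0) -> str:
    """CRT: judge the score against a fixed standard, ignoring other examinees."""
    return "mastery" if score >= cutoff else "non-mastery"

def norm_referenced(score: float, norm_mean: float = 65.0, norm_sd: float = 10.0) -> float:
    """NRT: report standing relative to a norm group, as a percentile rank."""
    return 100 * NormalDist(mu=norm_mean, sigma=norm_sd).cdf(score)

score = 72.0
print(criterion_referenced(score))                   # -> mastery (meets the fixed cutoff)
print(f"{norm_referenced(score):.0f}th percentile")  # -> standing within the norm group
```

The same raw score always yields the same CRT verdict, but its NRT percentile shifts whenever the norm group changes; that contrast is the core of the distinction.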
C. Program Evaluation: 1940 - 1964
1. Moving towards consolidation of earlier movements
2. Theoretical (as opposed to research) based evaluation did not exist ~ began to glean feedback from social, behavioral, and educational research
D. The Emergence of Modern Program Evaluation: 1964 - 1972
1. Growth of evaluation extended to the private and government sector ~ needed help: Congress; government waste, abuse, mismanagement (PUBLIC concern = “Great Society” programs)
E. The Elementary and Secondary Education Act
1. ESEA (1965)
2. Micromanagement at the Congressional level = accountability (!); standardized tests to demonstrate student learning and link outcomes to learning objectives.
F. Growth of Evaluation in Other Areas
1. Evaluation of Title I (one program)
2. Head Start / “Sesame Street” = program evaluation; program developers; characterized by innovation
3. Evaluators as social experimenters
G. Graduate Programs in Evaluation Emerge
1. Need for specialists in the field; Congress and universities respond; public and nonprofit sector jobs; social science graduates … interest in evaluation and policy analysis (Shadish, et al., 1991)
H. Evaluation Becomes a Profession: 1973 - 1984
1. Professional organizations: The American Educational Research Association’s Division H; Evaluation Research Society and Evaluation Network; Joint Committee on Standards for Educational Evaluation
2. Decline of evaluation at federal level = diversification of evaluation (settings and approaches)
I. 1985 - the Present
1. 43 / Table 2.1 / Stages in the Development of Evaluation (PENULTIMATE = PAGE 4)
II. Recent Trends Influencing Program Evaluation
A. 44 / Twelve emerging trends influencing the future:
1. Increase priority / legitimacy of internal evaluation
2. Expanded use of qualitative methods
3. Strong shift toward using multiple and diverse methods (qualitative and quantitative) to address evaluation questions: fully / appropriately
4. Expansion of theory-based (or theory-driven) evaluation
5. Increased concern / ethical issues = conducting program evaluation
6. Program evaluation in foundations and other agencies in the nonprofit sector
7. Increased involvement of stakeholders in the conduct of evaluation, often to empower stakeholders to conduct their own evaluations and / or to bring a new sense of learning to the organization
8. Discussion of the appropriate role of evaluators in advocacy, often expressed as evaluating for less powerful stakeholders
9. Advances in technology available to evaluators, and communication and ethical issues such advances will raise
10. Performance measurement in the federal government and nonprofit organizations and standards-based assessment in education as means for tracking performance
11. Growth of evaluation internationally
B. 44 / The Role of the Evaluator in Advocacy
1. Evaluators perceived as “neutral” or “value free”
2. Everyone has “baggage”
3. No one is truly a blank slate ~ the evaluator does bring a new and different perspective to the evaluation
4. Greene (1997) acknowledges that “the very notion of evaluation as advocacy invokes shudders of distaste and horror among most members of today’s evaluation community, theorists and practitioners alike” (p. 26)
5. House and Howe argue that evaluators should stimulate democratic dialogue and, hence, empower stakeholders who are often left out of the discussions regarding evaluation and program effectiveness.
6. Greene herself argues that evaluation should be “a force for democratizing public conversations about important public issues” (p. 28) … evaluators should recognize their role as advocates, to be explicit about those values, and to acknowledge the implications of those values.
7. 45 / Chelimsky (1998) ~ director of the Program Evaluation and Methodology Division of the General Accounting Office (GAO) … “Policy makers in Congress expect evaluators to play precisely such a role and provide precisely this kind of information” (p. 39)
8. Evaluation = evaluators are making choices; multiple roles … being seen as objective by various stakeholders … “advocacy” is a loaded term / theories and philosophies about evaluation ~ evaluators should inform users about their own goals in conducting an evaluation and how these goals coincide (FULL PICTURE)
C. 46 / Evaluators’ Use of Technological Advances and Resulting Communication and Ethical Issues
D. 47 / Performance Measurement and Standards-Based Education
E. 49 / Growth of Evaluation Internationally
F. 51 / Major Concepts and Theories
1. Commission reports: specific problems, objective tests, accreditation. Depression = social scientists work for the government.
2. Soviets’ launch of Sputnik 1 = pressure for the U.S.A. to teach math and science to American students. National Defense Education Act (NDEA) of 1958.
3. Great Society Legislation (President Johnson) = first major phase of growth in evaluation
4. Growth = first efforts to train and educate professionals.
5. Profession fully established (creation of professional organizations)
6. Expansion = more qualitative approaches and discussions (use by many diverse groups)
7. Spread to many other countries / different approaches to evaluation (performance monitoring and standards) ~ relationships with stakeholders
CH 2 = Are the current trends in performance measurement and standards-based education similar to earlier stages of evaluation? If so, how?
x
CH 2 = What major political events occurred in the late 1950s and early 1960s that greatly accelerated the growth of evaluation thought?
ALTERNATIVE VIEWS OF EVALUATION
I. 58 / Diverse Conception of Program Evaluation
A. Evaluation as essentially professional judgement
B. Comparison between student performance indicators and objectives = standards established as a benchmark (evaluator constructed)
C. Using decision-oriented approach = relative disadvantages / advantages of each decision (judge worth) and evaluation is shared
D. Evaluation of evaluators ~ good for: students, parents, community, group … judgement of worth by evaluator
II. 59 / Origins of Alternative Views of Evaluation
Niemi (1996): (1) experimentation, (2) measurement, (3) systems analysis, (4) interpretative approaches
A. 60 / Philosophical and Ideological Differences
1) Objectivist and Subjectivist Epistemology
a. epistemology = different philosophies of knowing
b. objectivism = evaluation be “scientifically objective” … data as reproducible and verifiable by other reasonable and competent persons using the same techniques.
c. subjectivism = “an appeal to experience rather than to scientific method. Knowledge is conceived as being largely tacit rather than explicit” (House, 1980, p. 252)
d. logical positivism = the view that claims not open to verification should be eliminated; now almost universally rejected, within and beyond evaluation; no data are infallible (p. 61)
2) Utilitarian versus Intuitionist-Pluralist Evaluation
a. utilitarian = determine value by assessing overall impact of a program on those affected.
b. intuitionist-pluralist evaluation = value depends on the impact of the program on each individual (more subjective); see the sketch below
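A small Python sketch of the contrast above, using invented per-participant gain scores (hypothetical data, not from the text):

```python
# Hedged sketch: utilitarian (aggregate) vs. intuitionist-pluralist (per-individual) valuation.
gains = [8, 7, 9, 6, -5]  # hypothetical outcome change for each participant

# Utilitarian stance: one overall judgment from the program's aggregate impact.
mean_gain = sum(gains) / len(gains)
print(f"utilitarian summary: mean gain = {mean_gain:.1f}")  # 5.0, looks positive

# Intuitionist-pluralist stance: value rests on each individual's experience,
# so the one participant who was harmed (-5) counts in its own right.
for i, g in enumerate(gains, start=1):
    print(f"participant {i}: {'benefited' if g > 0 else 'did not benefit'} ({g:+d})")
```

The aggregate view reports a clearly positive program; the per-individual view surfaces the participant that the mean hides.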
3) The Impact of Philosophical Differences
a. polarization (“either-or” dichotomies)
B. 63 / Methodological Backgrounds and Preferences
qualitative = senses / intuition / emotion
quantitative = data / measurement / numerical
1) Quantitative and Qualitative Evaluation
a. 64 / past decade ~ increased acceptability and use of qualitative data
2) Disciplinary Boundaries and Evaluation Methodology
C. 65 / Different Metaphors of Evaluation
1) Already underlie and influence much of our thinking ~ much of thinking is largely metaphorical
2) Sports contests or games (leading metaphors of targets and goals).
D. 66 / Responding to Different Needs
1) Many programs ~ address different needs (learn to identify)
E. 66 / Practical Considerations
1) Evaluators disagree about whether the intent of evaluation is to render a value judgement.
2) Evaluators differ in their general view of the political roles of evaluation.
3) Evaluators are influenced by their prior experience.
4) Evaluators differ in their views about who should conduct the evaluation and the nature of the expertise that the evaluator must possess.
5) Evaluators differ even on their perception of whether it is desirable to have a wide variety of approaches to evaluation.
III. 68 / A Classification Schema for Evaluation Approaches
A. Objectives-oriented approaches = objectives / extent met
B. Management-oriented approaches = ID and meet informational needs (managerial decisions)
C. Consumer-oriented approaches = products used by consumers
D. Expertise-oriented approaches = direct application of professional expertise
E. Participant-oriented approaches = stakeholders are central (values, criteria, needs, data, conclusions) (FIGURE 3.1, PAGE 68, PENULTIMATE PAGE 5)
IV. 69 / Major Concepts and Theories
A. Program Evaluation takes many forms depending on how one views evaluation, which in turn influences the types of evaluation activities conducted.
B. The methodologies and models an evaluator might employ in an evaluation depend in large part on the metaphors the evaluator uses to understand the program under evaluation and the needs of the program stakeholders.
CH 2 = How has advocacy emerged as a controversial issue in evaluation?
x
P2 Intro = Alternative Approaches to Program Evaluation
- Five general approaches to evaluation (one per chapter)
- 53 / Each chapter: 1) summarize previous thinking and writing; 2) discuss how the approach has been used; 3) examine strengths and weaknesses
54 / stakeholders = various individuals and groups who have a direct interest in and may be affected by the program being evaluated or the evaluation’s results ~ 1) identifying concerns and issues to be addressed in evaluating the program; 2) selecting the criteria that will be used in judging its value *
(Reineke, 1991) “evaluators need to identify stakeholders for an evaluation and involve them early, actively, and continuously”
54 / program = 1) “standing arrangement that provides for a … service” (Cronbach et al., 1980, p. 14); 2) complex of people, organization, management, and resources that collectively make up a continuing endeavor to reach some particular educational, social, or commercial goal; 3) an ongoing, planned intervention that seeks to achieve some particular outcome(s), in response to some perceived educational, social, or commercial problem
CH 3 = Why are there so many different approaches to evaluation?
x
CH 3 = How would objectivists and subjectivists differ in their approach to evaluation?
x
CH 3 = Why is evaluation theory, as reflected in different approaches to evaluation, important to learn?
x