CHAPTER 12 Part 2 Flashcards

1
Q

is a systematic, formal, and scientific collection, organization, and interpretation of data to determine the worth of any educational phenomenon.

A

educational evaluation

2
Q

The three basic requirements for ensuring quality evaluation instruments are

A

validity, reliability, and practicality

3
Q

refers to the degree to which correct inferences can be made based on the information obtained from the given data.

This reflects the trueness and
accuracy of the evaluation data gathered.

A

Validity

4
Q

Validity of any data can be established through four basic means, namely:

A

construct
content
concurrent
predictive validity.

5
Q

validity is the broadest type because it deals with the degree to which a certain instrument describes the theoretically accepted traits or characteristics of a certain concept under study.

A

Construct

6
Q

________ validity, on the other hand, refers to the degree by which the instrument contains an adequate number of items to represent each of the constructs being studied.

A

Content

7
Q

________ validity should also reflect the relative weights of each construct being measured. For example, if clinical competence can be broken down into five constructs and if these are considered to be equally important, then to establish the content validity of an evaluation tool, the items in this tool should equally represent the said constructs.

A

Content validity

8
Q

The _________ validity of an instrument is established by collecting data to see if the results obtained with the instrument agree with the results from other instruments administered at approximately the same time to measure the same thing (Henerson, Morris, and Fitz-Gibbon 1978).

This is determined by computing the correlation coefficient, r, between data obtained from at least two different instruments.

A

concurrent
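As an illustrative sketch only (not part of the source text), the correlation coefficient r described above can be computed directly from two sets of scores in Python; the instruments, respondents, and figures below are hypothetical.

# Hypothetical illustration of concurrent validity: correlate scores from two
# instruments given to the same respondents at about the same time.
from statistics import correlation  # Pearson's r; requires Python 3.10+

new_instrument = [78, 85, 90, 62, 74, 88, 69, 81]          # made-up scores
established_instrument = [75, 88, 86, 60, 70, 91, 72, 79]  # made-up scores

r = correlation(new_instrument, established_instrument)
print(f"Concurrent validity estimate: r = {r:.2f}")

A high positive r would suggest that the two instruments measure the same thing; the same computation applies wherever the r test is mentioned in these cards, for example in the equivalent forms method of establishing reliability.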

9
Q

The _________ validity of an instrument is established by demonstrating that the results from the instrument can be used to predict some future behavior (Henerson, Morris, and Fitz-Gibbon 1978).

A

predictive

10
Q

A ________ data collection tool is able to generate consistent performance among the respondents being evaluated.

A

Reliable

11
Q

A _______ evaluation tool yields consistent scores for each respondent from one administration of an instrument to another and from one set of items to another (Frankel and Wallen 1993).

A

reliable

12
Q

Reliability can be established in several ways.

A

test-retest method

equivalent forms method

inter-rater reliability test

13
Q

Two different but equivalent (also called alternate or parallel) forms of an instrument are administered to the same group of respondents during the same time period.

The two parallel instruments should ideally be based on the same blueprint.

Again, the r test can be run to establish the reliability of the two instruments.

A

equivalent forms method

14
Q

The ______________ test, on the other hand, establishes consistency based on different raters who examined a particular examinee or respondent. For example, four consultants in a summative oral examination could examine a resident physician.

The scores given by the four raters can be tested for reliability and the higher the r obtained, the more consistent the raters’ ratings are.

A

inter-rater reliability test
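A minimal, hypothetical sketch of the idea above: four consultants score the same set of residents, and their consistency is summarized by averaging the pairwise values of r. (More formal indices, such as the intraclass correlation, also exist; this only illustrates the use of r the card describes.)

# Hypothetical inter-rater consistency check: each inner list holds one
# rater's made-up scores for the same five residents.
from itertools import combinations
from statistics import correlation, mean  # correlation requires Python 3.10+

ratings = [
    [82, 75, 90, 68, 77],
    [80, 78, 88, 70, 75],
    [85, 72, 91, 66, 79],
    [79, 76, 87, 71, 74],
]

pairwise_r = [correlation(a, b) for a, b in combinations(ratings, 2)]
print(f"Average pairwise r = {mean(pairwise_r):.2f}")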

15
Q

This requisite means the test should be efficient, economical, and easy to score and interpret. It is a common mistake
among evaluators to construct an unusually long questionnaire such that respondents lose interest in completing the said tool. Such a long tool will not be practical in terms of resources that will be required; it will also be quite tiresome for respondents and raters.

A

Practicality

16
Q
1. The diplomates’ examination in obstetrics-gynecology consists of twelve areas in the said field. The written examination is composed of 200 multiple-choice questions. If all twelve areas are equally important, how many items should each of them have in the said examination? What type of validity is ensured in this case?
A
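A hedged worked calculation for this exercise, assuming the twelve areas carry equal weight as the question states; allocating items in proportion to those weights is what the earlier cards on content validity describe.

# Worked sketch for the item-allocation question above (equal weighting assumed).
total_items = 200
areas = 12

items_per_area = total_items / areas
print(f"{items_per_area:.1f} items per area")  # 16.7, i.e., roughly 16 or 17 items each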
17
Q
2. Granting that one of the examiners, a leading personality in gynecologic oncology, insists that her part have more items than those in question #1, what requisite is violated?
A
18
Q
3. If a given academic program’s success is asserted by heavily biased clients during interviews and by other participants in the survey questionnaire, what requisite/s is/are ensured/violated?
A
19
Q

refers to the capsule presentation of the entire evaluation report.

A

executive summary

20
Q

This is usually from three to fifteen pages long and should give readers an overview of the salient points of the evaluation study, from the background up to the recommendations.

A

executive summary

21
Q

To facilitate the understanding of the final report, and in cognizance of the extremely full schedules of most sponsors and stakeholders who may not have the time to read a lengthy report, another one-page executive abstract can be presented prior to this executive summary.

A

executive summary

22
Q

The _______ is usually not more than 300 words, including the names of the evaluators, the title of the study, and the auspices under which the evaluation project was conducted.

A

abstract

23
Q

The __________ part should contain a rationale on the evaluability of the program and a discussion of the purposes, audiences, scope, and limitations of the evaluation study.

A

introductory

24
Q

In this section, the evaluators have to discuss whether the project is for a formative or summative purpose, and whether an internal or external evaluator shall conduct it.

A

Introduction

25
Q

Can the results of the evaluation influence decisions about the program?

The first question refers to the ________ criterion of evaluation. This implies that the evaluation is designed to answer specific questions raised by those in charge of the program.

A

utilization

26
Q

Can the evaluation be done in time to be useful?

_______, as the second criterion, deals with when and how long the evaluation would be done to affect
the next decisions about the program.

A

Timeliness

27
Q

Is the program significantenough to merit evaluation?

The last criterion deals with ________, referring to the relevance and necessity of the program, especially if it requires a considerable amount of resources.

A

significance

28
Q

Parts two and three of the basic steps in conducting an evaluation study can be simplified by making an ____________. A typical _______ enumerates the different evaluative questions in the study, a corresponding discussion of the data required, who can be the appropriate sources of such information, what instruments match the objectives and can generate the data needed, and how such data would be analyzed and interpreted.

A

evaluation matrix
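As an illustrative sketch only (not drawn from the source), an evaluation matrix with the columns enumerated above might be laid out as follows; the row content is hypothetical.

# Hypothetical evaluation matrix: one row per evaluative question, with the
# columns described in the card above.
evaluation_matrix = [
    {
        "evaluative_question": "Did residents reach the expected level of clinical competence?",
        "data_required": "Summative written and oral examination scores",
        "data_sources": "Graduating residents; examination records",
        "instruments": "200-item written examination; oral examination rating form",
        "analysis": "Descriptive statistics compared against programme standards",
    },
]

for row in evaluation_matrix:
    for column, entry in row.items():
        print(f"{column}: {entry}")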

29
Q

All chapters contain sets of objectives that you, as readers, should be able to reach after you have finished reading them. The underlying principle in this format is to guide you on what you can accomplish and, later on, appraise yourself accordingly on how far you have reached the targets.

A

Objectives-oriented evaluation model

30
Q

The model makes use of learning objectives as the standards in determining the success or failure of any educational phenomenon, program, or experience.

The pioneer evaluators also asserted that such objectives have to be formulated in strictly behavioral terms for easier evaluation.

A

Objectives-oriented evaluation model

31
Q

Educators and funding institutions alike appreciate the ____________________ model because the standards in determining the worth of a program are already clearly built into the program. However, critics also wrote that this model lacks the essential elements of a real evaluation since it is generally focused only on the attainment of its objectives on student achievement.

A

objectives-oriented evaluation model

32
Q

The ________________________ model is especially designed for administrators, hence, the title. This model rests on the rationale that evaluative information is an essential part of good decision making, and that the evaluator can best serve education by serving administrators, policy makers, school boards, teachers, and others in the school system (Worthen and Sanders 1987).

A

management-oriented evaluation

33
Q

In this particular framework (the management-oriented evaluation model), only the most popular and comprehensive model is discussed in detail:

A

the context, input,
process, and product (CIPP, pronounced as “sip”)

34
Q

Planning decisions: To identify the institutional context, the target population, and the opportunities for addressing needs; to diagnose problems underlying the needs; and to judge whether the proposed objectives are sufficient for those needs

A

Context

35
Q

Structuring decisions: To identify and assess system capabilities, alternative program strategies, procedural designs for implementing strategies, budgets, and schedules

A

Input

36
Q

Implementing decisions: To identify or predict, in process, defects in the procedural design or its implementation; to provide information for the preprogrammed decisions; and to record and judge procedural events and activities

A

Process

37
Q

Recycling decisions: To collect descriptions and judgments of outcomes, relate them to objectives, and determine the program’s worth and merit

A

Product