UNIT 4 Flashcards

1
Q

As teachers become more familiar with data-driven instruction, they are making decisions about what and how they teach based on the information gathered from their students.

A

o (1) Teachers first find out what their students know and what they do not know.

o (2) Teachers determine how to bridge that gap.

2
Q

gives a value judgment to assessment through a qualitative measure of the prevailing situation

A. Measurement
B. Assessment
C. Evaluation

A

C. Evaluation

3
Q

determines desired dimensions of defined
characteristics

A. Measurement
B. Assessment
C. Evaluation

A

A. Measurement

4
Q

measures the performance of an individual against a known objective

A. Measurement
B. Assessment
C. Evaluation

A

B. Assessment

5
Q

o involves a quantitative amount of measurement using a standard instrument
o quantity is mentioned, but no value judgment is made on the performance of the student

A. Measurement
B. Assessment
C. Evaluation

A

A. Measurement

6
Q

o a form of feedback on students' learning
o assists a teacher in determining what, how much, and how well the students are learning

A. Measurement
B. Assessment
C. Evaluation

A

B. Assessment

7
Q

o analyzes the result obtained from measurement
based on predetermined standards

A. Measurement
B. Assessment
C. Evaluation

A

C. Evaluation

8
Q

raw score, percentile rank, and standard score

A. Measurement
B. Assessment
C. Evaluation

A

A. Measurement

9
Q

o measures the educational achievements of the
students

A. Measurement
B. Assessment
C. Evaluation

A

C. Evaluation

10
Q

useful tool in improving teaching and learning

A. Measurement
B. Assessment
C. Evaluation

A

B. Assessment

11
Q

quantitative instruments are used: tests,
aptitude tests, inventories, questionnaires, etc.

A. Measurement
B. Assessment
C. Evaluation

A

B. Assessment

12
Q

What are the 3 essential characteristics of measurement?

A

Reliability
Validity
Objectivity

13
Q

TRUE OR FALSE

All assessments are tests but not all tests are assessments

A

FALSE

All tests are assessments, but not all assessments are tests

Assessments can also be done through verbal and skills testing

14
Q

TRUE OR FALSE

A test is a special form of assessment and can be given at the end of a lesson or unit

A

TRUE

15
Q

Content-oriented:

Objective-oriented:

A

Content-oriented: Measurement

Objective-oriented: Evaluation

16
Q

It is an end in itself:

It is a means, and not an end in itself:

A

It is an end in itself: Evaluation

It is a means, and not an end in itself: Measurement

17
Q

Deduce inferences from the evidence:

To gather evidence:

A

Deduce inferences from the evidence: Evaluation

To gather evidence: Measurement

18
Q

TRUE OR FALSE

Measurement is an essential part of education

A

FALSE

Measurement is not an essential part of education

EVALUATION is an integrated and necessary part of education

19
Q

Acquaints with a situation:

Acquaints about the entire situation:

A

Acquaints with a situation: Measurement

Acquaints about the entire situation: Evaluation

20
Q

TRUE OR FALSE

Evaluation can be conducted any time

A

FALSE

Measurement can be conducted at any time

EVALUATION is a continuous process

21
Q

emphasis is upon a single aspect of subject matter achievement or specific skills and abilities

A

Measurement

22
Q

emphasis is upon broad personality changes and major objectives of an educational program.

A

Evaluation

23
Q

o systematic, data-based test that measures what and how well the students learn
o used in determining student proficiency and mastery of the content, and for comparison against a certain standard

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

24
Q

o spontaneous and flexible form of assessment that can easily be incorporated into day-to-day classroom activities and measures a student's performance and progress

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

25
Q

Makes use of rubrics

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

26
Q

o Norm-referenced: compares different scores within the same group

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

27
Q

o Criterion-referenced: the process of evaluating and grading the learning of students against pre-specified qualities or criteria, without reference to the achievement of others

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

28
Q

More likely to occur in a classroom learning environment; helps teachers acquire information on a continuing and informal basis

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

29
Q

Can take place in any student-teacher interaction; has the potential to occur at any time and can involve the whole class, a small group, or one-on-one interaction

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

30
Q

Day-to-day activities; qualitative

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

31
Q

Beyond the classroom environment

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

32
Q

Exams, diagnostic tests, achievement tests, aptitude tests, intelligence tests, quizzes

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

33
Q

Checklist, observation, portfolio, rating scale, records, interviews, journal writing

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

34
Q

Data-based knowledge testing; systematic and structured

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

35
Q

Mathematically computed and summarized

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

36
Q

Tracks the progress of every student by using actual works

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

37
Q

Used for comparison against a certain standard

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment
38
Q

Determines whether learning is taking place

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

39
Q

Provides information to the students, parents, and administrators on the level of accomplishment attained

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

40
Q

Occurs at the end of instruction; primarily retrospective

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

41
Q

Provides feedback on how things are going

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

42
Q

Concerned with the purposes, progress, and outcomes of the teaching-learning process

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

43
Q

Conducted during teaching or instruction; primarily prospective

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

44
Q

Determines if learning is sufficiently complete; determines how well things went

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

45
Q

Observation, oral questioning, assignments, quizzes, discussions, reflection, research proposals, peer or self-assessment

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

46
Q

Helps students perform well at the end of the program

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

47
Q

Evaluates the effectiveness and improvement of teaching

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

48
Q

Unit tests, final examinations, comprehensive projects, research papers, presentations, projects, portfolios

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

49
Q

Gathers detailed information; narrow in the scope of content

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

50
Q

Gathers less detailed information, but broader in the scope of content

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

51
Q

○ Informing instruction
○ Designed to quickly inform instruction by providing specific and immediate feedback through daily, ongoing instructional strategies that are student- and classroom-centered

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

52
Q

○ Summary of learning
○ Provides a summary of student learning; this assessment evaluates student learning, knowledge, proficiency, or success at the conclusion of an instructional period

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

53
Q

○ Gives a final measure of value, effect, or quality

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

54
Q

○ Helps students identify their strengths and weaknesses so they can perform well at the end of the program

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

55
Q

○ Often completed at the end or summation of instruction, intervention, or program activities

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

56
Q

○ Primarily prospective: shapes direction and feeds the process of ongoing adjustment and improvement
○ Occurs before or during instruction, the implementation of an intervention, or a program

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment
57
Q

Interprets scores in terms of an absolute standard

A. Norm-referenced
B. Criterion-referenced

A

B. Criterion-referenced

58
Q

Emphasizes thinking and application of knowledge
Content-centered
Assesses higher-level thinking and writing skills

A. Norm-referenced
B. Criterion-referenced

A

B. Criterion-referenced

59
Q

Relative ranking of students

A. Norm-referenced
B. Criterion-referenced

A

A. Norm-referenced

60
Q

Percentile rank, normal curve; NSAT, College Entrance Examination, National Achievement Test, IQ test, cognitive ability test

A. Norm-referenced
B. Criterion-referenced

A

A. Norm-referenced

61
Q

Measures how well students have mastered a particular body of knowledge
A student competes against himself or herself

A. Norm-referenced
B. Criterion-referenced

A

B. Criterion-referenced

62
Q

Evaluates the effectiveness of the teaching program and students' preparedness

A. Norm-referenced
B. Criterion-referenced

A

A. Norm-referenced

63
Q

Focuses too heavily on memorization and routine procedures
Examinee-centered
Highlights achievement differences

A. Norm-referenced
B. Criterion-referenced

A

A. Norm-referenced

64
Q

Monitors students' performance in their day-to-day activities
Setting performance standards

A. Norm-referenced
B. Criterion-referenced

A

B. Criterion-referenced

65
Q

Test item analysis
Domain-referenced tests, competency tests, basic skills tests, mastery tests, performance assessments, objective-referenced tests, authentic assessments, standards-based tests

A. Norm-referenced
B. Criterion-referenced

A

B. Criterion-referenced

66
Q

Students compete against each other

A. Norm-referenced
B. Criterion-referenced

A

A. Norm-referenced

67
Q

○ Designed to compare and rank test takers in relation to one another
○ Determines students' placement in a normal distribution curve

A. Norm-referenced
B. Criterion-referenced

A

A. Norm-referenced

68
Q

○ Measures a student's performance based on mastery of a specific set of skills
○ Measures what the student knows and does not know at the time of assessment

A. Norm-referenced
B. Criterion-referenced

A

B. Criterion-referenced

69
Q

○ Does not test problem solving, decision making, judgment, or social skills
○ Based on judgments about examinees
○ Judges categorize examinees according to performance level

A. Norm-referenced
B. Criterion-referenced

A

A. Norm-referenced

70
Q

A student's performance is not compared to other students' performance on the same assessment

A. Norm-referenced
B. Criterion-referenced

A

B. Criterion-referenced

71
Q

○ Evaluates whether students have learned a specific body of knowledge or acquired a specific skill set taught in a course, academic program, or content area

A. Norm-referenced
B. Criterion-referenced

A

B. Criterion-referenced
72
Q

Refers to accuracy

A. Validity
B. Reliability

A

A. Validity

73
Q

● Repeated results are consistent

A. Validity
B. Reliability

A

B. Reliability

74
Q

● "Consistency or accuracy with which the scores measure a particular cognitive ability of interest" - Ebel and Frisbie (1991)

A. Validity
B. Reliability

A

A. Validity

75
Q

Refers to the consistency of the measure, or repeatability

A. Validity
B. Reliability

A

B. Reliability

76
Q

● Degree of relation between what the test measures and what it is supposed to measure

A. Validity
B. Reliability

A

A. Validity

77
Q

TRUE OR FALSE

A reliable test is always valid, but a valid test may not be reliable

A

FALSE

A test may be reliable but not valid, but a valid test is always reliable

78
Q

TRUE OR FALSE

A test cannot be considered valid unless the measurements from it are reliable. Likewise, results from a test can be reliable and not necessarily valid.

A

TRUE
79
Q

Determines the consistency of the raters

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

B. Inter-rater reliability

80
Q

Determines the consistency of the test results across items

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

D. Split-half reliability

81
Q

Determines the consistency of the test across time

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

A. Test-retest reliability

82
Q

Compares two different tests with the same content, quality, and difficulty level, administered to the same person

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

C. Parallel-forms reliability

83
Q

The test is administered twice, at different points in time

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

A. Test-retest reliability

84
Q

Compares and correlates the scores of two or more raters/judges (e.g., two teachers)

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

B. Inter-rater reliability

85
Q

Determines the consistency of the test content

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

C. Parallel-forms reliability

86
Q

Split the test into two halves: one half may be composed of the even-numbered questions, the other of the odd-numbered questions. Administer both halves to the same individual, and repeat for a large group of individuals. Then find the correlation between the scores for the two halves: the higher the correlation between the two halves, the higher the consistency of the test.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

D. Split-half reliability

87
Q

A person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

A. Test-retest reliability

88
Q

Reliability refers to the consistency of the measure, or repeatability

A. Validity
B. Reliability

A

B. Reliability
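The split-half procedure in the cards above is just a correlation between two half-test scores plus the Spearman-Brown step-up; here is a minimal Python sketch (the item-score matrix and function names are hypothetical illustrations, not part of the deck):

```python
# Split-half reliability sketch: score the odd- and even-numbered
# items separately, correlate the two half scores across students,
# then step the correlation up with the Spearman-Brown formula.
# The 0/1 item-score matrix below is hypothetical.

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one row of 0/1 item scores per student."""
    odd = [sum(row[0::2]) for row in item_scores]    # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]   # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown correction

scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
]
print(round(split_half_reliability(scores), 2))  # prints 0.82
```

The higher the corrected correlation, the more internally consistent the test, matching card 86.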
89
Q

Deals with whether the assessment is measuring the correct construct (trait/attribute/ability/skill)

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

C. Construct Validity

90
Q

Deals with whether the assessment scores obtained for participants are related to a criterion outcome measure

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

A. Criterion Validity

91
Q

The test items must include the factors that make up psychological constructs such as intelligence, critical thinking, reading comprehension, or mathematical aptitude.

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

C. Construct Validity

92
Q

○ The general description of the student's performance

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

C. Construct Validity

93
Q

● Deals with whether the assessment's content and composition are appropriate given what is being measured (e.g., does the test reflect the knowledge/skills required to do a job or demonstrate that one grasps the course material?)

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

B. Content Validity

94
Q

The test to be measured is compared to accepted standards

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

A. Criterion Validity

95
Q

Related to, but not to be confused with, "face validity"

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

B. Content Validity
96
Q

Steps in constructing a test

A

● Determine the purpose of the test.
● State the learning outcomes to be measured in terms of specific, observable behavior.
● Outline the topics to be measured.
● Prepare a TOS as a basis for the test.
● Decide on the type of test.
● Decide on the length of the test.

97
Q

(5) purposes of a TOS

A

● Serves as a guide for teachers to translate their instructional objectives.
● A guide for item construction on the relative importance of each component.
● Improves the validity of the teachers' evaluation.
● Useful for teachers to support their proficient judgment in creating a test.
● Provides a framework for organizing information.

98
Q

How to construct a TOS

A

1. Determine the desired number of items.
2. List the topics with the corresponding allocation of time (weights).
3. Determine the length of time spent in teaching each essential topic.
4. Determine the total number of items per topic.
5. Determine the percentage allocation per domain.
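Steps 2-4 of constructing a TOS reduce to allocating items in proportion to teaching time; a small Python sketch of that arithmetic (the topic names, hours, and function name are hypothetical examples, not part of the deck):

```python
# Table-of-Specifications arithmetic: allocate test items to topics
# in proportion to the time spent teaching each topic.
# Topic names and hours below are hypothetical examples.

def allocate_items(hours_per_topic, total_items):
    """Items per topic = total items x (topic hours / total hours)."""
    total_hours = sum(hours_per_topic.values())
    return {topic: round(total_items * hours / total_hours)
            for topic, hours in hours_per_topic.items()}

hours = {"Measurement": 4, "Assessment": 6, "Evaluation": 10}
print(allocate_items(hours, 50))
# {'Measurement': 10, 'Assessment': 15, 'Evaluation': 25}
```

Rounding can make the allocations sum to slightly more or less than the desired total, so the counts are usually adjusted by hand afterward.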
99
Q

● Composed of two (2) sets of terms, events, phrases, definitions, statements, etc.
● The two sets can be interchanged.

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

C. Matching Type

100
Q

● Measures the extent to which students can judge whether statements are true or false

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

A. True or False

101
Q

● Assesses students on the essential outcomes

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

A. True or False

102
Q

● Each premise matches with a corresponding response

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

C. Matching Type

103
Q

● A test of choice if you want to go beyond recall of information

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

A. True or False

104
Q

Easily prepared and covers a wide range of topics

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

A. True or False

105
Q

● Measures the student's ability to analyze, synthesize, or evaluate
● A subjective type of test

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

D. Essay

106
Q

● A factual statement
● Provides a stem that may be a question or an incomplete statement

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

B. Multiple Choice

107
Q

● Takes the form of description, explanation, discussion, comparison, illustration, criticism, etc.

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

D. Essay

108
Q

A multiple-choice item consists of how many maximum responses?

A

5

109
Q

● Consists of four or five alternative responses (maximum of 5 responses only)

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

B. Multiple Choice
110
Q

Rules in constructing True or False items

A

● Do not provide extraneous clues (all, never, absolutely, none, may, perhaps, sometimes, could); statements must be irrevocably true or false.
● Avoid the use of negative statements and double negatives.
● Limit statements to a single significant concept.
● Keep statements short and use simple language structure.
● Opinion statements should be attributed to some source.
● Word each item concisely.
112
Q

Rules in constructing MULTIPLE CHOICE

A

● Avoid tricky questions.
● All items should be independent.
● The stem should contain only one main idea.
● State the stem in positive form whenever possible; avoid negative statements.
● Use negative words sparingly, and underline them.
● The distractors must be plausible and attractive to the uninformed.
● State the problem clearly in the stem/question.
● All questions should be relevant and not far-fetched.
● The instructions must be clear, unambiguous, and precise.
● The vocabulary level and the difficulty of the items should correspond to the level of the learners.
● Avoid clues that can answer another test item.

113
Q

Rules in constructing matching type

A

● Choose homogeneous material for each set of items in the matching cluster.
● Keep the list of items relatively short.
● Avoid "perfect matching" (5 questions = 5 answers); the number of premises should be greater or fewer than the number of responses.
● Arranging the premises or responses in alphabetical order prevents giving away clues.
● Use the longer phrases for premises and the shorter ones for responses.

114
Q

Rules in constructing essay

A

● Be specific.
● Instructions must be clear, unambiguous, and precise.
● Questions must clearly measure the learning outcomes.
● Use appropriate verbs for the expected level of thinking.
● Allow sufficient time and indicate time limits for every question.
115
Q

● Provides a feedback mechanism on the performance of a certain task, assignment, or group project

A

Rubric

116
Q

E.g. checklists, simple rating scale, holistic rating scale, task-specific

A. Analytic rubrics
B. Holistic rubrics

A

B. Holistic rubrics

117
Q

Is more product-oriented

A. Analytic rubrics
B. Holistic rubrics

A

B. Holistic rubrics

118
Q

E.g. detailed rating scale, combination rubrics, total points/analytic rubrics

A. Analytic rubrics
B. Holistic rubrics

A

A. Analytic rubrics

119
Q

Rates an activity in its entirety without regard to the separate pieces

A. Analytic rubrics
B. Holistic rubrics

A

B. Holistic rubrics

120
Q

Separates pieces of an activity individually and then adds all scores for a total rating

A. Analytic rubrics
B. Holistic rubrics

A

A. Analytic rubrics

121
Q

Is used when the components of an activity are too interrelated for easy division

A. Analytic rubrics
B. Holistic rubrics

A

B. Holistic rubrics

122
Q

Includes a method for both detailed feedback and bigger-picture evaluation

A. Detailed Rating Scale
B. Combination Rubrics
C. Total points/Analytic rubrics

A

B. Combination Rubrics

123
Q

Specific details that are marked to indicate strengths and weaknesses

A. Detailed Rating Scale
B. Combination Rubrics
C. Total points/Analytic rubrics

A

C. Total points/Analytic rubrics

124
Q

Steps to create a rubric (4)

A

1. Identify the task and define goals.
2. Determine the criteria or dimensions of quality.
3. Identify the performance levels or levels of mastery.
4. Write descriptors for each level.
125
Q

Measures the level of difficulty of each item, or determines if the wording is confusing

A

Item analysis

126
Q

Identifies the strengths and weaknesses of the course topics, the student abilities that need improvement, and the instructor's performance; varies for every activity or task to be accomplished

A

Rubric

127
Q

This helps in determining whether a certain topic needs reinforcement or not.

A

Item analysis

128
Q

Valuable, relatively easy, and a procedure that teachers can use to answer the questions below:
■ Is the wording of the question confusing?
■ Are the answer options unclear?
■ Were students given the right content to learn to successfully answer this question?
■ Was the content to learn easily accessible and clear?

A

Item analysis

129
Q

● A statistical test that is important in the development of test questions
● Evaluates a test based on item quality and the relationships between and among test items

A

Item analysis

130
Q

● Determines the ability of a student against a group of students
● Also identifies distractors that do not function and serve their purpose
● Especially appropriate for multiple-choice test questions

A

Item analysis

131
Q

● Used as a norm-referenced test (applied to pre-test data) and a criterion-referenced test (applied to post-test data)

A

Item analysis

132
Q

● Reviews the students' understanding and ability to answer a question or item, which may dictate the quality of each test item.
● Measures students' knowledge or comprehension accurately.

A

Purpose of item analysis

133
Q

● Assesses a question for its value, to be reused in a later test or eliminated.
● Serves as a guide for improving the test-construction skills of an instructor.
● Points to certain parts of the lesson that need particular emphasis for the learners.

A

Purpose of item analysis

134
Q

QUALITY OF TEST ITEMS (3)

A

● Effectiveness of Distractors
● Index of Discrimination (point biserial)
● Index of Difficulty (p-value)
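The difficulty index (p-value) and the upper-lower discrimination index named above are simple proportions; a Python sketch (all score data below are hypothetical):

```python
# Item-analysis sketch: difficulty index (p-value) and upper-lower
# discrimination index for a single test item. Scores are 1 for a
# correct response and 0 for an incorrect one; data are hypothetical.

def difficulty_index(item_scores):
    """Proportion of all students answering the item correctly
    (0 to 1); higher values mean an easier item."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(upper, lower):
    """Proportion correct in the upper group minus proportion correct
    in the lower group; ranges from -1 to +1, higher is better."""
    return sum(upper) / len(upper) - sum(lower) / len(lower)

# 8 of 10 students answered correctly -> difficulty index 0.8
p = difficulty_index([1, 1, 1, 1, 0, 1, 1, 1, 0, 1])
# Upper group 5/5 correct, lower group 3/5 correct -> index about 0.4
d = discrimination_index([1, 1, 1, 1, 1], [1, 0, 1, 0, 1])
```

A negative result from `discrimination_index` flags the problem items described in the cards that follow.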
135
Q

TRUE OR FALSE

The higher the discrimination index, the better the item

A

TRUE

136
Q

TRUE OR FALSE

If an item has a discrimination index below zero (0), this indicates that the item is good

A

FALSE

If an item has a discrimination index below zero (0), this indicates that there is a problem

137
Q

TRUE OR FALSE

Items that are too hard or poorly written may result in a negative discrimination index

A

TRUE

138
Q

Discrimination value:
0 =
between 0 and +1 =
between -1 and 0 =

A

0 = NO discrimination
between 0 and +1 = positive discrimination
between -1 and 0 = negative discrimination

139
Q

● Determines if the students have learned the concept being tested

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

C. Index of difficulty

140
Q

● Determines the effectiveness of the incorrect answers in judging the quality of multiple-choice items

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

A. Effectiveness of Distractors

141
Q

● Measures the percentage of students who get the item correct

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

C. Index of difficulty

142
Q

● Measures the relationship between students' performance on a test item and their overall test score

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

B. Index of discrimination

143
Q

● Incorrect answers are considered acceptable if several students select them; if students fail to select an incorrect answer, it is considered implausible, and hence the test item is too easy

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

A. Effectiveness of Distractors

144
Q

Relationship between the high scorers and the low scorers

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

B. Index of discrimination

145
Q

TRUE OR FALSE

The index of discrimination may be positive if more students in the low group got the correct answer, and negative if more students in the high group got the correct answer

A

FALSE

The index of discrimination is positive if more students in the high group got the correct answer, and negative if more students in the low group got the correct answer

146
Q

● The item difficulty is simply the mean score on each item: the relative frequency of answering the item correctly.

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

C. Index of difficulty

147
Q

TRUE OR FALSE

A good distractor attracts all students

A

FALSE

● A good distractor attracts more students in the lower group than in the upper group
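Checking distractor effectiveness amounts to tallying option choices in the upper and lower groups; a hypothetical Python sketch (the responses and keyed answer are made up for illustration):

```python
# Distractor-analysis sketch: count how often each option is chosen
# by the upper- and lower-scoring groups. A functioning distractor
# draws more lower-group than upper-group students; one that nobody
# picks is implausible. All responses below are hypothetical.

upper = ["B", "B", "B", "A", "B"]  # responses of high scorers
lower = ["A", "C", "B", "D", "C"]  # responses of low scorers
key = "B"                          # keyed (correct) answer

for option in "ABCD":
    n_upper = upper.count(option)
    n_lower = lower.count(option)
    if option == key:
        note = "keyed answer"
    elif n_upper + n_lower == 0:
        note = "implausible (never chosen)"
    elif n_lower > n_upper:
        note = "functioning distractor"
    else:
        note = "review this distractor"
    print(option, n_upper, n_lower, note)
```

In this made-up data, C and D behave as good distractors (chosen only by the lower group), while A attracts as many high as low scorers and would be reviewed.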
148
Q

Has a proportion range from 0 to a high of +1.0

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

C. Index of difficulty

149
Q

● This refers to how well the test differentiates between high and low scorers

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

B. Index of discrimination

150
Q

Has a proportion that ranges from +1 to -1

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

B. Index of discrimination
152
Q

A positive discrimination index indicates:
A negative discrimination index indicates:

A

A positive discrimination index indicates that most of the high-scoring students got the item correct.
A negative discrimination index indicates that most of the low-scoring students got the item correct.

153
Q

TRUE OR FALSE

A higher difficulty index indicates an easier item. A lower difficulty index indicates a more difficult item.

A

TRUE

154
Q

The quantity is mentioned, but no value judgment is made on the performance of the student

A. Measurement
B. Assessment
C. Evaluation

A

A. Measurement