UNIT 4 Flashcards

1
Q

As teachers become more familiar with data-driven instruction, they are making decisions about what and how they teach based on the information gathered from their students.

A

o (1) Teachers first find out what their students know and what they do not know.

o (2) Teachers determine how to bridge that gap.

2
Q

gives a value judgment to assessment through the qualitative measure of the prevailing situation

A. Measurement
B. Assessment
C. Evaluation

A

C. Evaluation

3
Q

determines desired dimensions of defined
characteristics

A. Measurement
B. Assessment
C. Evaluation

A

A. Measurement

4
Q

measures the performance of an individual from
a known objective

A. Measurement
B. Assessment
C. Evaluation

A

B. Assessment

5
Q

o involves a quantitative amount of measurement using a standard instrument
o quantity is mentioned without making a value judgment on the performance of the student

A. Measurement
B. Assessment
C. Evaluation

A

A. Measurement

6
Q

o a form of feedback on students’ learning
o assists a teacher in determining what, how much, and how well the students are learning

A. Measurement
B. Assessment
C. Evaluation

A

B. Assessment

7
Q

o analyzes the result obtained from measurement
based on predetermined standards

A. Measurement
B. Assessment
C. Evaluation

A

C. Evaluation

8
Q

raw score, percentile rank, and standard score

A. Measurement
B. Assessment
C. Evaluation

A

A. Measurement

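
Card 8 mentions raw scores, percentile ranks, and standard scores. As an illustrative aside (the scores below are made up, not from the deck), a raw score converts to a standard score (z) like this:

```python
# Convert a raw score to a standard (z) score; the scores are made-up.
scores = [60, 70, 75, 80, 90]

mean = sum(scores) / len(scores)  # 75.0
# Population standard deviation of the group
sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5  # 10.0

def z_score(raw):
    """How many standard deviations a raw score sits above the group mean."""
    return (raw - mean) / sd

z = z_score(90)  # 1.5 -> above the group mean
```

Unlike the raw score alone, the z-score carries a value-free comparison to the group, which is why it belongs under measurement rather than evaluation.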
9
Q

o measures the educational achievements of the
students

A. Measurement
B. Assessment
C. Evaluation

A

C. Evaluation

10
Q

a useful arm in improving teaching and learning

A. Measurement
B. Assessment
C. Evaluation

A

B. Assessment

11
Q

quantitative instruments are used: tests,
aptitude tests, inventories, questionnaires, etc.

A. Measurement
B. Assessment
C. Evaluation

A

B. Assessment

12
Q

What are the 3 essential characteristics of measurement?

A

Reliability
Validity
Objectivity

13
Q

TRUE OR FALSE

All assessments are tests, but not all tests are assessments

A

FALSE

All tests are assessments, but not all assessments are tests.

Assessments can also be carried out through verbal and skills testing.

14
Q

TRUE OR FALSE

A test is a special form of assessment and can be given at the end of a lesson or unit

A

TRUE

15
Q

Content-oriented:

Objective-oriented:

A

Content-oriented: Measurement

Objective-oriented: Evaluation

16
Q

It is an end in itself:

It is a means, and not an end in itself:

A

It is an end in itself: Evaluation

It is a means, and not an end in itself: Measurement

17
Q

Deduce inferences from the evidence:

To gather evidence:

A

Deduce inferences from the evidence: Evaluation

To gather evidence: Measurement

18
Q

TRUE OR FALSE

Measurement is an essential part of education

A

FALSE

Measurement is not necessarily an essential part of education

EVALUATION is an integral and necessary part of education

19
Q

Acquaints with a situation:

Acquaints about the entire situation:

A

Acquaints with a situation: Measurement

Acquaints about the entire situation: Evaluation

20
Q

TRUE OR FALSE

Evaluation can be conducted any time

A

FALSE

Measurement can be conducted at any time

EVALUATION is a continuous process

21
Q

emphasis is upon a single aspect of subject-matter achievement or specific skills and abilities.

A

Measurement

22
Q

emphasis is upon broad personality changes and major objectives of an educational program.

A

Evaluation

23
Q

o systematic, data-based tests that measure what and how well the students have learned
o determines student proficiency and mastery of the content and is used for comparison against a certain standard

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

24
Q

o spontaneous and flexible form of assessment that can easily be incorporated into day-to-day classroom activities and measures students’ performance and progress.

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

25
Q

Makes use of Rubrics

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

26
Q

o norm-referenced by comparing different scores within the same group

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

27
Q

o criterion-referenced: the process of evaluating and grading the learning of students against pre-specified qualities or criteria without reference to the achievement of others

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

28
Q

more likely to occur in a classroom learning environment, helping teachers acquire information on a continuing and formal basis.

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

29
Q

can take place in any student-teacher interaction;
has the potential to occur at any time and can involve the whole class, a small group, or one-on-one interaction

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

30
Q

Day-to-day activities
Qualitative

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

31
Q

beyond the classroom environment

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

32
Q

Exams, diagnostics tests, achievement tests, aptitude tests, intelligence tests,
quizzes

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

33
Q

Checklist, observation, portfolio, rating scale,
records, interviews, journal writing

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

34
Q

Data-based test
knowledge testing

Systematic and structured

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

35
Q

Mathematically computed and summarized

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

36
Q

Progress of every student by using actual works

A. Formal Assessment
B. Informal Assessment

A

B. Informal Assessment

37
Q

Used for comparison against a certain standard

A. Formal Assessment
B. Informal Assessment

A

A. Formal Assessment

38
Q

Determine whether learning is taking place

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

39
Q

Provide information to the students, parents, and
administrators on the level of accomplishment attained

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

40
Q

Occur at the end of the instruction

Primarily retrospective

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

41
Q

Provide feedback on how things are going

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

42
Q

Concerned with purposes, progress, and outcomes of the teaching-learning
process

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

43
Q

Conducted during teaching or instruction

Primarily prospective

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

44
Q

Determine if learning is sufficiently complete

Determine how well things went

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

45
Q

Observation, oral questioning, assignments, quizzes, discussions, reflection,
research proposal, peer or self-assessment

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

46
Q

Help students perform well at the end of the program

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

47
Q

Evaluate the effectiveness and improvement of
teaching

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

48
Q

Unit test, final examinations, comprehensive projects,
research paper, presentations, project, portfolio

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

49
Q

Gathers detailed information but is narrow in the scope of the content

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

50
Q

Gathers less detailed information but is broader in the scope of the content

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

51
Q

○ Informing instruction
○ Designed to quickly inform instruction by providing
specific and immediate feedback through daily ongoing instructional strategies that are student and classroom-centered

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

52
Q

○ Summary of learning
○ Provides a summary of student learning; this
assessment evaluates student learning, knowledge, proficiency, or success at the conclusion of an instructional period

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

53
Q

○ Gives a final measure of value, effect, or quality

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

54
Q

○ Helps students identify their strengths and weaknesses so they can perform well at the end of the program

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

55
Q

○ Often completed at the end, or summation, of instruction, intervention, or program activities

A. Formative Assessment
B. Summative Assessment

A

B. Summative Assessment

56
Q

○ Primarily prospective: shapes direction and feeds the process of ongoing adjustment and improvement
○ Occurs before or during instruction, the implementation of intervention, or a program

A. Formative Assessment
B. Summative Assessment

A

A. Formative Assessment

57
Q

Interprets scores in terms of an absolute standard

A. Norm-reference
B. Criterion-reference

A

B. Criterion-reference

58
Q

Emphasizes thinking and application of knowledge

Content-centered

Assess higher-level thinking and writing skills

A. Norm-reference
B. Criterion-reference

A

B. Criterion-reference

59
Q

Relative ranking of students

A. Norm-reference
B. Criterion-reference

A

A. Norm-reference

60
Q

Percentile rank, normal curve

NSAT, College Entrance Examination, National Achievement Test, IQ
Test, Cognitive Ability test

A. Norm-reference
B. Criterion-reference

A

A. Norm-reference

61
Q

Measure how well students have mastered a particular body of
knowledge

A student competes against himself or herself

A. Norm-reference
B. Criterion-reference

A

B. Criterion-reference

62
Q

Evaluate the effectiveness of the teaching program and
student’s preparedness

A. Norm-reference
B. Criterion-reference

A

A. Norm-reference

63
Q

Focus too heavily on memorization and routine
procedures

Examinee-centered

Highlight achievement differences

A. Norm-reference
B. Criterion-reference

A

A. Norm-reference

64
Q

Monitor students’ performance in their day-to-day activities

Setting performance standards

A. Norm-reference
B. Criterion-reference

A

B. Criterion-reference

65
Q

Test item analysis

Domain-referenced tests, competency tests, basic
skills tests, mastery tests, performance or assessments, objective-referenced tests, authentic assessments, standards-based tests

A. Norm-reference
B. Criterion-reference

A

B. Criterion-reference

66
Q

Students compete against each other

A. Norm-reference
B. Criterion-reference

A

A. Norm-reference

67
Q
○ Designed to compare and rank test-takers in relation to one another
○ Determines students’ placement on a normal distribution curve

A. Norm-reference
B. Criterion-reference

A

A. Norm-reference

68
Q

○ Measures the student’s performance based on mastery of a specific set of skills
○ Measures what the student knows and doesn’t know at the time of assessment

A. Norm-reference
B. Criterion-reference

A

B. Criterion-reference

69
Q

○ They don’t test problem solving, decision making, judgment, or social skills
○ Based on judgments about examinees
○ Judges categorize examinees according to performance level

A. Norm-reference
B. Criterion-reference

A

A. Norm-reference

70
Q

Students’ performance is not compared to other students’ performance on the same assessment

A. Norm-reference
B. Criterion-reference

A

B. Criterion-reference

71
Q

○ Evaluates whether students have learned a specific body of knowledge or acquired a specific skill set taught in a course, academic program, or content area

A. Norm-reference
B. Criterion-reference

A

B. Criterion-reference

72
Q

refers to the accuracy

A. Validity
B. Reliability

A

A. Validity

73
Q

● Repeated results are consistent

A. Validity
B. Reliability

A

B. Reliability

74
Q

● “Consistency or accuracy with which the scores measure a particular cognitive ability of interest” - Ebel and Frisbie (1991)

A. Validity
B. Reliability

A

A. Validity

75
Q

refers to the consistency of the measure or
the repeatability

A. Validity
B. Reliability

A

B. Reliability

76
Q

● Degree of relation between what the test measures and what is supposed to measure

A. Validity
B. Reliability

A

A. Validity

77
Q

TRUE OR FALSE

A reliable test is always valid but valid test may
not be reliable

A

FALSE

A test may be reliable but not valid, but a valid test is always reliable

78
Q

TRUE OR FALSE

The test cannot be considered valid unless the measurements from it are reliable. Likewise, results from a test can be reliable and not necessarily valid.

A

TRUE

79
Q

Determine the consistency of the raters.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

B. Inter-rater reliability

80
Q

Determine the consistency of the test results across items.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

D. Split-half reliability

81
Q

Determine the consistency of the test across time.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

A. Test-retest reliability

82
Q

compares two different tests with the same content, quality, and difficulty level that are administered to the same person.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

C. Parallel-forms reliability

83
Q

The test is administered twice at a different point in time.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

A. Test-retest reliability

84
Q

they compare and correlate the scores of two or more raters/judges (or the two teachers)

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

B. Inter-rater reliability

85
Q

Determine the consistency of the test content.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

C. Parallel-forms reliability

86
Q

we split the test into two halves; one half may be composed of the even-numbered questions and the other of the odd-numbered questions. Administer both halves to the same individuals, and repeat this for a large group of individuals. Then find the correlation between the scores for the two halves. The higher the correlation between the two halves, the higher the consistency of the test.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

D. Split-half reliability
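
The split-half procedure on this card can be sketched in a few lines. This is only an illustrative computation; the item scores and the plain Pearson-correlation helper below are made up, not from the deck:

```python
# Illustrative split-half reliability check (made-up scores).
# Each row: one student's per-item scores (1 = correct, 0 = wrong).
item_scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1, 1, 1, 0],
]

def pearson_r(xs, ys):
    """Pearson correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Odd-numbered items form one half, even-numbered items the other.
odd_half = [sum(row[0::2]) for row in item_scores]
even_half = [sum(row[1::2]) for row in item_scores]

half_r = pearson_r(odd_half, even_half)
# Spearman-Brown correction estimates full-test reliability from the half-test r.
full_r = 2 * half_r / (1 + half_r)
```

The Spearman-Brown step is a standard companion to split-half scoring: since each half is only half as long as the real test, the raw half-to-half correlation understates the full test's consistency.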

87
Q

a person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today.

A. Test-retest reliability
B. Inter-rater reliability
C. Parallel-forms reliability
D. Split-half reliability

A

A. Test-retest reliability

88
Q

Reliability refers to the consistency of the measure or the repeatability

A. Validity
B. Reliability

A

B. Reliability

89
Q

Deals with whether the assessment is measuring the correct construct (trait/attribute/ability/skill)

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

C. Construct Validity

90
Q

Deals with whether the assessment scores obtained for participants are related to a criterion outcome measure

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

A. Criterion Validity

91
Q

The test item must include the factors that make up psychological constructs like intelligence, critical thinking, reading comprehension, or mathematical
aptitude.

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

C. Construct Validity

92
Q

○ The general description of the student’s performance

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

C. Construct Validity

93
Q

● Deals with whether the assessment content and composition is appropriate given what is being measured (e.g., does the test reflect the knowledge/skills required to do a job or demonstrate that one grasps the course material)

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

B. Content Validity

94
Q

The test to be measured is compared to the accepted standards

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

A. Criterion Validity

95
Q

Related to but not to be confused with “face validity”

A. Criterion Validity
B. Content Validity
C. Construct Validity

A

B. Content Validity

96
Q

Steps in constructing a test

A

1. Determine the purpose of the test.

2. Identify the learning outcomes to be measured in terms of specific, observable behavior.

3. Outline the topics to be measured.

4. Prepare a Table of Specifications (TOS) as a basis for preparing the test.

5. Decide on the type of test.

6. Decide on the length of the test.

97
Q

(5) purposes of TOS

A

● Serves as a guide for teachers in translating their instructional objectives into test items.
● A guide to item construction on the relative importance of each component.
● Improves the validity of the teachers’ evaluation.
● Useful for teachers to support their professional judgment in creating a test.
● Provides a framework for organizing information.

98
Q

How to construct a TOS

A
  1. Determine the desired number of items.
  2. List the topics with the corresponding allocation of time (weights).
  3. Determine the length of time spent in teaching for each essential topic.
  4. Determine the total number of items per topic.
  5. Determine the percentage allocation per domain.
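
The steps above boil down to a small proportional computation: items are allocated to each topic in proportion to the time spent teaching it. The topics, hours, and item count below are made-up examples, not from the deck:

```python
# Allocate test items per topic in proportion to teaching time (made-up data).
total_items = 50
hours = {"Measurement": 4, "Assessment": 6, "Evaluation": 4, "Item analysis": 6}

total_hours = sum(hours.values())  # 20
allocation = {
    topic: round(total_items * h / total_hours)  # weight = hours / total hours
    for topic, h in hours.items()
}
# e.g. Assessment taught for 6 of 20 hours -> 6/20 of 50 items = 15 items
```

With uneven hours the rounded counts may not sum exactly to the target, so in practice a teacher adjusts the last topic's count to make the totals match.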
99
Q

● Composed of two (2) sets of terms, events, phrases, definitions, statements, etc.
● The two sets can be interchanged.

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

C. Matching Type

100
Q

● Measures the students’ ability to judge whether a statement is true or false

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

A. True or False

101
Q

● Assesses students on the essential outcomes

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

A. True or False

102
Q

● Each premise matches with the corresponding response

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

C. Matching Type

103
Q

● A choice test if you want to go beyond recall of information

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

A. True or False

104
Q

Easily prepared and covers a wide range of topics

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

A. True or False

105
Q

● Measures the student’s ability to analyze, synthesize, or evaluate
● Subjective type of test

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

D. Essay

106
Q

● A factual statement
● Provides a stem that may be a question or an incomplete
statement

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

B. Multiple Choice

107
Q

● Takes the form of description, explanation, discussion, comparison, illustration, criticism, etc.

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

D. Essay

108
Q

A multiple-choice item consists of how many responses at maximum?

A

5

109
Q

● Consists of four or five alternative responses (maximum of
5 responses only)

A. True or False
B. Multiple Choice
C. Matching Type
D. Essay

A

B. Multiple Choice

110
Q

Rules in constructing True or False items

A

● Do not provide extraneous clues (all, never, absolutely, none, may, perhaps, sometimes, could).
● Statements must be irrevocably true or false.
● Avoid the use of negative statements and double negatives.
● Limit statements to a single significant concept.
● Keep statements short and use simple language structure.
● Opinion statements should be attributed to some source.
● Word each item concisely.

112
Q

Rules in constructing MULTIPLE CHOICE

A

● Avoid tricky questions.
● All items should be independent.
● The stem should only contain one main idea.
● State the stem in positive form, whenever possible.
○ Avoid negative statements.
● Use negative words sparingly, and UNDERLINE them when used.
● The distractors must be plausible and attractive to the
uninformed.

● State the problem clearly in the stem/question.
● All questions should be relevant and not far-fetched.
● The instruction must be clear, unambiguous and precise.
● The vocabulary level and the difficulty of the item should
correspond to the level of the learners.
● Avoid the use of clues that can answer the other test item.

113
Q

Rules in constructing matching type

A

● Choose homogeneous material for each set of items in the matching cluster.
● Keep the list of items relatively short.
● Avoid “perfect matching” (e.g., 5 questions = 5 answers).
● Make the number of responses greater or fewer than the number of premises.
● Arranging the premises or responses in alphabetical order prevents giving away clues.
● Use the longer phrases for premises and the shorter ones for responses.

114
Q

Rules in constructing essay

A

● Be specific.
● Instructions must be clear, unambiguous, and precise.
● Questions must be clear to measure the learning outcomes
● Use appropriate verbs for the expected level of thinking.
● Allow sufficient time and indicate time limits for every question.

115
Q

● Provides a feedback mechanism on the performance of a certain task, assignment or group project

A

Rubric

116
Q

E.g. checklists, simple rating scales, holistic rating scales, task-specific rubrics

A. Analytic rubrics
B. Holistic rubrics

A

B. Holistic rubrics

117
Q

Is more product-oriented

A. Analytic rubrics
B. Holistic rubrics

A

B. Holistic rubrics

118
Q

E.g. detailed rating scale, combination rubrics, total points/analytic rubrics

A. Analytic rubrics
B. Holistic rubrics

A

A. Analytic rubrics

119
Q

Rates an activity in its entirety without regard to the separate pieces

A. Analytic rubrics
B. Holistic rubrics

A

B. Holistic rubrics

120
Q

Separates pieces of an activity individually and then adds all scores for a total rating

A. Analytic rubrics
B. Holistic rubrics

A

A. Analytic rubrics

121
Q

Is used when the components of an activity are too interrelated for easy division

A. Analytic rubrics
B. Holistic rubrics

A

B. Holistic rubrics

122
Q

Includes methods for both detailed feedback and bigger-picture evaluation

A. Detailed Rating Scale
B. Combination Rubrics
C. Total points/Analytic rubrics

A

B. Combination Rubrics

123
Q

specific details that are marked to indicate strengths and weaknesses

A. Detailed Rating Scale
B. Combination Rubrics
C. Total points/Analytic rubrics

A

C. Total points/Analytic rubrics

124
Q

Steps to create a rubric (4)

A

Identify task and define goals

Determine the criteria or dimensions of quality

Identify the performance level or levels of mastery

Write descriptors for each level

125
Q

checks the level of difficulty of each item and determines whether the wording is confusing.

A

Item analysis

126
Q

Identifies the strengths and weaknesses of the course topics that need improvement, in terms of the students’ abilities and the instructor’s performance

Varies for every activity or task to be accomplished

A

RUBRIC

127
Q

This helps in determining whether a certain topic needs reinforcement or not.

A

Item analysis

128
Q

A valuable and relatively easy procedure that teachers can use to answer the list of questions below:
■ Is the wording of the question confusing?
■ Are the answer options unclear?
■ Were students given the right content to learn to
successfully answer this question?
■ Was the content to learn easily accessible and
clear?

A

Item analysis

129
Q

● A statistical technique that is important in the development of test questions
● It evaluates a test based on item quality and the relationships between and among test items

A

Item analysis

130
Q

● Determines the ability of a student against a group of students
● Also identifies distractors that do not function and serve its purpose
● It is essentially appropriate in multiple choice format for test questions

A

Item analysis

131
Q

● Used as a norm-referenced (applied for pre-test data) and criterion-referenced test (applied for post-test data)

A

Item analysis

132
Q

● Reviews the students’ understanding and ability to answer a question or item which may dictate the quality of each test item.

● Measures students’ knowledge or comprehension accurately.

A

Purpose of Item analysis

133
Q

● Assesses whether a question is valuable enough to be reused in a later test or should be eliminated
● Serves as a guide for improving the test construction skills of an instructor.
● Leads to certain parts of the lesson that need certain emphasis for the learners.

A

Purpose of Item analysis

134
Q

QUALITY OF TEST ITEMS (3)

A

● Effectiveness of Distractors
● Index of Discrimination (point biserial)
● Index of Difficulty (p-value)

135
Q

TRUE OR FALSE

The higher the discrimination index, the better the item

A

TRUE

136
Q

TRUE OR FALSE

If an item has a discrimination index below zero (0), this indicates that the item is good

A

FALSE

If an item has a discrimination index below zero (0), this also indicates that there is a problem

137
Q

TRUE OR FALSE

Items that are too hard or poorly written may result in a negative discrimination index

A

TRUE

138
Q

discrimination value

0 =
between 0 and +1 =
between -1 and 0 =

A

0 = NO discrimination value

between 0 and +1 = positive discrimination

between -1 and 0 = negative discrimination
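
The three ranges on this card can be illustrated with the common upper-lower group computation. The counts below are made-up examples, not from the deck:

```python
# Discrimination index via the upper-lower method (made-up counts).
# D = (correct in upper group - correct in lower group) / group size
def discrimination_index(upper_correct, lower_correct, group_size):
    return (upper_correct - lower_correct) / group_size

d_pos = discrimination_index(9, 3, 10)   # 0.6  -> positive discrimination
d_none = discrimination_index(5, 5, 10)  # 0.0  -> no discrimination value
d_neg = discrimination_index(3, 9, 10)   # -0.6 -> negative discrimination
```

A positive D means the item separates high scorers from low scorers; zero means it separates nobody; negative means low scorers outperformed high scorers on the item, which signals a flawed item.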

139
Q

● Determines if the students have learned the concept being tested

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

C. Index of difficulty

140
Q

● Determines the effectiveness of the incorrect answers to the quality of multiple-choice items/answers.

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

A. Effectiveness of Distractors

141
Q

● Measures the percentage of students who get the item correctly

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

C. Index of difficulty

142
Q

● Measures the relationship between students’ performance on a test item and the overall score of students

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

B. Index of discrimination

143
Q

● Incorrect answers are considered acceptable if several students select them; if students fail to select an incorrect answer, it is considered implausible, and hence the test item is too easy

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

A. Effectiveness of Distractors

144
Q

relationship between the high scorer and the low scorer

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

B. Index of discrimination

145
Q

TRUE OR FALSE

Index of discrimination may be positive if more students in the low group got the correct answer and negative if more students in the high group got the correct answer

A

FALSE

Index of discrimination may be positive if more students in the high group got the correct answer and negative if more students in the low group got the correct answer

146
Q

● The item difficulty is simply the mean score on each item, i.e., the relative frequency of answering the item correctly.

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

C. Index of difficulty

147
Q

TRUE OR FALSE

A good distractor attracts all students

A

FALSE

● A good distractor attracts more students in the lower group than in the upper group
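
A quick sketch of how this check might look in code. The option counts are made up, and the simple rule used here — a distractor "functions" if somebody chooses it and it draws more lower-group than upper-group students — is an assumption for illustration:

```python
# Distractor analysis for one multiple-choice item (made-up counts).
# Correct answer is "B"; options A, C, and D are the distractors.
upper = {"A": 2, "B": 20, "C": 3, "D": 0}  # high-scoring group
lower = {"A": 8, "B": 9, "C": 7, "D": 0}   # low-scoring group

functioning = {
    opt: (upper[opt] + lower[opt] > 0)   # somebody chose it...
    and (lower[opt] > upper[opt])        # ...mostly from the lower group
    for opt in "ACD"
}
# "D" attracted nobody, so it is implausible and should be revised.
```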

148
Q

Has a proportion that ranges from 0 to a high of +1.0

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

C. Index of difficulty

149
Q

● This refers to how well the test differentiates between high and low scorers

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

B. Index of discrimination

150
Q

Has proportion that ranges from +1 to -1

A. Effectiveness of Distractors
B. Index of discrimination
C. Index of difficulty

A

B. Index of discrimination

152
Q

A positive discrimination index indicates:

A negative discrimination index indicates:

A

Positive: more of the high-scoring students than the low-scoring students got the item correct

Negative: more of the low-scoring students than the high-scoring students got the item correct

153
Q

TRUE OR FALSE

A higher difficulty index indicates an easier item. A lower difficulty index indicates a difficult item.

A

TRUE
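
The difficulty index (p-value) behind this card is just the proportion of examinees who answered the item correctly. An illustrative computation with made-up counts:

```python
# Difficulty index (p-value): proportion who answered correctly (made-up data).
def difficulty_index(num_correct, num_examinees):
    return num_correct / num_examinees

easy_item = difficulty_index(45, 50)  # 0.9 -> higher p, easier item
hard_item = difficulty_index(10, 50)  # 0.2 -> lower p, harder item
```

This is why the index ranges from 0 (nobody correct, hardest) to +1.0 (everybody correct, easiest).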

154
Q

The quantity is mentioned, but no value judgment is made on the performance of the student

A. Measurement
B. Assessment
C. Evaluation

A

A. Measurement