ASSESSMENT AND EVALUATION Flashcards
defined as a
process of appraising
something or someone.
Assessment
made to
identify the level of
performance of an
individual.
Assessment
Assessment:________________ Evaluation:______________
Quantitative; Qualitative
Making a judgment about the value or worth of objects or
events, based on some measurement, to arrive at a
decision.
Evaluation
Performed to determine the
degree to which goals are
attained.
Evaluation
a process of collecting, reviewing, and using data for the purpose of improving current performance.
Assessment
described as an act of passing judgement on the basis of a set of standards
Evaluation
Assessment by nature is…
Diagnostic
Evaluation by nature is…
Judgemental
Provides feedback on performance and areas of improvement
Assessment
Determines the extent to which objectives are achieved
Evaluation
Purpose of Assessment is…
Formative
Purpose of Evaluation is…
Summative
Orientation of Assessment…
Process Oriented
Orientation of Evaluation…
Product Oriented
a chart that spells out the content and level of
knowledge to be tested; it can be general or specific, based on the
teacher's preference.
Examination blueprint
Factors that determine the number of items in the
examination:
a. amount of material taught
b. type of test question used
c. amount of time available for testing
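To make the blueprint idea concrete, here is a minimal sketch of one possible representation; the content areas, cognitive levels, and item counts below are invented for illustration:

```python
# Hypothetical examination blueprint: content areas crossed with
# cognitive levels; each cell holds the planned number of items.
blueprint = {
    "Fundamentals of Nursing": {"recall": 5, "application": 3, "analysis": 2},
    "Pharmacology":            {"recall": 4, "application": 4, "analysis": 2},
    "Health Assessment":       {"recall": 3, "application": 4, "analysis": 3},
}

# The total item count follows from the blueprint and is constrained by
# the factors above: material taught, question type, and testing time.
total_items = sum(sum(levels.values()) for levels in blueprint.values())
print(total_items)  # 30
```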
learner is compared with a reference
group of learners
Relative terms (norm reference)
learner is compared against well-defined
performance criteria
Absolute terms (criterion reference)
CRITERIA FOR SELECTION OF EVALUATIVE
DEVICES/TOOL:
Validity
Reliability
Objectivity
Relevance
Practicality
the degree of accuracy with which a test measures what it
intends to measure.
Validity
the adequacy with which the test items measure the
content areas.
Content validity
the extent to which a relationship exists
between the test scores and later success.
Predictive validity
extent to which a relationship exists between
test scores and an accepted contemporary criterion of performance on
the variable the test is supposed to assess.
Concurrent validity
the consistency with which the test measures what it
intends to measure.
Reliability
Dimensions of reliability:
a. Examinee/students
b. Examiner/scorer
c. Test content
d. Time
e. Situation
a test is administered twice; the correlation between the scores is
an estimate of temporal reliability.
Test-retest method
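As a rough illustration (not part of the card), the test-retest coefficient is simply the correlation between the two administrations; the scores below are invented:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores of ten learners on two administrations of the same test.
first  = [78, 85, 62, 90, 71, 88, 66, 95, 80, 74]
second = [75, 88, 65, 92, 70, 85, 70, 93, 78, 72]

# Temporal reliability is estimated by the correlation between the two sets.
print(round(correlation(first, second), 2))
```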
two alternate or equivalent forms of a test are constructed and
administered 1 to 3 weeks apart.
Alternate or equivalent forms method
odd-numbered and even-numbered items are scored as separate
tests.
Split-half method
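A sketch of split-half scoring with invented item responses; applying the Spearman-Brown correction to estimate full-test reliability is standard practice, though not stated in the card:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical item scores (1 = correct, 0 = wrong) for six learners.
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 0, 1],
]

# Score odd-numbered and even-numbered items as two separate half-tests.
odd  = [sum(row[0::2]) for row in responses]
even = [sum(row[1::2]) for row in responses]

half_r = correlation(odd, even)
# Spearman-Brown prophecy formula: reliability of the full-length test.
full_r = 2 * half_r / (1 + half_r)
print(round(full_r, 2))
```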
the degree of agreement between the judgments made by independent and competent observers as to whether or not a learner's test performance meets the criteria stated in a learning objective.
Objectivity
the degree to which the criteria established for selection of questions conform with the aims of the measuring instrument.
Relevance
- the convenience in using the test instrument;
- refers to the development of an evaluative device capable of being administered and scored with reasonable ease, within the limits of time and resources imposed by circumstances.
Practicality
Consider practicality as to:
a. Test construction
b. Administration
c. Scoring
d. No. of examinees
KNOWLEDGE - Formal Evaluation Instruments:
Objective Examination
- Selection Type, Supply Type, Interpretative Exercises
Subjective Examination
- Free response, restricted response, project assignment
KNOWLEDGE - Non-Formal Evaluation Instruments:
Practical Exam
Oral Exam
observational reports
ATTITUDES - Direct Methods:
Questionnaires
Semantic Differential
Attitude Goals
Observation Rating Scale
ATTITUDES - Indirect Methods:
Test of judgment
Test of memory and perception
Information test
contains the stem, the key, and the distractors
Multiple choice
contains the premises/hypotheses and the
responses/alternatives, which include the jokers.
Matching type
responses which do not match any premise
Jokers
there are only two possible answers:
true or false, correct or incorrect, right or wrong.
Alternate response
examinee is presented with a direct question
or an incomplete statement
Supply Type
Examples of Supply Type:
- Fill in the blanks
- Definition of terms
Interpretation of graphs, tables, pictures, and situational analysis
Interpretative Exercises
the learners perform the skills to be
evaluated, in a simulated or actual setting
Practical Exam
assigning a learner a task or project
to complete and then evaluating the learner on the basis of the product of the completed performance.
Project assignment
usually a six-step bipolar adjective
scale indicating direction and intensity
Semantic Differential
Likert Scale
Attitude Goals
observation of cognitive and affective behavior
at work, rated on a scale of observable behaviors.
Observational Rating Scale
Free answer testing
Test of judgment
based on the assumption that what is perceived and remembered is influenced by one’s attitude.
Test of memory and perception
based on the assumption that in cases of uncertainty, people tend
to guess in the direction of their attitude.
Information test
the assignment of numbers to objects or events according to logically accepted rules
Measurement
Five Basic components of Evaluation:
Audience
Purpose
Questions
Scope
Resources
the persons or groups for whom the evaluation is being conducted
Audience
to decide whether to continue a particular education program or to determine the effectiveness of the teaching process
Purpose
directly related to the purpose, are specific, and measurable
Questions
determined in part by the purpose for conducting the evaluation and in part by available resources
Scope
include time, expertise, personnel, materials, equipment, and facilities
Resources
Evaluation Models:
to make adjustments in an educational activity as soon as they are needed, whether those adjustments are in personnel, materials, facilities, learning objectives, or even the health professional educator's attitude.
Process (Formative) evaluation
Evaluation Models:
to determine whether learners have acquired the knowledge or skills taught during the learning experience
Content evaluation
Evaluation Models:
to determine the effects or outcomes of teaching efforts; its intent is to summarize what happened as a result of education
Outcome (Summative) evaluation
Evaluation Models:
to determine the relative effects of education on the institution and the community; the purpose is to obtain information that will help decide whether continuing an educational activity is worth its cost.
Impact evaluation
Evaluation Models:
designed and conducted to assist an audience in judging and improving the worth of some object/educational program
Program evaluation
Components of Evaluation Design:
Evaluation Structure
Evaluation Method
Evaluation Instruments
all evaluations should be systematic and carefully and thoroughly planned before they are conducted
Evaluation Structure
include those actions that are undertaken to carry out the evaluation according to the design structure; all evaluation methods deal with data and data collection
Evaluation Methods
use existing instruments where possible, because instrument development requires expertise, time, and the expenditure of resources; a newly developed instrument requires rigorous testing for reliability and validity.
Evaluation Instruments
Three methods to minimize the effects of unexpected events:
- Conduct a pilot test first
- Include extra time
- Keep a sense of humor
basis for interpreting the results obtained from a test
Reference system
may measure the acquisition of skills and knowledge from multiple sources such as notes, texts and syllabi.
Norm-reference system/relative standard
measure performance on specific concepts and are often used in a pre-test/post-test format
Criterion reference system/absolute standard
refers to the process of adjusting student grades in order to ensure that a test or assignment has the proper distribution throughout the class
Grading on a curve (SD)
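One common way to grade on a curve is by standard-deviation units around the class mean; this sketch uses invented scores and cutoffs, and real curving schemes vary:

```python
from statistics import mean, stdev

scores = [55, 62, 70, 74, 78, 81, 85, 92]  # hypothetical raw scores
m, sd = mean(scores), stdev(scores)

def curved_grade(score):
    # Letter grade assigned by how many SDs the score lies from the mean.
    z = (score - m) / sd
    if z >= 1.5:
        return "A"
    if z >= 0.5:
        return "B"
    if z >= -0.5:
        return "C"
    if z >= -1.5:
        return "D"
    return "F"

for s in scores:
    print(s, curved_grade(s))
```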
the scores are added up and divided by the number of scores. The mean is sensitive to extreme scores when population samples are small.
combining scores to obtain mean score
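A quick illustration of that sensitivity, with made-up scores:

```python
from statistics import mean

scores = [80, 82, 85, 88]      # hypothetical small class
print(mean(scores))            # 83.75

# A single extreme score shifts the mean noticeably in a small sample.
print(mean(scores + [20]))     # 71.0
```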
calculated based on the number of questions correct out of the MPL required, not the total number of questions
calculating minimum pass level (MPL)
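A sketch of one reading of this card: the MPL fixes how many correct answers are required, and a learner's result is judged against that number rather than the test total. The 60% MPL is an assumption:

```python
# Hypothetical values: a 100-item test with a minimum pass level of 60%.
total_items = 100
mpl_items = round(total_items * 0.60)  # 60 correct answers required

correct = 72
# The score is expressed against the MPL, not against the total items.
print(f"correct = {correct}, MPL = {mpl_items}, ratio = {correct / mpl_items:.2f}")
print("PASS" if correct >= mpl_items else "FAIL")
```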
an important tool to increase test effectiveness. It provides statistics on overall performance, test quality, and individual questions. Each item's contribution is analyzed and assessed. To write effective items, it is necessary to examine whether they are measuring the fact, idea, or concept for which they were intended.
item analysis
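A minimal item-analysis sketch computing two standard statistics per question: the difficulty index (proportion answering correctly) and a discrimination index contrasting upper and lower scorers. The response data are invented:

```python
# Hypothetical responses: rows are learners, columns are items (1 = correct).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
]

n = len(responses)
totals = [sum(row) for row in responses]
ranked = sorted(range(n), key=lambda i: totals[i], reverse=True)
upper, lower = ranked[: n // 2], ranked[n // 2:]

for item in range(len(responses[0])):
    # Difficulty index: proportion of all learners who got the item right.
    p = sum(row[item] for row in responses) / n
    # Discrimination index: upper-group minus lower-group proportion correct.
    d = (sum(responses[i][item] for i in upper) / len(upper)
         - sum(responses[i][item] for i in lower) / len(lower))
    print(f"item {item + 1}: difficulty = {p:.2f}, discrimination = {d:+.2f}")
```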