Chapter 12 Flashcards
Assessment
The process of observing a sample of students’ behaviour and drawing inferences about their knowledge and abilities
Informal assessment
Assessment that results from teachers’ spontaneous day-to-day observations of how students behave and perform in class
Formal assessment
A systematic attempt to determine what students have learned. It is typically planned in advance and used for a specific purpose
Paper-pencil assessment
Assessment in which students provide written responses to written items (e.g., a test or exam)
Performance assessment
Assessment in which students demonstrate their knowledge and skills in a non-written fashion (e.g., an oral presentation)
Traditional assessment
Assessment that focuses on measuring basic knowledge and skills in relative isolation from tasks more typical of the outside world (e.g., tests and quizzes)
Authentic assessment
Assessment of students’ knowledge and skills in an authentic, “real-life” context that is an integral part of instruction rather than a separate activity
Standardized test
A test developed by test construction experts and published for use in many different schools and classrooms
Teacher-developed assessment instrument
An assessment tool developed by an individual teacher for use in their own classroom
Formative evaluation
An evaluation conducted during instruction to facilitate students’ learning
Summative evaluation
An evaluation conducted after instruction is completed and used to assess students’ final achievement
How do assessments in the classroom promote learning?
- motivate students to learn the material
- serve as mechanisms for review
- influence how students cognitively process the material
- serve as learning experiences in themselves
- provide feedback about learning
RSVP characteristics
Reliability
Standardization
Validity
Practicality
Reliability
The extent to which an assessment instrument yields consistent information about the knowledge, skills or abilities one is trying to measure
What are some factors that affect reliability?
- day-to-day changes in students
- variations in the physical environment
- variations in administration of assessment
- characteristics of the assessment instrument
- subjectivity in scoring
Test-retest reliability
The degree to which the instrument yields similar information over a short time interval
Scorer reliability
The degree to which different experts are likely to agree in their assessment of complex behaviours
Internal consistency reliability
The extent to which different parts of the instrument are all measuring the same characteristic
Reliability coefficient
A numerical index of an assessment tool’s reliability; it ranges from 0 to 1, with higher numbers indicating greater reliability (computed as a correlation coefficient)
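A test-retest reliability coefficient can be computed as the Pearson correlation between two administrations of the same instrument. A minimal sketch (the scores below are hypothetical, invented for illustration):

```python
# Test-retest reliability as a Pearson correlation between two
# administrations of the same test to the same students.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

first_admin  = [72, 85, 90, 64, 78]   # hypothetical scores, day 1
second_admin = [70, 88, 91, 66, 75]   # same students, a few days later

r = pearson(first_admin, second_admin)
print(round(r, 2))  # -> 0.97, i.e. highly consistent over time
```

A coefficient near 1 means students kept roughly the same relative standing across the two administrations; a coefficient near 0 would mean the scores carry little consistent information.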
Standard error of measurement
A statistic estimating the amount of error likely to be present in a particular score on a test or other assessment instrument
Confidence interval
A range around an assessment score reflecting the amount of error likely to be affecting the score’s accuracy
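The two definitions above are linked by a standard psychometric formula: SEM = SD × √(1 − reliability), and a confidence interval is the observed score plus or minus a multiple of the SEM. A small sketch with made-up numbers (the SD of 15 and reliability of .91 are hypothetical, chosen to resemble an IQ-style scale):

```python
import math

# Standard error of measurement: SEM = SD * sqrt(1 - reliability)
def sem(sd, reliability):
    return sd * math.sqrt(1 - reliability)

# Confidence interval around an observed score.
# z = 1.0 gives roughly a 68% interval; z = 1.96 roughly 95%.
def confidence_interval(score, sd, reliability, z=1.0):
    error = z * sem(sd, reliability)
    return (score - error, score + error)

error = sem(sd=15, reliability=0.91)       # hypothetical test statistics
print(round(error, 1))                     # -> 4.5
print(confidence_interval(100, 15, 0.91))  # roughly (95.5, 104.5)
```

Note the relationship to reliability: as the reliability coefficient approaches 1, the SEM shrinks toward 0 and the confidence interval narrows, which is why more reliable instruments yield more trustworthy individual scores.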
Standardization
The extent to which assessment instruments and procedures involve similar content and format and are administered and scored in the same way for everyone (increases reliability)
Validity
The extent to which an assessment instrument actually measures what it is intended to measure
Content validity
The extent to which an assessment includes a representative sample of tasks within the content domain being assessed
Table of specifications
A two-way grid that indicates both the topics to be covered in an assessment and the things that students should be able to do with each topic
Predictive validity
The extent to which the results of an assessment predict future performance