Chapter 12 Flashcards

1
Q

Assessment

A

The process of observing a sample of students’ behaviour and drawing inferences about their knowledge and abilities

2
Q

Informal assessment

A

Assessment that results from teachers’ spontaneous day-to-day observations of how students behave and perform in class

3
Q

Formal assessment

A

A systematic attempt to determine what students have learned. It is typically planned in advance and used for a specific purpose

4
Q

Paper-pencil assessment

A

Assessment in which students provide written responses to written items (ex. test/exam)

5
Q

Performance assessment

A

Assessment in which students demonstrate their knowledge and skills in a non-written fashion (ex. oral presentation)

6
Q

Traditional assessment

A

Assessment that focuses on measuring basic knowledge and skills in relative isolation from tasks more typical of the outside world (ex. tests/quizzes)

7
Q

Authentic assessment

A

Assessment of students’ knowledge and skills in an authentic, “real-life” context that is an integral part of instruction rather than a separate activity

8
Q

Standardized test

A

A test developed by test construction experts and published for use in many different schools and classrooms

9
Q

Teacher-developed assessment instrument

A

An assessment tool developed by an individual teacher for use in their own classroom

10
Q

Formative evaluation

A

An evaluation conducted during instruction to facilitate students’ learning

11
Q

Summative evaluation

A

An evaluation conducted after instruction is completed and used to assess students’ final achievement

12
Q

How do assessments in the classroom promote learning?

A
  • motivate students to learn the material
  • serve as mechanisms for review
  • influence students’ cognitive processing
  • act as learning experiences in themselves
  • provide feedback
13
Q

RSVP characteristics

A

Reliability
Standardization
Validity
Practicality

14
Q

Reliability

A

The extent to which an assessment instrument yields consistent information about the knowledge, skills or abilities one is trying to measure

15
Q

What are some factors that affect reliability?

A
  • day-to-day changes in students
  • variations in the physical environment
  • variations in administration of assessment
  • characteristics of the assessment instrument
  • subjectivity in scoring
16
Q

Test-retest reliability

A

The degree to which the instrument yields similar information over a short time interval

17
Q

Scorer reliability

A

The degree to which different experts are likely to agree in their assessment of complex behaviours

18
Q

Internal consistency reliability

A

The extent to which different parts of the instrument are all measuring the same characteristic

19
Q

Reliability coefficient

A

A numerical index of an assessment tool’s reliability; ranges from 0 to 1, with higher numbers indicating higher reliability (also known as a correlation coefficient)

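As an illustration only, the sketch below estimates a reliability coefficient as the Pearson correlation between two administrations of the same test (i.e., test-retest reliability). The score lists and the pearson_r helper are invented for this example, not taken from the chapter.

```python
# Minimal sketch: estimating test-retest reliability as a Pearson correlation.
# The score lists below are invented example data for illustration only.

from statistics import mean, stdev


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))


first_administration = [72, 85, 90, 64, 78, 88, 70, 95]
second_administration = [75, 83, 92, 60, 80, 85, 74, 93]

reliability = pearson_r(first_administration, second_administration)
print(f"Estimated test-retest reliability: {reliability:.2f}")  # closer to 1 = more consistent
```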
20
Q

Standard error of measurement

A

A statistic estimating the amount of error likely to be present in a particular score on a test or other assessment instrument

21
Q

Confidence interval

A

A range around an assessment score reflecting the amount of error likely to be affecting the score’s accuracy

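A minimal sketch of how the last two cards fit together, assuming the common formula SEM = SD × √(1 − reliability) and a confidence interval of roughly ±1.96 SEM for 95% confidence; all numbers are invented.

```python
# Minimal sketch: standard error of measurement and a confidence interval
# around an observed test score. Numbers are invented for illustration.

import math

test_sd = 15.0          # standard deviation of scores on the test
reliability = 0.91      # reliability coefficient of the test
observed_score = 112

# Common formula: SEM = SD * sqrt(1 - reliability)
sem = test_sd * math.sqrt(1 - reliability)

# ~68% confidence interval: observed score +/- 1 SEM
# ~95% confidence interval: observed score +/- 1.96 SEM
ci_95 = (observed_score - 1.96 * sem, observed_score + 1.96 * sem)

print(f"SEM = {sem:.1f}")
print(f"95% confidence interval: {ci_95[0]:.1f} to {ci_95[1]:.1f}")
```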
22
Q

Standardization

A

The extent to which assessment instruments and procedures involve similar content and format and are administered and scored in the same way for everyone (increases reliability)

23
Q

Validity

A

The extent to which an assessment instrument actually measures what it is intended to measure

24
Q

Content validity

A

The extent to which an assessment includes a representative sample of tasks within the content domain being assessed

25
Q

Table of specifications

A

A two-way grid that indicates both the topics to be covered in an assessment and the things that students should be able to do with each topic

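As a purely hypothetical illustration, a table of specifications can be sketched as a two-way grid in code; the topics, cognitive levels, and item counts below are invented and not from the chapter.

```python
# Minimal sketch: a table of specifications as a two-way grid
# (topics x things students should be able to do with each topic).
# All entries are hypothetical item counts for a planned test.

table_of_specifications = {
    "Fractions":   {"recall": 3, "apply": 4, "analyze": 1},
    "Decimals":    {"recall": 2, "apply": 3, "analyze": 1},
    "Percentages": {"recall": 2, "apply": 4, "analyze": 2},
}

# Row totals help check that the test samples the content domain representatively.
for topic, cells in table_of_specifications.items():
    print(f"{topic:12s} total items: {sum(cells.values())}")
```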
26
Q

Predictive validity

A

The extent to which the results of an assessment predict future performance

27
Q

Validity coefficient

A

A numerical index of an assessment tool’s predictive validity; ranges from 0 to 1, with higher numbers indicating more accurate predictions

28
Q

Construct validity

A

The extent to which an assessment accurately measures an unobservable educational or psychological characteristic

29
Q

Practicality

A

The extent to which an assessment instrument or procedure is inexpensive and easy to use and takes only a small amount of time to administer and score

30
Q

Criterion-referenced score

A

A test score that specifically indicates what students know and can do

31
Q

Norm-referenced score

A

A score that indicates how a student’s performance on an assessment compares with the average performance of other students

32
Q

Norms

A

As related to socialization, society’s rules for acceptable and unacceptable practice. As related to testing practice, data regarding the typical performance of various groups of students on a standardized test or other norm-referenced assessment

33
Q

Mean

A

The average of a set of scores which is calculated by adding all the scores and dividing by the total number of people who have obtained those scores

34
Q

Standard deviation

A

A statistic that reflects how close together or far apart a set of scores is, indicating the variability of the scores

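A minimal sketch of both calculations on an invented set of scores, following the definitions in the two cards above (the population formula for the standard deviation is shown).

```python
# Minimal sketch: mean and standard deviation of a set of scores.
# The scores are invented example data.

import math

scores = [62, 70, 75, 75, 80, 84, 90, 96]

# Mean: sum of the scores divided by the number of scores.
mean_score = sum(scores) / len(scores)

# Standard deviation: square root of the average squared distance from the mean
# (population formula; dividing by n - 1 instead gives the sample version).
variance = sum((s - mean_score) ** 2 for s in scores) / len(scores)
std_dev = math.sqrt(variance)

print(f"Mean = {mean_score:.1f}, SD = {std_dev:.1f}")
```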
35
Q

Normal distribution

A

A theoretical pattern of educational and psychological characteristics in which most individuals lie somewhere in the middle range and relatively few lie at either extreme

36
Q

Percentile rank

A

A test score that indicates the percentage of people in the norm group getting a raw score less than or equal to a particular student’s raw score

37
Q

Standard score

A

A test score that indicates how far a student’s performance is from the mean with respect to standard deviation units

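As a hedged illustration of how the last three cards connect, the sketch below converts an invented raw score to a standard (z) score and then, assuming a normal distribution, to an approximate percentile rank.

```python
# Minimal sketch: raw score -> standard (z) score -> approximate percentile rank,
# assuming scores are normally distributed. All numbers are invented.

from statistics import NormalDist

raw_score = 82
group_mean = 74
group_sd = 8

# Standard score: distance from the mean in standard deviation units.
z = (raw_score - group_mean) / group_sd

# Percentile rank: percentage of the norm group scoring at or below this score,
# estimated from the normal curve.
percentile = NormalDist().cdf(z) * 100

print(f"z = {z:.2f}, approximate percentile rank = {percentile:.0f}")
```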
38
Q

Confidence interval

A

A range around an assessment score reflecting the amount of error likely to be affecting the score’s accuracy (often reported around a percentile score on standardized test reports)

39
Q

Grade equivalent score

A

A test score that indicates the grade level of students to whom a student’s test performance is most similar

40
Q

Age equivalent score

A

A test score that indicates the age level of students to whom a student’s test performance is most similar

41
Q

High-stakes testing

A

The practice of using a single test, exam, or assignment to make major decisions about, or interpretations of, student performance (ex. EQAO testing)

42
Q

Achievement tests

A

A test designed to assess how much students have learned from what they have specifically been taught

43
Q

Ability tests

A

A test designed to assess one’s general capacity to learn; typically used to predict students’ success in future learning situations (ex. IQ tests)

44
Q

Specific aptitude tests

A

A test designed to predict students’ ability to learn in a particular content domain

45
Q

IQ scores

A

A score on an intelligence test, determined by comparing one’s performance on the test with the performance of others in the same age group; for most tests, it is a standard score with a mean of 100 and a standard deviation of 15

46
Q

Stanine

A

A standard score with a mean of 5 and a standard deviation of 2; it is always reported as a whole number

47
Q

z score

A

A standard score with a mean of 0 and a standard deviation of 1

48
Q

T-score

A

A standard score with a mean of 50 and a standard deviation of 10

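The IQ-style score, stanine, z score, and T-score above are all rescalings of the same underlying standard score; below is a minimal sketch of the conversions, using the means and standard deviations given in the cards (stanines are additionally rounded and limited to whole numbers from 1 to 9).

```python
# Minimal sketch: converting a z score into the other standard-score scales
# described above (IQ-style score, T-score, stanine).

def from_z(z):
    iq_style = 100 + 15 * z                      # mean 100, SD 15
    t_score = 50 + 10 * z                        # mean 50, SD 10
    stanine = min(9, max(1, round(5 + 2 * z)))   # mean 5, SD 2, whole numbers 1-9
    return iq_style, t_score, stanine


for z in (-2.0, -1.0, 0.0, 1.0, 2.0):
    iq, t, stanine = from_z(z)
    print(f"z = {z:+.1f} -> IQ-style {iq:.0f}, T {t:.0f}, stanine {stanine}")
```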
49
Q

Test anxiety

A

Excessive anxiety about a particular test or about assessment in general

50
Q

Cultural bias

A

The extent to which the items or tasks of an assessment instrument either offend or unfairly penalize students because of their ethnicity, sex or socioeconomic status

51
Q

Testwiseness

A

Test-taking know-how that enhances test performance (knowing how to study effectively, using time efficiently, using deductive reasoning, etc.)

52
Q

How can we accommodate students with special needs on classroom assessments?

A
  • modify the presentation format
  • modify the response format
  • modify the timing
  • modify the assessment setting
  • administer part but not all of an instrument
  • use different assessment instruments that are more compatible with students’ ability levels and needs
53
Q

Halo effect

A

A phenomenon where people are more likely to perceive positive behaviours in a person they like or admire

54
Q

Horns effect

A

Expecting inappropriate behaviour from a student with a history of misbehaviour

55
Q

Recognition task

A

A memory task in which one must identify correct information among irrelevant information or incorrect statements (ex. multiple choice, true/false questions, matching etc.)

56
Q

Recall task

A

A memory task in which one must retrieve information in its entirety from long-term memory (ex. essays, word problems)

57
Q

Products vs. processes

A

You can look at tangible products that students have created or at the specific processes and behaviours that students exhibit (ex. oral presentation, playing an instrument etc.)

58
Q

Individual vs. group performance

A

Teachers may consider individual students’ behaviours and achievements or, instead, the entire group’s accomplishments

59
Q

Restricted vs. extended performance

A

Restricted performance involves short, focused tasks; extended performance is exhibited over a period of days or weeks

60
Q

Static vs. dynamic assessment

A

Static indicators focus on identifying students’ existing abilities and achievements while dynamic assessment indicates what students are likely to be able to accomplish with appropriate structure and guidance

61
Q

Checklist

A

An assessment tool with which a teacher evaluates student performance by indicating whether specific behaviours or qualities are absent or present

62
Q

Rating scale

A

An assessment tool with which a teacher evaluates student performance by rating aspects of the performance on one or more continua

63
Q

Analytic scoring

A

Scoring students’ performance on an assessment by evaluating various aspects of their performance separately
-useful in conducting formative evaluations and promoting students’ learning (use checklists/rating scales)

64
Q

Holistic scoring

A

Summarizing students’ performance on an assessment with a single score
-often used in summative evaluations

65
Q

Item analysis

A

An analysis of students’ responses to the individual items of an assessment instrument; used to identify possibly flawed items

66
Q

Item difficulty

A

The proportion of students getting a particular assessment item correct (indicates difficulty level of each item)

67
Q

Item discrimination

A

The relative proportion of high-scoring and low-scoring students getting a particular item correct
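
To show how these last two indices might be computed, here is a minimal sketch assuming one common approach: difficulty as the proportion of all students answering the item correctly, and discrimination as the difference in that proportion between the top-scoring and bottom-scoring halves of the class. The response data are invented.

```python
# Minimal sketch: item difficulty and item discrimination for one test item.
# Each tuple is (total test score, answered this item correctly?). Data invented.

responses = [
    (95, True), (90, True), (88, True), (84, True),    # higher scorers
    (70, True), (66, False), (60, False), (55, False),  # lower scorers
]

# Item difficulty: proportion of all students getting the item correct.
difficulty = sum(correct for _, correct in responses) / len(responses)

# Item discrimination: proportion correct in the top-scoring half minus the
# proportion correct in the bottom-scoring half.
ranked = sorted(responses, key=lambda r: r[0], reverse=True)
half = len(ranked) // 2
top = sum(c for _, c in ranked[:half]) / half
bottom = sum(c for _, c in ranked[-half:]) / half
discrimination = top - bottom

print(f"Item difficulty p = {difficulty:.2f}, discrimination D = {discrimination:.2f}")
```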