Vocab Flashcards

1
Q

Reliability

A

Consistency of an assessment instrument’s data across repeated administrations

2
Q

Internal Consistency Reliability

A

Consistency of test items with one another in measuring the same quantity/construct

3
Q

Intra-rater Consistency

A

An individual rater’s consistency in rating responses to various test items

4
Q

Validity

A

Whether a test measures what it claims/intends to measure

5
Q

Content Validity

A

A test that includes items representing the complete range of possible items

6
Q

Construct Validity

A

When a test’s scores measure the construct they are meant to measure, such as intelligence

7
Q

Criterion Validity

A

A test’s scores effectively measure a construct according to established criteria

8
Q

Concurrent Validity

A

(A type of criterion validity) a test measures the criterion and the construct at the same time

9
Q

Predictive Validity

A

(A type of criterion validity) test scores effectively predict future outcomes, as when aptitude tests predict future subject grades

10
Q

Generalizability

A

the consistency of test scores over repeated administrations

(The results of one test can be generalized to apply to other tests with similar formats, content, and operations)

11
Q

Compensatory Grading

A

the practice of balancing out lower performance in one area or subject with higher performance in another

12
Q

Noncompensatory grading

A

does not permit balancing lower performance in one subject with higher performance in another, but requires a similar standard of achievement in each area or subject

13
Q

Cut score

A

a predetermined number used to divide categories of data or results from a test instrument

14
Q

Standard Deviation

A

measures variability within a set of numbers
When interpreting assessment results, it measures how much scores among a group of test-takers vary around the mean/average
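
A minimal sketch (not part of the original card), using a hypothetical set of scores, of how the standard deviation around the mean can be computed with Python’s standard library:

import statistics

scores = [70, 75, 80, 85, 90]    # hypothetical test scores
mean = statistics.mean(scores)   # average score: 80
sd = statistics.pstdev(scores)   # population standard deviation: spread of scores around the mean
print(mean, sd)                  # 80 and roughly 7.07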

15
Q

Standard Score (z score)

A

represents the amount by which an individual score deviates from the mean, measured in standard deviations (SDs)
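
A brief hypothetical worked example of the z-score formula, z = (score - mean) / SD, sketched in Python:

mean, sd = 80.0, 7.07   # hypothetical group mean and standard deviation
raw = 90.0              # hypothetical individual raw score
z = (raw - mean) / sd   # number of SDs the score lies above (+) or below (-) the mean
print(round(z, 2))      # about 1.41, i.e., roughly 1.4 SDs above the mean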

16
Q

Domain

A

the identified scope of expected learning to be assessed

17
Q

Item Response Theory

A

Performance on a test item is attributed to three influences:
The item itself
The test-taker
The interaction between the two
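
The card does not name a specific model; as one common illustration, here is a minimal sketch of the one-parameter (Rasch) logistic model, in which the probability of a correct response comes from the interaction between the test-taker’s ability and the item’s difficulty:

import math

def p_correct(ability, difficulty):
    # Rasch (one-parameter logistic) model: the chance of a correct answer
    # rises as the test-taker's ability exceeds the item's difficulty.
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

print(round(p_correct(0.0, 0.0), 2))   # 0.5: ability matches item difficulty
print(round(p_correct(1.0, 0.0), 2))   # about 0.73: abler test-taker, same item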

18
Q

Mean

A

the average of a group of numbers

19
Q

Median

A

Center-most score in a group

20
Q

Mode

A

most frequent score in a set
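
A small sketch pulling the three central-tendency cards together on a hypothetical score set, using Python’s standard library:

import statistics

scores = [70, 75, 75, 80, 95]      # hypothetical score set
print(statistics.mean(scores))     # mean: 79 (the average)
print(statistics.median(scores))   # median: 75 (the center-most score)
print(statistics.mode(scores))     # mode: 75 (the most frequent score)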

21
Q

Positive Skew

A

When the majority of a group of numbers, such as test scores, is concentrated toward the low end of the range/distribution, with the minority “tail” of scores extending toward the high end

22
Q

Negative Skew

A

When the majority of scores is bunched near the higher end of the distribution, with the minority “tail” extending toward the low end
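
A small hypothetical illustration of both skew cards: the long minority tail pulls the mean toward itself and away from the median.

import statistics

positive_skew = [60, 62, 64, 65, 66, 68, 98]   # most scores low, long tail toward the high end
negative_skew = [32, 62, 64, 65, 66, 68, 70]   # most scores high, long tail toward the low end

print(statistics.mean(positive_skew), statistics.median(positive_skew))   # 69 vs 65: mean pulled upward
print(statistics.mean(negative_skew), statistics.median(negative_skew))   # 61 vs 65: mean pulled downward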

23
Q

Normal Curve/Bell Curve

A

resembles the shape of a bell because the largest number of scores is clustered around the central mean, with the number of scores decreasing as they move farther from the mean

24
Q

Standard Error of Measurement

A

If an individual student took many tests that were similar in size or length, assessors could estimate how much that student’s scores would vary
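
The card gives no formula; one commonly used estimate (assumed here) is SEM = SD × sqrt(1 − reliability), sketched with hypothetical values:

import math

sd = 10.0           # hypothetical standard deviation of the test's scores
reliability = 0.91  # hypothetical reliability coefficient of the test
sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
print(round(sem, 1))                    # 3.0 score points of expected variation for one student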

25
Q

Standard Error of Mean

A

the estimate of variance around the mean of a group’s test scores
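
A minimal sketch of the usual computation, standard error of the mean = SD / sqrt(n), with hypothetical numbers:

import math

sd = 12.0   # hypothetical standard deviation of the group's scores
n = 36      # hypothetical number of test-takers in the group
se_mean = sd / math.sqrt(n)   # standard error of the mean
print(se_mean)                # 2.0: expected variation of the group's mean score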

26
Q

Confidence Interval

A

used by statisticians to express the range wherein a true or real score is situated

Its purpose is to acknowledge and address the fact that the measurement of a student’s performance contains “noise” (interfering/confounding variables)

Giving a confidence interval shows the probability that a student’s true score is within the range defined by the interval
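
A hypothetical sketch of building an approximate 95% confidence interval around an observed score, assuming normally distributed measurement error and using the standard error of measurement:

observed = 85.0   # hypothetical observed score
sem = 3.0         # hypothetical standard error of measurement
z = 1.96          # z value for an approximate 95% confidence level

low, high = observed - z * sem, observed + z * sem
print(round(low, 1), round(high, 1))   # about 79.1 to 90.9: the range likely to contain the true score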

27
Q

Cut score

A

Used by educators to determine which scores (above it) are passing and which (below it) are failing

28
Q

Nedelsky Method

A

Used to establish a cut score: the assessor identifies a “borderline” group of students (those who do not always pass or fail but tend to score on the borderline of passing/failing), then estimates which choice(s) in each multiple-choice test item these students could eliminate as incorrect, treating a correct answer as a guess among the choices that remain

29
Q

Angoff Method

A

used to establish a cut score: the assessor selects a group of “borderline” students and estimates, for each test item, the probability that these students would answer it correctly; the cut score is based on these estimates
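
One common way (assumed here, with hypothetical numbers) to turn Angoff-style judgments into a cut score is to sum each assessor’s per-item probability estimates and average across assessors:

# Each inner list: one assessor's estimated probabilities that a "borderline"
# student answers each of five items correctly (hypothetical values).
judgments = [
    [0.6, 0.7, 0.5, 0.8, 0.9],
    [0.5, 0.6, 0.6, 0.7, 0.8],
]

per_assessor_totals = [round(sum(probs), 2) for probs in judgments]   # expected raw score per assessor
cut_score = sum(per_assessor_totals) / len(per_assessor_totals)       # average across assessors
print(per_assessor_totals, round(cut_score, 2))                       # [3.5, 3.2] and a cut score of 3.35 out of 5 items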

30
Q

Modified Angoff procedure

A

Assessors also estimate how many of the students would fail the test and, if they deem it necessary, modify their estimates to produce a number of failures they find more reasonable

31
Q

Ebel Method

A

used to determine an appropriate cut score for a particular instrument

Considers the importance and difficulty level of each test item in establishing a cut score for a test

32
Q

Hofstee Method/Compromise Method

A

addresses the difference between norm-referenced and criterion-referenced tests

(tests that compare individual student scores to the average scores of a normative sample of students deemed representative of the larger population, versus tests that compare student scores to a pre-established criterion of achievement)

This method allows assessors to determine how many items students could miss and how the number of items missed would affect the number of students who would fail

33
Q

Norm-referenced test

A

compare students’ scores to those of a normative sample of students deemed representative of the general population

these tests seek to determine relative standing (highest or lowest achievement) rather than the absolute score achieved

34
Q

Criterion-referenced test

A

may or may not be standardized and compare student scores to a predetermined set of criteria for acceptable performance

35
Q

Formative Assessments

A

given during a lesson, unit, course, or program to give the teacher and students an idea of how well each student is learning what the teacher has planned and expects them to learn

Results are used to explain to each student his/her strengths and weaknesses and how he/she can build on the strengths or improve the weaknesses

Results are also used to report progress to parents, administrators, etc.

Examples: CFUs (checks for understanding)

36
Q

Summative Assessments

A

Given after a lesson, unit, course, or program has been completed to determine whether the student has passed the segment of instruction

helps teachers determine whether they need to repeat the instruction or can move on to successive segments

summative assessments apply to lessons or units within a class, to courses in a subject, to promotion from one grade level to the next, and to graduation

Exit tickets, quizzes, etc

37
Q

Item Analysis

A

Used to evaluate test items that use multiple-choice formats to show the quality of the test item and of the test overall

Has an implicit orientation of being norm-referenced rather than criterion- or domain-referenced

Evaluates test items using performance within the group of test-takers rather than an externally set criterion for expected achievement

38
Q

Discrimination Index

A

Measure of how well a specific test item can separate students who generally score high on the test from students who generally score low on it
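
The card gives no formula; a common computation (assumed here) subtracts the proportion of low scorers answering the item correctly from the proportion of high scorers doing so, shown with hypothetical counts:

upper_correct, upper_n = 18, 20   # hypothetical high-scoring group: 18 of 20 answered the item correctly
lower_correct, lower_n = 8, 20    # hypothetical low-scoring group: 8 of 20 answered it correctly

discrimination = upper_correct / upper_n - lower_correct / lower_n
print(discrimination)   # 0.5: the item separates high scorers from low scorers reasonably well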

39
Q

Difficulty Index

A

Simple measure of how difficult a test item is considered

Obtained by calculating the percentage of all students taking a test who answered a certain test item correctly
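
A minimal sketch of the calculation described on this card, with hypothetical counts:

correct = 30   # hypothetical number of students who answered the item correctly
total = 40     # hypothetical number of students who took the test

difficulty = correct / total   # proportion answering correctly (higher value = easier item)
print(difficulty)              # 0.75, i.e., 75% answered the item correctly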

40
Q

Specificity

A

refers to how well a test identifies every member of a defined group

the more specific a test is, the more likely it is to omit some individuals who should be included in that group

41
Q

ePortfolios

A

Enabled by technology, ePortfolios can support students’ internal motivation and autonomy by establishing online environments wherein others feel good about participating

42
Q

Working Memory

A

the ability to retain current information temporarily well enough to manipulate it, such as combining additional parts to form a coherent whole, as in understanding words in a sentence or paragraph

43
Q

Memory Span

A

measures the ability to recall information presented once, immediately and in correct sequence

44
Q

Associative Memory

A

the ability to recall one item from a previously learned (unrelated) pair when presented with the other item

45
Q

Ideational Fluency

A

the ability to generate many varied responses to one stimulus

46
Q

Processing Speed

A

the ability to perform easy or familiar cognitive operations quickly and automatically, especially when they require focused attention and concentration

47
Q

WJ Visual Matching

A

measures perceptual speed through finding, identifying, comparing, and contrasting visual elements

Involves pattern recognition, scanning, perceptual memory, and complex processing abilities

48
Q

Decision Speed

A

measures semantic processing speed, meaning the reaction time to a stimulus that requires some encoding and mental manipulation

49
Q

Rapid Picture Naming

A

measures naming facility, meaning the ability to rapidly name familiar presented things, with the names retrieved from long-term memory

50
Q

Pair Cancellation

A

measures the student’s ability to attend to and concentrate on presented stimuli

51
Q

Crystallized Intelligence WJ

A

the solidified knowledge that an individual has acquired from his or her culture through life experiences and formal/informal education

Measured on the WJ by its subtests of General Information and Verbal Comprehension

52
Q

Fluid Intelligence WJ

A

Fluid reasoning stands in contrast to crystallized intelligence or knowledge

This is the ability to solve novel problems by performing mental operations

Fluid reasoning is measured on the WJ by its Concept Formation subtest, which tests inductive reasoning: the ability to relate a specific problem to a generalized, underlying rule or concept

53
Q

Spatial Relationships

A

the ability to perceive objects in space, their orientation, and visual patterns, and to maintain and manipulate these rapidly

54
Q

Visualization

A

the ability to match objects in space, including mentally manipulating them three-dimensionally more than once, regardless of response speed

55
Q

Spatial Scanning

A

involves quickly and accurately identifying paths through complex, large, visual, or spatial fields

56
Q

Auditory Processing

A

the ability to interpret sound signals from one’s sense of hearing

57
Q

Incomplete Words

A

measuring phonetic coding for analysis

58
Q

Auditory Attention

A

measuring ideational fluency

59
Q

Phonetic coding

A

synthesis involved with putting sounds together meaningfully as in words