Methods and Statistics used in Research Studies and Test Construction Flashcards

1
Q

an umbrella term for all that goes into the process of creating a test

A

Test Development

2
Q

brainstorming of ideas about what kind of test a developer wants to publish
- stage wherein the following are determined: construct, goal, user, taker, administration, format, response, benefits, costs, interpretation
- determines whether the test will be norm-referenced or criterion-referenced

A

I. Test Conceptualization

3
Q

preliminary research surrounding the creation of a prototype of the test

A

Pilot Work/Pilot Study/Pilot Research

4
Q

stage in the process that entails writing test items, revising items, formatting, and setting scoring rules
- it is not good to create an item that contains numerous ideas

A

II. Test Construction

5
Q

reservoir or well from which the items will or will not be drawn for the final version of the test

A

Item Pool

6
Q

relatively large and easily accessible collection of test questions

A

Item Banks

7
Q

refers to an interactive, computer-administered test-taking process wherein items presented to the testtaker are based in part on the testtaker's performance on previous items

A

Computerized Adaptive Testing

8
Q

occurs when there is some lower limit on a survey or questionnaire and a large percentage of respondents score near this lower limit (testtakers have low scores)

A

Floor Effects

9
Q

occurs when there is some upper limit on a survey or questionnaire and a large percentage of respondents score near this upper limit (testtakers have high scores)

A

Ceiling Effects

10
Q

ability of the computer to tailor the content and order of presentation of items on the basis of responses to previous items

A

Item Branching

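To make item branching concrete, here is a minimal Python sketch of one way an adaptive administration could choose items: pick the unused item closest in difficulty to the current ability estimate, then nudge the estimate after each response. The function name, the step-size update, and the data layout are illustrative assumptions, not a published algorithm; operational CAT systems typically use item response theory for both steps.

```python
def adaptive_test(items, get_response, start_ability=0.0, step=0.5):
    """items: dict of item_id -> difficulty; get_response(item_id) -> bool."""
    ability = start_ability
    remaining = dict(items)
    administered = []
    while remaining:
        # Branch: choose the unused item whose difficulty is closest to the
        # current ability estimate.
        item_id = min(remaining, key=lambda i: abs(remaining[i] - ability))
        correct = get_response(item_id)
        administered.append((item_id, correct))
        # Crude update: step the estimate up after a correct response,
        # down after an incorrect one.
        ability += step if correct else -step
        del remaining[item_id]
    return ability, administered

# Hypothetical three-item pool; the simulated testtaker misses only "hard".
pool = {"easy": -1.0, "medium": 0.0, "hard": 1.0}
print(adaptive_test(pool, lambda item_id: item_id != "hard"))
```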
11
Q

form, plan, structure, arrangement, and layout of individual test items

A

Item Format

12
Q

offers two alternatives for each item

A

Dichotomous Format

13
Q

each item has more than two alternatives

A

Polychotomous Format

14
Q

a format in which respondents are asked to rate a construct using a limited number of response categories

A

Category Format

15
Q

subject receives a long list of adjectives and indicates whether each one is characteristic of himself or herself

A

Checklist

16
Q

items are arranged from weaker to stronger expressions of attitude, belief, or feelings

A

Guttman Scale

17
Q

require testtakers to select a response from a set of alternative responses

A

Selected-Response Format

18
Q

Has three elements: a stem (the question), a correct option, and several incorrect alternatives (distractors or foils). A well-written item should have only one correct answer; grammatically parallel alternatives of similar length that fit grammatically with the stem; no ridiculous distractors; and should not be excessively long. Use "all of the above" and "none of the above" with caution. (With four options, the chance of answering correctly by guessing is 25%.)

A

Multiple Choice

19
Q

a distractor chosen equally by both high- and low-performing groups; enhances the consistency of test results

A

Effective Distractors

20
Q

may hurt the reliability of the test because they are time-consuming to read and can limit the number of good items

A

Ineffective Distractors

21
Q

less likely to be chosen; may affect the reliability of the test because testtakers may simply guess from among the remaining options

A

Cute Distractors

22
Q

The testtaker is presented with two columns: premises on one side and responses on the other

A

Matching Item

23
Q

Usually takes the form of a sentence that requires the testtaker to indicate whether the statement is or is not a fact. (With two options, the chance of answering correctly by guessing is 50%.)

A

Binary Choice

24
Q

requires testtakers to supply or to create the correct answer, not merely select it

A

Constructed-Response Format

25
Requires the examinee to provide a word or phrase that completes a sentence
Completion Item
26
Should be written clearly enough that the testtaker can respond succinctly, with a short answer
Short-Answer
27
allows creative integration and expression of the material
Essay
28
process of setting rules for assigning numbers in measurement
Scaling
29
- involves classification or categorization based on one or more distinguishing characteristics
- labels and categorizes observations but does not make any quantitative distinctions between them
- permissible statistic: mode
Nominal
30
- rank ordering on some characteristic is also permissible
- permissible statistic: median
Ordinal
31
contains equal intervals but has no absolute zero point (even negative values have an interpretation)
Interval
32
- has a true zero point (a score of zero means none/null)
- easiest to manipulate statistically
Ratio
33
- produces ordinal data by presenting pairs of stimuli that respondents are asked to compare
- the respondent is presented with two objects at a time and asked to select one object according to some criterion
Paired Comparison
34
respondents are presented with several items simultaneously and asked to rank them in order of priority
Rank Order
35
respondents are asked to allocate a constant sum of units, such as points, among a set of stimulus objects with respect to some criterion
Constant Sum
36
respondents sort objects based on similarity with respect to some criterion
Q-Sort Technique
37
respondents rate the objects by placing a mark at the appropriate position on a continuous line that runs from one extreme of the criterion variable to the other
- e.g., rating Guardians of the Galaxy as the best Marvel movie of Phase 4
Continuous Rating
38
numbers or brief descriptions are associated with each category
- e.g., 1 if you like the item the most, 2 if you feel so-so about it, 3 if you hate it
Itemized Rating
39
respondents indicate their own attitudes by checking how strongly they agree or disagree with carefully worded statements that range from very positive to very negative toward the attitudinal object
- based on the principle of measuring attitudes by asking people to respond to a series of statements about a topic in terms of the extent to which they agree with them
Likert Scale
40
a 100-mm line that allows subjects to express the magnitude of an experience or belief
Visual Analogue Scale
41
derives the respondent's attitude toward the given object by asking him or her to select an appropriate position on a scale between two bipolar opposites
Semantic Differential Scale
42
developed to measure the direction and intensity of an attitude simultaneously
Stapel Scale
43
final score is obtained by summing the ratings across all the items
Summative Scale
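As an illustration of summative scoring, which must also handle the negatively worded items mentioned in the item-writing guidelines later in this deck, here is a small Python sketch; the item names, the 1-to-5 scale, and the responses are invented for the example.

```python
def summative_score(responses, reverse_keyed=(), scale_max=5):
    """responses: dict of item -> rating in 1..scale_max.
    reverse_keyed: items whose rating is flipped before summing."""
    return sum(
        (scale_max + 1 - rating) if item in reverse_keyed else rating
        for item, rating in responses.items()
    )

# "q2" is a negatively worded item, so a rating of 2 contributes 6 - 2 = 4.
print(summative_score({"q1": 4, "q2": 2, "q3": 5}, reverse_keyed={"q2"}))  # 13
```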
44
involves the collection of a variety of different statements about a phenomenon, which are ranked by an expert panel in order to develop the questionnaire
- allows multiple answers
Thurstone Scale
45
the respondent must choose between two or more equally socially acceptable options
Ipsative Scale
46
the test should be tried out on people who are similar in critical respects to the people for whom the test was designed
- an informal rule of thumb: no fewer than 5 subjects per item, and preferably as many as 10 (the more, the better)
III. Test Tryout
47
Risk of using few subjects = ______
phantom factors emerge
48
A good test item is one that is answered _________ by high scorers as a whole
correctly
49
administering a large pool of test items to a sample of individuals who are known to differ on the construct being measured
Empirical Criterion Keying
50
statistical procedures used to analyze and evaluate test items
Item Analysis
51
employed to examine the correlation between each item and the total score of the test
Discriminability Analysis
52
suggests a sample of an individual's behavior
Item
53
a blueprint of the test in terms of number of items per difficulty, topic importance, or taxonomy
Table of Specification
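For illustration, a hypothetical blueprint for a 20-item classroom test; the topics, counts, and difficulty split are invented here:

Topic           Easy  Moderate  Hard  Total
Reliability       3       4      1      8
Validity          2       4      2      8
Item analysis     1       2      1      4
Total             6      10      4     20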
54
- define clearly what you want to measure
- generate an item pool
- avoid exceptionally long items
- keep the level of reading difficulty appropriate for those who will complete the test
- avoid double-barreled items
- consider mixing positively and negatively worded items
Guidelines for Item Writing
55
items that convey more than one idea at the same time (e.g., "I feel anxious and depressed")
Double-Barreled Items
56
defined by the number of people who get a particular item correct
Item Difficulty
57
calculated as the proportion of the total number of testtakers who answered the item correctly; the larger the proportion, the easier the item
Item-Difficulty Index
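A minimal Python sketch of the index, with a made-up 0/1 response vector:

```python
def item_difficulty(item_responses):
    """Proportion of testtakers who answered the item correctly (1 = correct)."""
    return sum(item_responses) / len(item_responses)

print(item_difficulty([1, 1, 0, 1, 0]))  # 0.6 -> a relatively easy item
```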
58
the __________ is used for personality testing; the percentage of individuals who endorsed an item on a personality test
Item-Endorsement Index
59
The optimal average item difficulty is approximately _________, with items on the test ranging in difficulty from about 30% to 80%
50%
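One rule of thumb found in measurement texts (an elaboration, not stated on the card) places the optimal difficulty for a selected-response item halfway between the chance success rate $g$ and 1.00:

$$p_{\text{optimal}} = \frac{g + 1.00}{2}$$

so a true-false item ($g = .50$) targets $p = .75$, and a four-option multiple-choice item ($g = .25$) targets $p = .625$.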
60
items on an ability test are arranged in order of increasing difficulty
Omnibus Spiral Format
61
provides an indication of the internal consistency of a test
Item-Reliability Index
62
The higher the ___________, the greater the test's internal consistency
Item-Reliability Index
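A common way of writing this index, consistent with the definition above (the notation is mine): the product of the item-score standard deviation and the item-total correlation,

$$\text{item-reliability index} = s_i \, r_{iT}$$

where $s_i$ is the standard deviation of scores on item $i$ and $r_{iT}$ is the correlation between item score and total test score.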
63
designed to provide an indication of the degree to which a test is measuring what it purports to measure
Item-Validity Index
64
The higher the Item-Validity Index, the greater the test's _________
criterion-related validity
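The parallel formula for this index (again, notation mine) replaces the item-total correlation with an item-criterion correlation:

$$\text{item-validity index} = s_i \, r_{iC}$$

where $r_{iC}$ is the correlation between the item score and the criterion measure.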
65
measure of item discrimination; the difference between the proportion of high scorers answering an item correctly and the proportion of low scorers answering the same item correctly
Item-Discrimination Index
66
compares people who have done well with those who have done poorly
Extreme Group Method
67
the difference between these two proportions (high-scorer proportion minus low-scorer proportion)
Discrimination Index
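A minimal Python sketch of the extreme group computation; the group size and counts are invented:

```python
def discrimination_index(upper_correct, lower_correct, group_size):
    """d = proportion correct in the upper group minus the lower group."""
    return (upper_correct - lower_correct) / group_size

# 27 of the 30 highest scorers vs. 12 of the 30 lowest scorers got the item right:
print(discrimination_index(27, 12, 30))  # d = 0.5
```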
68
correlation between a dichotomous variable and a continuous variable
Point-Biserial Method
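One standard formula for this correlation (an elaboration, assuming the population standard deviation of total scores):

$$r_{pb} = \frac{\bar{X}_1 - \bar{X}_0}{s_X} \sqrt{p(1 - p)}$$

where $\bar{X}_1$ and $\bar{X}_0$ are the mean total scores of testtakers who answered the item correctly and incorrectly, $s_X$ is the standard deviation of all total scores, and $p$ is the proportion answering correctly.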
69
graphic representation of item difficulty and discrimination
Item-Characteristic Curve
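The card does not name a model, but a common way such curves are drawn is the two-parameter logistic model from item response theory:

$$P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}$$

where the slope $a_i$ reflects the item's discrimination and the location $b_i$ its difficulty.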
70
a problem that has eluded any universally accepted solution
Guessing
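One classical (and still debated) response to guessing is the correction-for-guessing formula; with $R$ items right, $W$ items wrong, and $k$ options per item:

$$S_c = R - \frac{W}{k - 1}$$

so on a four-option test, each wrong answer subtracts one third of a point from the raw number correct.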
71
the testtaker obtains a measure of the level of the trait; thus, a high score suggests a high level of the trait being measured
Cumulative Model
72
testtaker responses earn credit toward placement in a particular class or category with other testtakers whose pattern of responses is similar in some way
Class Scoring/Category Scoring
73
compares a testtaker's score on one scale within a test to another scale within that same test (the two scales may reflect unrelated constructs)
Ipsative Scoring
74
characterize each item according to its strengths and weaknesses
- as revision proceeds, the advantage of writing a large item pool becomes more apparent, because some items will be removed and must be replaced by items from the item pool
IV. Test Revision
75
revalidation of a test on a sample of testtakers other than those on whom test performance was originally found to be a valid predictor of some criterion; often results in validity shrinkage
Cross-Validation
76
decrease in item validities that inevitably occurs after cross-validation
Validity Shrinkage
77
conducted on two or more tests using the same sample of testtakers
Co-validation
78
creation of norms or the revision of existing norms
Co-norming
79
test protocol scored by a highly authoritative scorer, designed as a model for scoring and as a mechanism for resolving scoring discrepancies
Anchor Protocol
80
discrepancy between scoring in an anchor protocol and the scoring of another protocol
Scoring Drift
81
an item functions differently in one group of testtakers than in another group known to have the same level of the underlying trait
Differential Item Functioning
82
test developers scrutinize item response curves group by group, looking for DIF items
DIF Analysis
83
items for which respondents from different groups, at the same level of the underlying trait, have different probabilities of endorsement as a function of their group membership
DIF Items
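A minimal Python sketch of a first-pass DIF screen: compare endorsement rates across groups after matching testtakers on total score. The data layout is invented; operational DIF analyses use procedures such as the Mantel-Haenszel statistic or IRT-based comparisons.

```python
from collections import defaultdict

def endorsement_by_group(records):
    """records: iterable of (group, total_score, endorsed) for one item.
    Returns (group, total_score) -> proportion endorsing the item."""
    counts = defaultdict(lambda: [0, 0])  # (group, score) -> [endorsed, n]
    for group, score, endorsed in records:
        counts[(group, score)][0] += int(endorsed)
        counts[(group, score)][1] += 1
    # Large gaps between groups at the same score level flag potential DIF.
    return {key: e / n for key, (e, n) in counts.items()}
```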
84
- the test administered may be different for each testtaker, depending on the testtaker's performance on the items presented
- reduces floor and ceiling effects
Computerized Adaptive Testing
85
subtest used to direct or route the testtaker to a suitable level of items
Routing Test
86
method of setting cut scores that entails a histographic representation of items and expert judgments regarding item effectiveness
Item-Mapping Method
87
the level at which the minimum criterion number of correct responses is obtained
Basal Level
88
- standardized test administration is assured for testtakers, and variation is kept to a minimum
- test content and length are tailored according to the taker's ability
Computer Assisted Psychological Assessment