Methods and Statistics used in Research Studies and Test Construction Flashcards
an umbrella term for all that goes into the process of creating a test
Test Development
brainstorming of ideas about what kind of test a developer wants to publish
- stage wherein the following are determined: construct, goal, user, taker, administration, format, response, benefits, costs, interpretation
- determines whether the test would be norm-referenced or criterion-referenced
I. Test Conceptualization
preliminary research surrounding the creation of a prototype of the test
Pilot Work/Pilot Study/Pilot Research
stage in the process that entails writing test items, revisions, formatting, setting scoring rules
- it is not good to create an item that contains numerous ideas
II. Test Construction
reservoir or well from which the items will or will not be drawn for the final version of the test
Item Pool
relatively large and easily accessible collection of test questions
Item Banks
refers to an interactive, computer administered test-taking process wherein items presented to the testtaker are based in part on the testtaker’s performance on previous items
Computerized Adaptive Testing
occurs when there is some lower limit on a survey or questionnaire and a large percentage of respondents score near this lower limit (testtakers have low scores)
Floor Effects
occurs when there is some upper limit on a survey or questionnaire and a large percentage of respondents score near this upper limit (testtakers have high scores)
Ceiling Effects
ability of the computer to tailor the content and order of presentation of items on the basis of responses to previous items
Item Branching
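The branching idea above can be sketched in Python. This is a minimal illustration (the step rule and bounds are invented); real adaptive systems select items with IRT-based models rather than a fixed step.

```python
# Minimal sketch of item branching: the next item's difficulty depends on
# whether the previous response was correct. The step size and [0, 1]
# difficulty range are invented for illustration.

def next_difficulty(current: float, was_correct: bool, step: float = 0.1) -> float:
    """Move up in difficulty after a correct answer, down after an incorrect one."""
    new = current + step if was_correct else current - step
    return min(1.0, max(0.0, new))  # keep difficulty within [0, 1]

# Example: starting at 0.5, a testtaker answers correct, correct, incorrect
d = 0.5
for correct in (True, True, False):
    d = next_difficulty(d, correct)
print(round(d, 2))  # 0.6
```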
form, plan, structure, arrangement, and layout of individual test items
Item Format
offers two alternatives for each item
Dichotomous Format
each item has more than two alternatives
Polychotomous Format
a format where respondents are asked to rate a construct
Category Format
subject receives a long list of adjectives and indicates whether each one is characteristic of himself or herself
Checklist
items are arranged from weaker to stronger expressions of attitude, belief, or feelings
Guttman Scale
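The cumulative property of a Guttman scale (endorsing a stronger statement implies endorsing all weaker ones) can be checked with a small sketch; the function name is mine:

```python
# A response vector fits the Guttman (cumulative) pattern when it is a run of
# endorsements (1s) followed by non-endorsements (0s), with items ordered from
# weaker to stronger expressions of the attitude.

def is_guttman_pattern(responses: list[int]) -> bool:
    """True if no endorsement (1) appears after a non-endorsement (0)."""
    return "01" not in "".join(map(str, responses))

print(is_guttman_pattern([1, 1, 1, 0, 0]))  # True  (perfect cumulative pattern)
print(is_guttman_pattern([1, 0, 1, 0, 0]))  # False (violates the ordering)
```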
require testtakers to select response from a set of alternative responses
Selected-Response Format
Has three elements: a stem (the question), a correct option, and several incorrect alternatives (distractors or foils)
- should have only one correct answer; alternatives should be grammatically parallel, of similar length, and fit grammatically with the stem
- avoid ridiculous distractors, excessive length, and "all of the above"/"none of the above" options (25% chance of guessing correctly with four options)
Multiple Choice
a distractor that is chosen equally by both high- and low-performing groups and that enhances the consistency of test results
Effective Distractors
may hurt the reliability of the test because they are time-consuming to read and can limit the number of good items
Ineffective Distractors
less likely to be chosen; may affect the reliability of the test because testtakers may guess from the remaining options
Cute Distractors
Test taker is presented with two columns: Premises and Responses
Matching Item
Usually takes the form of a sentence that requires the testtaker to indicate whether the statement is or is not a fact (50% chance of guessing correctly)
Binary Choice
requires testtakers to supply or to create the correct answer, not merely selecting it
Constructed-Response Format
Requires the examinee to provide a word or phrase that completes a sentence
Completion Item
Should be written clearly enough that the testtaker can respond succinctly with a short answer
Short-Answer
allows creative integration and expression of the material
Essay
process of setting rules for assigning numbers in measurement
Scaling
- involve classification or categorization based on one or more distinguishing characteristics
- Label and categorize observations but do not make any quantitative distinctions between observations
- mode
Nominal
rank ordering on some characteristics is also permissible
- median
Ordinal
contains equal intervals, has no absolute zero point (even negative values have an interpretation)
Interval
- has a true zero point (if the score is zero, it means none/null)
- easiest to manipulate statistically
Ratio
- produces ordinal data by presenting with pairs of two stimuli which they are asked to compare
- respondent is presented with two objects at a time and asked to select one object according to some criterion
Paired Comparison
respondents are presented with several items simultaneously and asked to rank them in order of priority
Rank Order
respondents are asked to allocate a constant sum of units, such as points, among a set of stimulus objects with respect to some criterion
Constant Sum
sort objects based on similarity with respect to some criterion
Q-Sort Technique
rate the objects by placing a mark at the appropriate position on a continuous line that runs from one extreme of the criterion variable to the other
- e.g., rating Guardians of the Galaxy as the best Marvel movie of Phase 4
Continuous Rating
having numbers or brief descriptions associated with each category
- e.g., 1 if you like the item the most, 2 if so-so, 3 if you hate it
Itemized Rating
respondents indicate their own attitudes by checking how strongly they agree or disagree with carefully worded statements that range from very positive to very negative toward the attitudinal object
- principle of measuring attitudes by asking people to respond to a series of statements about a topic, in terms of the extent to which they agree with them
Likert Scale
a 100-mm line that allows subjects to express the magnitude of an experience or belief
Visual Analogue Scale
derives the respondent's attitude toward the given object by asking him to select an appropriate position on a scale between two bipolar opposites
Semantic Differential Scale
developed to measure the direction and intensity of an attitude simultaneously
Stapel Scale
final score is obtained by summing the ratings across all the items
Summative Scale
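Summative (Likert) scoring can be sketched as follows, assuming a 5-point scale and reverse-keying for negatively worded items; the item keys in the example are invented:

```python
# Summative scoring: sum the ratings across items, flipping reverse-keyed
# items so that negatively worded statements score in the same direction
# (on a 5-point scale, 1 <-> 5, 2 <-> 4, etc.).

def score_likert(ratings: list[int], reverse: set[int], points: int = 5) -> int:
    """Sum ratings; items whose index is in `reverse` are reverse-keyed."""
    return sum((points + 1 - r) if i in reverse else r
               for i, r in enumerate(ratings))

# Four items, item 2 (0-indexed) negatively worded:
print(score_likert([4, 5, 2, 3], reverse={2}))  # 4 + 5 + (6-2) + 3 = 16
```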
involves the collection of a variety of different statements about a phenomenon which are ranked by an expert panel in order to develop the questionnaire
- allows multiple answers
Thurstone Scale
the respondent must choose between two or more equally socially acceptable options
Ipsative Scale
the test should be tried out on people who are similar in critical respects to the people for whom the test was designed
- An informal rule of thumb: no fewer than 5 subjects, and preferably as many as 10, for each item (the more, the better)
III. Test Tryout
Risk of using few subjects = ______
phantom factors emerge
A good test item is one that is answered _________ by high scorers as a whole
correctly
administering a large pool of test items to a sample of individuals who are known to differ on the construct being measured
Empirical Criterion Keying
statistical procedure used to analyze items, evaluate test items
Item Analysis
employed to examine the correlation between each item and the total score of the test
Discriminability Analysis
suggest a sample of behavior of an individual
Item
a blueprint of the test in terms of number of items per difficulty, topic importance, or taxonomy
Table of Specification
Define clearly what to measure, generate item pool, avoid long items, keep the level of reading difficulty appropriate for those who will complete the test, avoid double-barreled items, consider making positive and negative worded items
Guidelines for Item writing
items that convey more than one idea at the same time
Double-Barreled Items
defined by the number of people who get a particular item correct
Item Difficulty
calculated as the proportion of the total number of testtakers who answered the item correctly; the larger the proportion, the easier the item
Item-Difficulty Index
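The index can be sketched as a proportion correct (1 = correct, 0 = incorrect):

```python
# Item-difficulty index p: the proportion of testtakers who answered the
# item correctly. Higher p means an easier item.

def item_difficulty(responses: list[int]) -> float:
    """responses: one entry per testtaker, 1 = correct, 0 = incorrect."""
    return sum(responses) / len(responses)

# 8 of 10 testtakers answered correctly -> an easy item
print(item_difficulty([1, 1, 1, 1, 1, 1, 1, 1, 0, 0]))  # 0.8
```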
__________ for personality testing, percentage of individual who endorsed an item in a personality test
Item-Endorsement Index
The optimal average item difficulty is approx. _________ with items on the testing ranging in difficulty from about 30% to 80%
50%
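A common rule of thumb adjusts the optimal difficulty for guessing by taking the midpoint between the chance success rate and 1.0; a sketch:

```python
# Guessing-adjusted optimal difficulty: midpoint between the chance success
# rate (1 / number of options) and a perfect score of 1.0.

def optimal_difficulty(n_options: int) -> float:
    chance = 1 / n_options
    return (1 + chance) / 2

print(optimal_difficulty(4))  # 0.625 for a 4-option multiple-choice item
print(optimal_difficulty(2))  # 0.75 for a true/false item
```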
items in an ability test are arranged in order of increasing difficulty
Omnibus Spiral Format
provides an indication of the internal consistency of a test
Item-Reliability Index
The higher ___________, the greater the test’s internal consistency
Item-Reliability index
designed to provide an indication of the degree to which a test is measuring what it purports to measure
Item-Validity Index
The higher Item-Validity index, the greater the test’s _________
criterion-related validity
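For a dichotomous item, both the item-reliability and item-validity indices can be computed as the item's standard deviation times the relevant correlation; a sketch (the correlations are assumed inputs):

```python
import math

# For a dichotomous item with difficulty p, the item standard deviation is
# sqrt(p * (1 - p)). The item-reliability index multiplies it by the
# item-total correlation; the item-validity index by the item-criterion
# correlation.

def item_sd(p: float) -> float:
    return math.sqrt(p * (1 - p))

def item_reliability_index(p: float, r_item_total: float) -> float:
    return item_sd(p) * r_item_total

def item_validity_index(p: float, r_item_criterion: float) -> float:
    return item_sd(p) * r_item_criterion

# An item with difficulty .50 (maximal sd) and item-total correlation .40:
print(round(item_reliability_index(0.5, 0.4), 2))  # 0.2
```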
measure of item discrimination; measure of the difference between the proportion of high scorers answering an item correctly and the proportion of low scorers answering the item correctly
Item-Discrimination Index
compares people who have done well with those who have done poorly
Extreme Group Method
difference between these proportions
Discrimination Index
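The extreme-group computation can be sketched as:

```python
# Extreme-group discrimination index d: proportion correct in the
# high-scoring group minus proportion correct in the low-scoring group.
# A positive d means high scorers got the item right more often.

def discrimination_index(upper: list[int], lower: list[int]) -> float:
    """upper/lower: 1 = correct, 0 = incorrect, one entry per testtaker."""
    p_upper = sum(upper) / len(upper)
    p_lower = sum(lower) / len(lower)
    return p_upper - p_lower

# 9 of 10 high scorers vs. 3 of 10 low scorers answered correctly:
print(round(discrimination_index([1] * 9 + [0], [1] * 3 + [0] * 7), 2))  # 0.6
```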
correlation between a dichotomous variable and a continuous variable
Point-Biserial Method
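The point-biserial correlation can be computed from first principles; a sketch using the population standard deviation (the item must have at least one correct and one incorrect response):

```python
import math

# Point-biserial correlation between a dichotomous item (0/1) and continuous
# total scores: (mean_correct - mean_incorrect) / sd * sqrt(p * (1 - p)).

def point_biserial(dichot: list[int], scores: list[float]) -> float:
    n = len(scores)
    n1 = sum(dichot)                       # number answering correctly
    mean1 = sum(s for d, s in zip(dichot, scores) if d == 1) / n1
    mean0 = sum(s for d, s in zip(dichot, scores) if d == 0) / (n - n1)
    p = n1 / n
    mean_all = sum(scores) / n
    sd = math.sqrt(sum((s - mean_all) ** 2 for s in scores) / n)  # population sd
    return (mean1 - mean0) / sd * math.sqrt(p * (1 - p))

# Two correct answers paired with the higher total scores:
print(round(point_biserial([1, 1, 0, 0], [3, 4, 1, 2]), 2))  # 0.89
```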
graphic representation of item difficulty and discrimination
Item-Characteristic Curve
a problem that has eluded any universally accepted solution
Guessing
testtaker obtains a measure of the level of the trait; thus, a high score may suggest a high level of the trait being measured
Cumulative Model
testtakers' responses earn credit toward placement in a particular class or category with other testtakers whose pattern of responses is similar in some way
Class Scoring/Category Scoring
compares a testtaker's score on one scale within a test to another scale within that same test (two unrelated constructs)
Ipsative Scoring
characterize each item according to its strengths and weaknesses
- As revision proceeds, the advantage of writing a large item pool becomes more apparent: items that are removed must be replaced by items from the pool
IV. Test Revision
revalidation of a test on a sample of testtakers other than those on whom test performance was originally found to be a valid predictor of some criterion; often results in validity shrinkage
Cross-Validation
decrease in item validities that inevitably occurs after cross-validation
Validity Shrinkage
conducted on two or more tests using the same sample of testtakers
Co-validation
creation of norms or the revision of existing norms using the same sample of testtakers for two or more tests
Co-norming
a test protocol scored by a highly authoritative scorer that is designed as a model for scoring and a mechanism for resolving scoring discrepancies
Anchor Protocol
discrepancy between scoring in an anchor protocol and the scoring of another protocol
Scoring Drift
an item functions differently in one group of testtakers relative to another group known to have the same level of the underlying trait
Differential Item Functioning
test developers scrutinize group by group item response curves looking for DIF Items
DIF Analysis
items for which respondents from different groups, at the same level of the underlying trait, have different probabilities of endorsement as a function of their group membership
DIF Items
The test administered may be different for each testtaker, depending on the testtaker's performance on the items presented
- Reduces floor and ceiling effects
Computerized Adaptive Testing
subtest used to direct or route the testtaker to a suitable level of items
Routing Test
method of setting cut scores that entails a histographic representation of items and expert judgments regarding item effectiveness
Item-Mapping Method
the level at which the minimum criterion number of correct responses is obtained
Basal Level
standardized test administration is assured for testtakers and variation is kept to a minimum
Test content and length are tailored according to the taker's ability
Computer Assisted Psychological Assessment