Lecture 3 Test Development (Catherine) Flashcards

To provide the main content covered in Lecture 3 on Test Development

1
Q

What are the 5 stages of Test Development?

A
  1. Test Conceptualisation
  2. Test Construction
  3. Test Try Out
  4. Item Analysis
  5. Test Revision
2
Q

What is the aim of Test Conceptualisation?

A

To establish the reasons for designing or revising a psychological test

3
Q

What is the aim of Test Construction?

A

To ascertain how numbers should be assigned to psychological attributes and which scales of measurement should be used

4
Q

What is the purpose of the Test Try Out phase?

A

To consider to whom the test should be administered and to generate data about its reliability and validity

5
Q

What is the purpose of the Item Analysis phase?

A
  • To identify the properties of the test’s items & scales
  • To ascertain which types of analyses need to be performed to understand item validity, reliability, difficulty and discrimination
  • In short, to identify which are ‘good’ items and which are ‘bad’ items
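The 'good' vs 'bad' distinction above is usually made with simple statistics. As a minimal sketch (the response matrix is invented example data, and the discrimination index here is just a top-half/bottom-half comparison, one of several possible indices):

```python
# Hypothetical illustration of classical item analysis: item difficulty
# (proportion of test takers answering correctly) and a simple
# discrimination index (difficulty among top scorers minus bottom scorers).

responses = [  # rows = test takers, columns = items (1 = correct, 0 = incorrect)
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]

n_takers = len(responses)
totals = [sum(row) for row in responses]

# Item difficulty: proportion of all test takers who got each item right.
difficulty = [sum(row[i] for row in responses) / n_takers
              for i in range(len(responses[0]))]

# Discrimination: compare the top half of scorers with the bottom half.
ranked = sorted(range(n_takers), key=lambda r: totals[r], reverse=True)
top, bottom = ranked[: n_takers // 2], ranked[n_takers // 2:]
discrimination = [
    sum(responses[r][i] for r in top) / len(top)
    - sum(responses[r][i] for r in bottom) / len(bottom)
    for i in range(len(responses[0]))
]

print(difficulty)       # per-item proportion correct
print(discrimination)   # positive values = item separates high/low scorers
```

A 'good' item on this sketch is one of moderate difficulty with a clearly positive discrimination index.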
6
Q

What is the purpose of Test Revision?

A

Determining which items of the test may need to be revised or discarded
*Establishing whether the test is measuring what it was designed to measure

7
Q

What are the key stages involved in test conceptualisation?

A
  • Conceiving the need for a test
  • Considering the assumptions for test design
  • Running a Pilot Test
  • Conceptualising a Measure
8
Q

What are the key considerations when Conceiving the need for a test?

A
  • There is always a question in the mind of the test developer, e.g. “There ought to be an instrument designed to measure [some psychological construct] in [such & such] a way”
  • Even if a measure does exist, it may have poor psychometric properties (e.g. low reliability, poor construct validity and suspect content validity)
9
Q

In what way do the key questions a test developer poses when designing a test correspond with assumption 1 of test design?

A

Assumption 1 Psychological states & traits exist:

  1. What is the test designed to measure?
  2. Is there a need for this test?
  3. What content will this test cover?
10
Q

In what way do the key questions a test developer poses when designing a test correspond with assumption 2 of test design?

A

Assumption 2 Psychological states & traits can be Quantified & Measured

  1. What is the ideal format for this test?
  2. What type of responses will be required of test takers?
11
Q

In what way do the key questions a test developer poses when designing a test correspond with assumption 3 of test design?

A

Assumption 3 Test related behaviour predicts non-test related behaviour

  1. Should more than one form of the test be developed?
  2. How will meaning be attributed to scores on this test?
12
Q

In what way do the key questions a test developer poses when designing a test correspond with assumption 4 of test design?

A

Assumption 4 Psychological tests have strengths & weaknesses

  1. Who benefits from administration of this test?
  2. Is there any potential for harm as a result of the administration of the test?
13
Q

What are the key considerations in the stage of pilot work?

A
  • Pilot work refers to the preliminary work surrounding the creation of a prototype of a test
  • During this phase a test developer attempts to determine how best to measure a targeted construct using a number of strategies: e.g. literature review, experimentation
  • Test items may be pilot studied to evaluate whether they should be included in the final form of the instrument (creation, revision, deletion of test items)
14
Q

What are the 4 key considerations when conceptualising a measure to be used in a test?

A
  • What is the test designed to measure?
  • Is there a need for this test?
  • What content will the test cover?
  • What is the ideal format of the test?
15
Q

What are the key elements required for test construction?

A

*Identifying appropriate measurement scales (N.O.I.R.: nominal, ordinal, interval, ratio)
*Developing a scale appropriate for the test, e.g.:
-a summative scale
-a binary rating scale
-a faces scale
-the method of paired comparisons
-comparative scaling
-categorical scaling

16
Q

Describe the key aspects of a summative scale and give examples

A
  • A summative scale is a scale on which a test taker’s answers to test/scale items may be summed to create an aggregate score
  • This aggregate score is thought to indicate the strength of the test taker’s trait, attitude, or other characteristic
  • A Likert scale is a popular summative scale
  • Test developers often treat summative scales as interval even though they are actually ordinal
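Summing to an aggregate score can be sketched in a few lines. This is a hypothetical illustration: the ratings, the 1-5 response range, and the reverse-keyed item are all invented for the example.

```python
# Minimal sketch of scoring a summative (Likert-type) scale, assuming
# items rated 1 ("strongly disagree") to 5 ("strongly agree").

def score_summative(ratings, reverse_keyed=()):
    """Sum item ratings into an aggregate score, reverse-scoring
    negatively worded items (on a 1-5 scale: 1 <-> 5, 2 <-> 4)."""
    total = 0
    for i, r in enumerate(ratings):
        total += (6 - r) if i in reverse_keyed else r
    return total

# Suppose item 2 (index 2) is worded negatively, so it is reverse-scored.
ratings = [4, 5, 2, 3, 4]
print(score_summative(ratings, reverse_keyed={2}))  # 4 + 5 + (6-2) + 3 + 4 = 20
```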
17
Q

Describe the key aspects of a Likert (summative) scale

A

A Likert scale typically provides the test taker with 5 or 7 possible responses along an ‘agree’ to ‘disagree’ continuum
*Easy to construct and used extensively in psychology because it yields ordinal-level data, which approximates interval-level data well enough for data-analytic purposes

18
Q

What are the key points of the Method of paired comparisons

A

*Test-taker has to choose one of two options (e.g. a statement, object, picture) on the basis of some rule
*The value (e.g. 1 or 0) of each option in each paired comparison is determined by judges prior to test administration
*test takers must select the behaviour they feel is most justified
(NB Ordinal Data)
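The scoring rule above can be sketched as follows; the pairs, options, and judge-assigned values are invented example data:

```python
# Hedged sketch of scoring the method of paired comparisons: judges assign
# a value (here 1 or 0) to each option in every pair before administration,
# and a test taker's score is the sum of the values of the options chosen.

pair_values = {            # pair id -> value of each option, set by judges
    "pair1": {"A": 1, "B": 0},
    "pair2": {"C": 0, "D": 1},
    "pair3": {"E": 1, "F": 0},
}

choices = {"pair1": "A", "pair2": "C", "pair3": "E"}  # one test taker's picks

score = sum(pair_values[p][opt] for p, opt in choices.items())
print(score)  # 1 + 0 + 1 = 2
```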

19
Q

What are the key points of Comparative Scaling?

A

*a test taker must sort or rank stimuli according to some rule e.g. best to worst, most justifiable to least justifiable
(NB Ordinal Data)

20
Q

What are the key points of Categorical Scaling?

A

Test takers must categorise stimuli according to some rule
e.g. sort the behaviours into good sleep practice and bad sleep practice
(NB can give nominal or ordinal data)

21
Q

What are the key points to consider when Writing Test Items?

A
  • The first step in writing items is to create an item pool or item bank
  • as a general rule it is advisable to have at least twice as many items as you will include in your final draft test/measure
  • the process of test try out, item analysis, and test revision, will knock many of these items out as the test developer strives to maximise reliability & validity
22
Q

What types of item format are there that can be used in writing test items?

A
  1. Constructed response format
    * completion answer
    * short answer
    * essay
  2. Selected response format
    * matching choice
    * true-false
    * multiple-choice
23
Q

What are the key points from Completion items from the constructed response format of test items?

A

Completion items:

  • Test-taker provides a word or phrase that completes a sentence
  • There must be a specific correct answer
  • Poorly designed completion items can lead to scoring problems
24
Q

What are the key points from Short Answer & Essay items from the constructed response format of test items?

A
  • Short Answer items:
    * a word, a sentence, or a paragraph can all qualify as a short answer
    * if more than 2 paragraphs are required, the item is more likely to be an essay item
  • Essay items:
    * useful when the examiner wants the test-taker to demonstrate in-depth knowledge
    * allow for creative integration & expression of what has been learned
    * only a limited range of material can be examined this way
25
Q

What are the key points from Matching Choice items from the Selected Response format of test items?

A

Matching items:

The test taker is presented with premises and responses and must match the correct ones to each other

26
Q

What are the key points from True-false items from the Selected Response format of test items?

A

*True-False items are also known as binary-choice items
*Must contain a single unambiguous idea and be short
*Binary-choice formats include: yes-no, agree-disagree, true-false

27
Q

What are the key points from Multiple Choice items from the Selected Response format of test items?

A
  • Multiple choice items have 3 elements:
    * a stem
    * a correct option
    * multiple incorrect alternative options (distractors, foils)
  • There must be one correct answer and all options must be plausible and grammatically comparable
28
Q

What is used to evaluate criterion-related validity

A

Expectancy tables are used in evaluating criterion-related validity

29
Q

What are the key points from Complex Multiple Choice items from the Selected Response format of test items?

A

Complex multiple-choice formats include:

  • Classification,
  • If-Then Conditions
  • Multiple conditions
  • Oddity
30
Q

What are the key points from Classification in Complex Multiple Choice items from the Selected Response format of test items?

A

The test taker classifies a person, object, or condition into one of several categories described in the stem:
e.g. Jean Piaget is best described as a ….. psychologist:
a. Clinical
b. Developmental
c. Psychometric
d. Social
(b)

31
Q

What are the key points from If-Then conditions in Complex Multiple Choice items from the Selected Response format of test items?

A

The test taker must decide the consequence of one or more conditions being present
e.g. If the true variance of a test increases but the error variance remains constant, which of the following will occur?
a. reliability will increase
b. reliability will decrease
c. observed variance will decrease
d. neither reliability nor observed variance will change
(a)
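Why (a) holds can be checked numerically. The variance figures below are invented, but the ratio is the classical definition of reliability as true variance over observed (true + error) variance:

```python
# Holding error variance constant while true variance grows raises reliability.

def reliability(true_var, error_var):
    """Classical reliability: true variance / observed variance."""
    return true_var / (true_var + error_var)

error_var = 4.0
print(reliability(16.0, error_var))  # 0.8
print(reliability(36.0, error_var))  # 0.9 (true variance up, reliability up)
```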

32
Q

What are the key points from Multiple conditions in Complex Multiple Choice items from the Selected Response format of test items?

A

The test taker uses two or more conditions in the statements to draw a conclusion:
Given Mary’s raw score on a test is 60, the test mean is 59, and the standard deviation is 2, what is Mary’s z score?
a. -2.00
b. -0.50
c. 0.50
d. 2.00
(c)
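The arithmetic behind (c) as a one-liner:

```python
# Worked version of the item above: Mary's z score.
def z_score(raw, mean, sd):
    return (raw - mean) / sd

print(z_score(60, 59, 2))  # (60 - 59) / 2 = 0.5
```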

33
Q

What are the key points from Multiple true-false conditions in Complex Multiple Choice items from the Selected Response format of test items?

A

The test taker decides whether one, all, or none of the two or more statements or conditions listed in the stem are correct:

Is it true that (1) Alfred Binet was the father of intelligence testing, & that (2) his first intelligence test was published in 1916?
a. Both 1 & 2
b. 1 but not 2
c. 2 but not 1
d. Neither 1 nor 2
(b) (Binet’s first intelligence test, the Binet-Simon scale, was published in 1905; 1916 saw the Stanford-Binet revision)
34
Q

What are the key points from Oddity conditions in Complex Multiple Choice items from the Selected Response format of test items?

A

The test taker indicates which option does not belong with the others:

Which of the following names does not belong with the others?

a. Alfred Adler
b. Sigmund Freud
c. Carl Jung
d. Carl Rogers
(a) Adler is not technically a personality theorist; Adler also confronted his patients rather than building rapport with them