Chapter 10 - Developing a Test Flashcards

1
Q

What are the 10 steps of the test development process?

A
  1. Define the test universe, target audience, and testing purpose.
  2. Develop a test plan.
  3. Compose the test items.
  4. Write administration instructions.
  5. Conduct pilot tests.
  6. Conduct item analysis.
  7. Revise the test.
  8. Validate the test.
  9. Develop the norms.
  10. Compile test manual.
2
Q

How do they define the test universe?

A
  • The developer prepares a working definition of the construct that the test will measure.
  • May involve a literature review of the psychological construct.
3
Q

How do they define a target audience?

A
  • The developer makes a list of the characteristics of the persons who will take the test.
  • Particularly those characteristics that affect how test takers will respond to each question.
  • May also have to consider the reading level, potential disabilities, or motivations.
4
Q

How do they define the test purpose?

A
  • The purpose includes not only what the test will measure, but also how the test users will use the test scores.
  • Could take the normative, ipsative, or criterion approach.
5
Q

Normative Approach

A
  • Comparing test takers to other test takers.
  • Ex: Whoever receives the highest score gets the job.
6
Q

Ipsative Approach

A
  • Compare a test taker's standing on one trait with their own standing on other traits.
  • Ex: How many pushups someone can do in a minute.
7
Q

Criterion Approach

A
  • Used to indicate achievement.
  • Ex: Must get at least 50% to pass the exam.
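To make the three approaches concrete, here is a minimal Python sketch using invented scores; the applicant names, traits, cutoff, and numbers are illustrative assumptions, not examples from the chapter.

```python
# Illustrative sketch of the normative, criterion, and ipsative approaches.
# All names and numbers below are hypothetical.

applicant_totals = {"Ana": 41, "Ben": 37, "Chris": 45, "Dana": 33}

# Normative approach: compare test takers with one another.
top_scorer = max(applicant_totals, key=applicant_totals.get)
print(f"Normative: {top_scorer} has the highest score and gets the job.")

# Criterion approach: compare each score with a fixed standard of achievement.
cutoff = 40  # e.g., at least 50% of 80 possible points to pass
for name, score in applicant_totals.items():
    result = "passes" if score >= cutoff else "fails"
    print(f"Criterion: {name} {result} (cutoff = {cutoff})")

# Ipsative approach: compare one person's standing on a trait with
# their own standing on other traits.
dana_profile = {"verbal": 52, "quantitative": 38, "spatial": 45}
strongest = max(dana_profile, key=dana_profile.get)
print(f"Ipsative: Dana's strongest area, relative to herself, is {strongest}.")
```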
8
Q

How do they create a test plan?

A
  • Specify the characteristics of the test, including an operational definition of the construct and the content to be measured, the format of the questions, and the administration and scoring of the test.
9
Q

How do they define a construct?

A
  • Reviewing the literature about the construct and any available tests.
  • Creating a concise definition of the construct.
  • Includes operationalizing the construct in terms of observable and measurable behaviours.
  • Also provides boundaries for the test: what should be tested and what should be excluded from the content.
10
Q

How do you choose the test format?

A
  • Refers to the type of questions that the test will contain.
  • Test formats provide two elements: A stimulus to which the test taker responds and a mechanism for response.
11
Q

Objective Test Format

A
  • One response that is designated as “correct” or provides evidence of a specific construct.
  • Ex: Multiple choice questions
12
Q

Subjective Test Formats

A
  • Do not have a single response that is designated as “correct”.
13
Q

Projective Tests

A
  • Considered subjective test formats because the stimuli for these tests are ambiguous pictures.
  • Ex: The Rorschach Inkblot Test
14
Q

How do we choose how to administer and score a test?

A
  • Important because the plan will influence the format and content of the test items.
  • How will the test be taken? How long will it take? Will it be administered in a group or individually? How will it be scored? What kind of data can we expect?
15
Q

The Cumulative Model of Scoring

A
  • Probably the most common method for determining an individual’s final test score.
  • Can yield interval-level data.
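As an illustration only, here is a minimal Python sketch of cumulative scoring, in which the points earned on each item are summed into a total; the answer key and responses are invented for the example.

```python
# Cumulative model of scoring: the final score is the sum of the points
# earned across all items. The answer key and responses are hypothetical.

answer_key = ["b", "d", "a", "a", "c"]
responses  = ["b", "d", "c", "a", "c"]

# One point per correctly answered item; points accumulate into a total.
total_score = sum(1 for key, resp in zip(answer_key, responses) if resp == key)
print(f"Cumulative score: {total_score} out of {len(answer_key)}")  # 4 out of 5
```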
16
Q

The Categorical Model of Scoring

A
  • Used to place test takers in a particular group or class.
  • Typically yields nominal-level data.
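As a rough illustration, here is a Python sketch of categorical scoring, assuming a hypothetical screening rule that maps a set of item ratings onto a class label; the categories and thresholds are invented, not taken from any real instrument.

```python
# Categorical model of scoring: responses place the test taker into a
# class or category rather than along a point scale. The resulting label
# is nominal-level data. Categories and thresholds below are hypothetical.

def assign_category(item_ratings: list[int]) -> str:
    """Map item ratings (0-3 each) onto a hypothetical screening category."""
    total = sum(item_ratings)
    if total >= 15:
        return "refer for follow-up"
    if total >= 8:
        return "monitor"
    return "no follow-up indicated"

print(assign_category([2, 3, 1, 2, 3, 2, 3]))  # -> "refer for follow-up"
```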
17
Q

The Ipsative Model of Scoring

A
  • Differs from the cumulative and categorical models.
  • The test taker is usually presented with two to four statements in a forced choice format.
  • Each test item will contain statements associated with more than one trait or construct.
  • Scores are computed based on the total number of points each scale in the test receives.
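To show how such forced-choice points might be tallied, here is a Python sketch; the trait names, items, and chosen options are hypothetical.

```python
# Ipsative model of scoring: each forced-choice item pits statements from
# different scales against one another, and the chosen statement adds a
# point to its scale. Traits, items, and choices below are hypothetical.
from collections import Counter

# Each item maps its option letters to the trait that option represents.
forced_choice_items = [
    {"a": "dominance", "b": "sociability"},
    {"a": "sociability", "b": "conscientiousness"},
    {"a": "dominance", "b": "conscientiousness"},
    {"a": "conscientiousness", "b": "sociability"},
]
chosen_options = ["a", "b", "a", "b"]  # one choice per item

# Tally the total points each scale received across the items.
scale_scores = Counter(
    item[choice] for item, choice in zip(forced_choice_items, chosen_options)
)
print(dict(scale_scores))  # {'dominance': 2, 'conscientiousness': 1, 'sociability': 1}
# The profile is interpreted within the person: dominance is this test
# taker's relatively strongest scale, not a rank against other people.
```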
18
Q

Pilot Test

A
  • A scientific evaluation of the test’s performance - followed by revisions to determine the final form that the test will take.
19
Q

Test Items

A
  • The stimulus or questions in a test.
20
Q

How do they choose the item format?

A
  • Based on information in the test plan, such as the target audience, method of administration, and requirements for scoring.
21
Q

Examples of Objective Items

A
  • Multiple Choice
  • True or False
  • Forced Choice
22
Q

Examples of Subjective Items

A
  • Essay Questions
  • Interview Questions
  • Projective Techniques
  • Sentence Completion
23
Q

Performance Assessments

A
  • Require test takers to directly demonstrate their skills and abilities to perform a group of complex behaviours and tasks.
  • Considered a complex item format.
24
Q

Simulation

A
  • Similar to a performance assessment in that it requires test takers to demonstrate their skills and abilities to perform a complex task.
  • Not performed in the actual environment for safety or cost concerns.
25
Q

Portfolio

A
  • Collection of work products that a person gathers over time to demonstrate his or her skills and abilities in a particular area.
26
Q

Response Sets

A
  • Patterns of responding that result in false or misleading information.
  • These sources of error limit the accuracy and usefulness of test scores.
27
Q

Social Desirability

A
  • The tendency of test takers to provide or choose answers that are socially acceptable or that present them in a favorable light.
28
Q

Acquiescence

A
  • The tendency to agree with any ideas or behaviours presented.
29
Q

Random Responding

A
  • Responding to items in a random fashion by marking answers without reading or considering them.
30
Q

Faking

A
  • Refers to the inclination of some test takers to try to answer items in a way that will cause a desired outcome or diagnosis.
31
Q

Constructs

A
  • Tools that help us understand human behaviour.
32
Q

Brevity

A
  • Important when writing test items: use concise and exact language.
  • Short questions reduce errors.