Final Exam Flashcards

1
Q

Patterns of responding to scale items that result in false or misleading information

A

Response bias

2
Q

When an individual agrees (or disagrees) with a statement without regard for the meaning of those statements

A

Acquiescence bias

3
Q

Agreeing to all items regardless of content

A

Yea-saying

4
Q

Disagreeing to all items regardless of content

A

Nay-saying

5
Q

Tendency to avoid or endorse extreme response options

A

Extreme/Moderate responding

6
Q

The tendency for a person to respond in a way that seems socially appealing, regardless of his or her true characteristics.

A

Social desirability bias

7
Q

Test takers intentionally attempt to appear socially desirable

A

Impression management

8
Q

Test takers unintentionally underreport negative aspects of themselves or hold unrealistically positive views of themselves; trait-like

A

Self-deception

9
Q

Respondents are motivated to appear more cognitively impaired, emotionally distressed, physically challenged, or psychologically disturbed than they actually are.

A

Malingering

10
Q

Carelessness or lack of motivation to respond meaningfully (applies to Likert-type items)

A

Random Responding

11
Q

Some respondents may be “luckier” than others and answer items correctly (applies to correct/incorrect items)

A

Guessing

12
Q

May increase accurate responding

May also increase random responding

A

Anonymity

13
Q

Pairs or sets of items that are equally socially desirable

A

Forced choice assessment

14
Q

Instruments used to collect important information from individuals

A

Surveys

15
Q

Using statistical methods to select a sample so that it is representative of the population

A

Probability sampling
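
For illustration, a minimal sketch of simple random sampling, one common form of probability sampling, using NumPy; the sampling frame and sample size are made-up assumptions.

import numpy as np

# Simple random sampling: every member of the frame has an equal,
# known chance of selection (frame and sizes are hypothetical).
rng = np.random.default_rng(seed=42)
population_ids = np.arange(10_000)
sample_ids = rng.choice(population_ids, size=500, replace=False)
print(sample_ids[:10])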

16
Q

The body of knowledge or behaviors that the test represents

A

Testing universe

17
Q

The group of individuals who will take the test

A

Target audience

18
Q

The information that the test will provide to the test user

A

Purpose

19
Q

determine whether students have the skills or knowledge necessary to understand new material
determine how much information students already know about the new material
Decisions made at the beginning of instruction

A

Placement assessments

20
Q

Assessments that help teachers determine what information students are and are not learning during the instructional process.
Decisions made during instruction

A

Formative assessments

21
Q

Assessments that involve an in-depth evaluation of an individual to identify characteristics for treatment or enhancement.
Decisions made during instruction

A

Diagnostic assessments

22
Q

Determine what students do and do not know
Gauge student learning
Assign earned grades

Often the same tests are used for formative and summative assessment
Decisions made after instruction

A

Summative assessments

23
Q

Collections of an individual’s work that highlight and assess aspects of student learning and performance that may be difficult to assess with standardized testing

A

Portfolios

24
Q

When a student’s test performance significantly affects educational paths or choices.

A

High-stakes tests

25
Q

Measure understanding rather than application
Too structured
Typically only true/false and multiple-choice questions

A

Traditional assessment

26
Q

Measures a student’s ability to apply the knowledge and skills he or she has learned in real-world settings

A

Authentic assessment

27
Q

Teaching to the test is _______ for traditional assessment, but ______ for authentic assessment

A

Discouraged, encouraged

28
Q

Treatment methods with documented research evidence that the methods are effective for solving the problem being addressed

A

Evidence-based treatment methods

29
Q

The integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences

A

Evidence-based practice

30
Q

One of the strongest and most consistent predictors of performance

Moderated by job complexity

A

Cognitive Ability/General Mental Ability testing

31
Q

Underlying concepts or constructs that the tests or groups of test questions are measuring

A

Factors

32
Q

An advanced statistical procedure based on the concept of correlation that helps investigators identify the underlying constructs or factors being measured

A

Factor analysis

33
Q

No formal hypothesis about the factors

A

Exploratory factor analysis

34
Q

Factor structure specified in advance based on theory

A

Confirmatory factor analysis

35
Q

Factor Analysis Limitations

A

Relies on a linear approximation

Not well suited for categorical or binary data responses
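
For illustration, a minimal exploratory-factor-analysis sketch using scikit-learn's FactorAnalysis on simulated, continuous item scores (consistent with the linearity limitation above); the loadings and data are made up.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Two hypothetical latent traits generate six observed items plus noise.
traits = rng.normal(size=(300, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
items = traits @ loadings.T + rng.normal(scale=0.5, size=(300, 6))

# Exploratory: no hypothesis in advance about which items load on which factor.
fa = FactorAnalysis(n_components=2)
factor_scores = fa.fit_transform(items)      # estimated factor scores per respondent
print(np.round(fa.components_, 2))           # estimated loadings (factors x items)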

36
Q

A theory that relates the performance of each item to a statistical estimate of the test taker’s ability on the construct being measured

A

Item Response Theory

37
Q

Item Response Theory Limitations

A

Heavily reliant on very large sample sizes

Often not feasible in organizational settings
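
For illustration, a hedged sketch of a two-parameter logistic (2PL) item characteristic curve, one common IRT model relating ability on the construct to item performance; the discrimination (a) and difficulty (b) values are made up.

import numpy as np

def p_correct(theta, a, b):
    """2PL model: probability of a correct response given ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                 # range of estimated abilities
print(np.round(p_correct(theta, a=1.2, b=0.0), 2))
# Higher ability -> higher probability of answering this item correctly.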

38
Q

A good psychological test

A

Representative sample of behaviors

Standardized testing conditions

Scoring rules

39
Q

Statements by professionals regarding what they believe are appropriate and inappropriate behaviors when practicing the profession

A

Ethical standards

40
Q

APA Ethical Principles

A

Beneficence and Nonmaleficence, Fidelity and Responsibility, Integrity, Justice, Respect for People’s Rights and Dignity

41
Q

Estimates a person’s standing on the underlying trait measured, rather than their score on the test

A

Item Response Theory

42
Q

Assigning numbers to phenomena according to specific rules

A

Measurement

43
Q

Most basic level of measurement

Data “in name only”

Numbers assigned to categories to give them labels

A

Nominal Scale

44
Q

Same properties as the nominal scale, adds order

Doesn’t tell us distance between the candidates

A

Ordinal Scale

45
Q

Distance from one point to another is the same

A

Interval Scale

46
Q

Adds an absolute zero point, representing the complete absence of the property measured

A

Ratio Scale

47
Q

Shows the observed distribution of scores

A

Histogram

48
Q

Where distributions are centered

A

Central tendency

49
Q

How spread out (distributed) are groups of scores

A

Variability
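
For illustration, a short NumPy sketch tying together the last three cards (observed distribution, central tendency, variability); the scores are made up.

import numpy as np

scores = np.array([55, 62, 68, 70, 70, 73, 75, 78, 81, 85, 90, 95])

print(scores.mean(), np.median(scores))                   # central tendency
print(scores.std(ddof=1), scores.max() - scores.min())    # variability (SD, range)

counts, bin_edges = np.histogram(scores, bins=5)          # observed distribution of scores
print(counts, bin_edges)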

50
Q

A group of test scores achieved by some group of individuals

A

Norms

51
Q

X = T + E

A
X = observed score
T = true score
E = random error
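
For illustration, a small simulation of X = T + E showing that random errors average out over many administrations; the true score and error spread are assumed values.

import numpy as np

rng = np.random.default_rng(1)
true_score = 70.0                             # T: fixed, never directly observed
errors = rng.normal(loc=0.0, scale=5.0, size=100_000)   # E: random error per administration
observed = true_score + errors                # X = T + E for each administration

print(round(observed.mean(), 2))              # ~70: errors cancel out on average
print(round(errors.mean(), 3))                # ~0: mean random error
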
52
Q

Average score obtained if an individual took a test an infinite number of times
Can never truly be known

Random errors cancel each other out over an infinite number of times

A

True score

53
Q

Difference between the true score and the observed score

Over an infinite number of testing occasions, ____ error will be zero
Also reduced by adding more items
Normally distributed

A

Random error

54
Q

Single source of error which always increases or decreases the true score by the same amount
Hard to predict

Practice effects and order effects are ______ error

Does not reduce reliability

A

Systematic error

55
Q

Evidence that the interpretations that are being made from the scores on a test are appropriate for their intended purpose

A

Validity

56
Q

The extent to which the questions on a test are representative of the material that should be covered by the test

A

Content validity

57
Q

An attribute, trait, or other characteristic that is abstracted from observable behavior

A

Construct

58
Q

The behavior we want to predict

A

Criterion

59
Q

The extent to which the scores on a test correlate with scores on a measure of performance or behavior

A

Criterion-related validity

60
Q

Evidence that a test relates to other tests and behaviors as predicted by theory

A

Construct validity

61
Q

Perceptions of the test takers that the test measures what it intended to measure

A

Face Validity

62
Q

A method of defining a construct by identifying its relationships with as many other constructs as possible

A

Nomological network

63
Q

Two measures labeled with the same construct name but uncorrelated

A

Jingle

64
Q

Two measures labeled with different construct names but correlated

A

Jangle

65
Q

The measure(s) of performance (or some other outcome) that we expect to correlate with test scores

A

Criterion

66
Q

The extent to which scores on a test correlate with scores on a measure of performance or behavior

A

Criterion-related validity

67
Q

Assess job applicants on the predictor, then measure the criterion later (after hiring)

A

Predictive method

68
Q

Assess current employees on both predictors and criteria, or examine previously existing data

A

Concurrent method

69
Q

Correlation between test scores (predictors) and performance (criterion) representing the strength of the validity evidence

A

Validity coefficient (r)

70
Q

The amount of shared variance between predictor and criterion

A

Coefficient of determination (R²)
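
For illustration, a hedged sketch computing a validity coefficient r and the coefficient of determination from simulated predictor and criterion scores; all data are made up.

import numpy as np

rng = np.random.default_rng(2)
test_scores = rng.normal(size=200)                                  # predictor
performance = 0.5 * test_scores + rng.normal(scale=0.9, size=200)   # criterion

r = np.corrcoef(test_scores, performance)[0, 1]           # validity coefficient
print(round(r, 2), round(r ** 2, 2))                      # r and shared variance (R²)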

71
Q

The process of administering a test to another sample of test takers, representative of the target population

A

Cross-validation

72
Q

A reduction in the correlation between predictors and criteria due to random error when comparing the first and second administration of the test

A

Shrinkage
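
For illustration, a hedged sketch of cross-validation and shrinkage: regression weights derived in one simulated sample typically predict less well (lower r) in a second, independent sample. Samples and effect sizes are made up.

import numpy as np

rng = np.random.default_rng(3)

def make_sample(n=100, n_predictors=5):
    X = rng.normal(size=(n, n_predictors))
    y = 0.3 * X[:, 0] + rng.normal(scale=1.0, size=n)     # only one true predictor
    return X, y

X1, y1 = make_sample()                                    # original sample
X2, y2 = make_sample()                                    # cross-validation sample

weights, *_ = np.linalg.lstsq(X1, y1, rcond=None)         # weights fit to sample 1
r_original = np.corrcoef(X1 @ weights, y1)[0, 1]
r_crossval = np.corrcoef(X2 @ weights, y2)[0, 1]          # usually smaller: shrinkage
print(round(r_original, 2), round(r_crossval, 2))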

73
Q

Scores on a test taken by different subgroups in the population (e.g., men and women; minority and majority) need to be interpreted differently because of some characteristic of the test not related to the construct being measured

A

Measurement bias