Psych Flashcards

1
Q

What are norms?

A

Average scores collected from a large group (the norm group)

Norms help interpret an individual’s raw score by comparing it to others’ scores.

2
Q

How does a norm-referenced test interpret scores?

A

Converts raw scores into relative rankings using percentiles, z-scores, or standard scores.

3
Q

What is a percentile rank?

A

Shows the percentage of people in the norm group who scored lower than the individual.

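As a concrete illustration, a percentile rank can be computed by counting how many norm-group scores fall below a given raw score. The data here are hypothetical, not from any published norm group:

```python
def percentile_rank(score, norm_group):
    # Percentage of the norm group scoring strictly below the given score
    below = sum(1 for s in norm_group if s < score)
    return 100 * below / len(norm_group)

norm_group = [55, 60, 62, 65, 70, 72, 75, 80, 85, 90]  # hypothetical raw scores
print(percentile_rank(75, norm_group))  # 60.0 -> scored higher than 60% of the group
```

Note this uses the strictly-below convention; some definitions also count half of the scores tied with the target.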
4
Q

What is a Z-score?

A

Measures how far a score is from the mean in standard deviation (SD) units.

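The z-score formula, z = (x − mean) / SD, can be sketched with Python's statistics module (illustrative scores only):

```python
from statistics import mean, pstdev

def z_score(x, scores):
    # How far x sits from the group mean, in SD units
    return (x - mean(scores)) / pstdev(scores)

scores = [80, 90, 100, 110, 120]       # hypothetical norm-group scores
print(round(z_score(120, scores), 2))  # 1.41 -> about 1.4 SDs above the mean
```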
5
Q

What does an IQ score of 130 indicate?

A

Roughly the top 2% of the population (two standard deviations above the mean).

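That 2% figure follows from the normal curve: with IQ scaled to mean 100 and SD 15, a score of 130 sits two SDs above the mean, and the tail beyond z = 2 holds about 2.3% of the population:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # conventional deviation-IQ scaling
above_130 = 1 - iq.cdf(130)        # proportion of the population above 130
print(round(above_130 * 100, 2))   # 2.28 (percent)
```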
6
Q

What are developmental norms?

A

Average ages or milestones at which children typically achieve certain skills, behaviors, or abilities.

7
Q

Define an ordinal scale.

A

Data is ranked in order, but differences between ranks may not be equal.

8
Q

What are within-group norms?

A

Compares an individual’s performance within a specific group (e.g., same age, gender, education level).

9
Q

What is a normative sample?

A

A group used to establish average performance (norms) on a test.

10
Q

Differentiate between standardization sample and normative sample.

A

Standardization sample helps develop the test; normative sample establishes norms.

11
Q

What are convenience norms?

A

Norms created based on easily available data, which may introduce bias.

12
Q

What is the difference between percentile and percentage?

A

Percentile ranks a value in a dataset; percentage represents a proportion of the total.

13
Q

What is a ceiling test?

A

Measures the upper limits of ability.

14
Q

What is a floor test?

A

Measures the lower limits of ability.

15
Q

What does a standard score (Z-score) indicate?

A

How far a data point is from the mean in terms of standard deviations.

16
Q

What is a linear transformation?

A

Rescales scores (multiplying by and/or adding a constant) while preserving every score's relative standing.

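Two standard linear transformations of z-scores illustrate this: T-scores (mean 50, SD 10) and deviation IQs (mean 100, SD 15). Both change the scale but leave rank order untouched:

```python
def to_t_score(z):
    return 50 + 10 * z   # T-score scale: mean 50, SD 10

def to_iq(z):
    return 100 + 15 * z  # deviation-IQ scale: mean 100, SD 15

z = 1.5
print(to_t_score(z), to_iq(z))  # 65.0 122.5 -- same relative standing, new scale
```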
17
Q

What is the purpose of equating procedures?

A

Ensures fair comparisons of test scores across different test versions.

18
Q

What is alternate forms equating?

A

Compares two different versions of the test.

19
Q

What does reliability refer to?

A

The consistency and stability of a measurement instrument over time.

20
Q

What is measurement error?

A

The difference between a person’s true score and their observed score on a test.

21
Q

What is inter-rater reliability?

A

Measures agreement between multiple raters or judges.

22
Q

Define split-half reliability.

A

Assesses internal consistency by comparing two halves of a test.

23
Q

What does Cronbach’s Alpha measure?

A

How closely related a set of items are within a test.

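Cronbach's alpha compares the summed item variances with the variance of total scores: α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ). A sketch on a hypothetical 3-item scale:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    # item_scores: one list per item, one entry per respondent
    k = len(item_scores)
    item_var_sum = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

items = [[3, 4, 4, 5, 2],   # hypothetical responses: 3 items x 5 respondents
         [3, 5, 4, 4, 2],
         [2, 4, 5, 5, 3]]
print(round(cronbach_alpha(items), 3))  # 0.886
```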
24
Q

What is the Standard Error of Measurement (SEM)?

A

Estimates the variability in an individual’s test score due to measurement error.

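A standard formula: SEM = SD · √(1 − reliability), which in turn gives a confidence band around an observed score. Sketch with hypothetical test parameters:

```python
from math import sqrt

def sem(sd, reliability):
    # Standard Error of Measurement
    return sd * sqrt(1 - reliability)

error = sem(15, 0.91)   # IQ-style test: SD 15, reliability .91 -> SEM 4.5
observed = 110
low, high = observed - 1.96 * error, observed + 1.96 * error
print(round(low, 1), round(high, 1))  # 101.2 118.8 -- 95% band for the true score
```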
25
What is confirmatory factor analysis (CFA)?
Used to confirm if data fits a hypothesized measurement structure.
26
What is differential item functioning (DIF)?
Identifies biased test items affecting different groups.
27
What does statistical significance indicate?
That the observed results are unlikely to be due to random chance alone.
28
What does DIF stand for in the context of testing?
Differential Item Functioning; DIF identifies biased test items affecting different groups.
29
What is the primary purpose of Item Response Theory (IRT)?
Analyzes individual item responses to ensure fairness.
30
When should Maximum Likelihood (ML) be used in statistical modeling?
For normally distributed data with ≥5 response levels.
31
What statistical method is recommended for non-normal data with ≥5 levels?
Robust Maximum Likelihood (MLR).
32
Which method should be used for data with ≤5 levels?
DWLS.
33
What statistical method is suitable for categorical data?
WLSMV.
34
Define reliability in the context of measurement instruments.
Consistency and stability of a measurement instrument over time.
35
What does validity refer to in testing?
Whether the test measures what it is supposed to measure.
36
Can a test be reliable but not valid?
Yes; a test can produce consistent scores without actually measuring the intended construct.
37
What is a true score?
The actual ability or characteristic being measured.
38
What does the observed score represent?
The score obtained from a test.
39
What is the Standard Error of Measurement (SEM)?
Estimates the range in which the true score likely falls.
40
What type of reliability measures the stability of test results over time?
Test-Retest Reliability.
41
What is Inter-Rater Reliability?
Ensures consistency in human judgment.
42
What statistic is commonly used to measure internal consistency?
Cronbach’s alpha (α).
43
What is Parallel-Forms Reliability?
Measures consistency between different versions of the same test.
44
What does Split-Half Reliability assess?
Internal consistency by dividing a test into two halves.
45
What is the Kuder-Richardson Formula (KR-20) used for?
Dichotomous test items (e.g., true/false).
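KR-20 is the dichotomous-item special case of alpha: KR-20 = k/(k−1) · (1 − Σpq / σ²ₜ), where p is each item's proportion correct and q = 1 − p. Sketch on hypothetical 0/1 data:

```python
from statistics import pvariance

def kr20(item_responses):
    # item_responses: one list of 0/1 answers per item, one entry per test-taker
    k = len(item_responses)
    pq_sum = sum((p := sum(item) / len(item)) * (1 - p) for item in item_responses)
    totals = [sum(t) for t in zip(*item_responses)]
    return k / (k - 1) * (1 - pq_sum / pvariance(totals))

items = [[1, 1, 1, 0, 0],   # hypothetical answers: 4 items x 5 test-takers
         [1, 1, 0, 0, 0],
         [1, 1, 1, 1, 0],
         [1, 0, 1, 0, 0]]
print(round(kr20(items), 3))  # 0.79
```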
46
What is the purpose of confirmatory factor analysis (CFA)?
Verifies whether observed variables align with a hypothesized structure of latent variables.
47
What does measurement invariance ensure?
A measurement tool works equally across different groups.
48
What is the Chi-Square Test (χ²) used for?
To compare observed vs. expected frequencies.
49
What is criterion validity?
Examines how well a test predicts outcomes or correlates with an established standard.
50
What is the difference between concurrent and predictive validity?
Concurrent validity assesses results at the same time, while predictive validity evaluates future performance.
51
What does the term 'predictor' refer to in validation?
Factors or test scores used to forecast future performance.
52
What is diagnostic utility in testing?
How accurately a test classifies people into correct categories.
53
What are true positives (TP) in diagnostic testing?
People who have the condition and are correctly identified by the test.
54
What does the Youden Index measure?
The effectiveness of a diagnostic test.
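The Youden Index combines a test's two accuracy rates: J = sensitivity + specificity − 1, ranging from 0 (no diagnostic value) to 1 (perfect). Sketch with hypothetical screening counts:

```python
def youden_index(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # share of actual cases the test catches
    specificity = tn / (tn + fp)  # share of non-cases the test clears
    return sensitivity + specificity - 1

# Hypothetical screening study: 100 cases, 100 non-cases
print(round(youden_index(tp=80, fn=20, tn=90, fp=10), 2))  # 0.7
```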
55
What is qualitative item analysis?
Relies on the judgments of reviewers regarding the substantive characteristics of test items.
56
What does quantitative item analysis use to assess test items?
Statistical procedures based on responses from test samples.
57
What is the first step in test development?
Define the purpose of the test.
58
What is the purpose of a test blueprint?
Outline the test structure and alignment with the test's purpose.
59
What is meant by content validity?
Does the test cover the full scope of the concept being measured?
60
What are advantages of selected-response items?
Fast, objective, easy to score, and time-efficient.
61
What are disadvantages of forced-choice items?
Difficult for respondents if both options seem equally applicable.
62
What is the first step after revisions and validation in test development?
Finalize the test for full use.
63
What is the purpose of continuous monitoring and review in test development?
To update the test based on feedback, research, and changes in the field.
64
What are the advantages of selected-response items?
Fast, objective, easy to score, and time-efficient.
65
What are the disadvantages of selected-response items?
Risk of guessing, careless mistakes, and difficulty capturing detailed insights.
66
What do forced-choice items help reveal?
Personality traits by requiring a forced decision.
67
What is a disadvantage of forced-choice items?
Difficult for respondents if both options seem equally applicable.
68
What is an advantage of constructed-response items?
Provides richer, more detailed insights.
69
What is a disadvantage of constructed-response items?
Requires more time to grade, needs skilled graders, and can affect score consistency.
70
Define item validity.
Measures whether a test item accurately assesses the intended skill or knowledge.
71
Provide an example of item validity.
A grammar question in a math test lacks item validity.
72
Define item discrimination.
Evaluates how well an item differentiates between high- and low-performing test-takers.
73
What does a high discrimination index indicate?
The item distinguishes between strong and weak test-takers.
74
How is item difficulty calculated?
Percentage of test-takers who answered correctly.
75
What is an ideal difficulty level for test items?
Moderate: roughly half of test-takers answering correctly (somewhat higher for multiple-choice items, to offset guessing).
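Both indices above can be sketched directly: difficulty as the proportion correct, and discrimination as an upper-lower index (proportion correct among top scorers minus proportion correct among bottom scorers). All data here are hypothetical:

```python
def item_difficulty(answers):
    # Proportion of test-takers answering the item correctly
    return sum(answers) / len(answers)

def item_discrimination(item, totals, top_n):
    # Upper-lower index: p(correct) for top scorers minus p(correct) for bottom
    ranked = sorted(zip(totals, item), reverse=True)
    upper = [correct for _, correct in ranked[:top_n]]
    lower = [correct for _, correct in ranked[-top_n:]]
    return item_difficulty(upper) - item_difficulty(lower)

item   = [1, 1, 1, 0, 1, 0, 0, 0]          # hypothetical 0/1 responses to one item
totals = [95, 90, 85, 80, 75, 60, 55, 50]  # the same takers' total test scores
print(item_difficulty(item))                # 0.5
print(item_discrimination(item, totals, 3)) # 1.0 -- cleanly separates the groups
```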
76
What are distractors in multiple-choice questions?
Incorrect answer choices designed to appear plausible to those who don’t know the correct answer.
77
What is the purpose of distractors?
Ensures that test-takers must think critically rather than guess easily.
78
Define item-test regression.
Examines how individual test items relate to the overall test score.
79
What is the ideal characteristic of test items?
Moderately difficult and highly discriminative.
80
What is required for ensuring fairness, validity, and reliability in test development?
A well-structured test involves qualitative and quantitative analysis.
81
What is essential for maintaining the test’s accuracy and relevance over time?
Regular review and monitoring.