New - Ch 5 (NoteLM) Flashcards

1
Q

Discriminant Validity

A

The extent to which a measure does not correlate with measures of different constructs.

2
Q

Convergent Validity

A

The extent to which a measure correlates with other measures of the same or similar constructs.

3
Q

Criterion Validity

A

The extent to which a measure is related to an outcome or behavior that it should be related to.

4
Q

Content Validity

A

The extent to which a measure covers all aspects of the construct it is intended to measure.

5
Q

Face Validity

A

The extent to which a measure appears, on the surface, to measure what it is intended to measure.

6
Q

Validity

A

The extent to which a measure accurately assesses the construct it is intended to measure.

7
Q

Correlation Coefficient (r)

A

A statistical measure that quantifies the strength and direction of the linear relationship between two variables.

Ranges from -1 to +1, where -1 indicates a perfect negative correlation, +1 indicates a perfect positive correlation, and 0 indicates no linear correlation.
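
A minimal Python sketch of how r can be computed from paired scores; the function and sample data below are illustrative, not from the chapter.

import math

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists of scores.
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = sum((a - mean_x) ** 2 for a in x)
    ss_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(ss_x * ss_y)

time1 = [10, 12, 9, 15, 11]   # hypothetical scores at a first administration
time2 = [11, 13, 9, 14, 12]   # hypothetical scores at a second administration
print(round(pearson_r(time1, time2), 2))  # prints a value near +1: a strong positive linear relationship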

8
Q

Kappa Coefficient

A

Measures inter-rater reliability or agreement between two raters for categorical variables.

Ranges from -1 to +1, where 1 indicates perfect agreement, 0 indicates agreement equivalent to chance, and negative values suggest less agreement than expected by chance.
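
A minimal Python sketch of the kappa computation for two raters; the category labels and data are invented for illustration.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Agreement between two raters, corrected for the agreement expected by chance.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability that both raters independently pick the same category.
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
    return (observed - expected) / (1 - expected)

rater_1 = ["aggressive", "prosocial", "prosocial", "aggressive", "neutral", "prosocial"]
rater_2 = ["aggressive", "prosocial", "neutral", "aggressive", "neutral", "prosocial"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 1 = perfect agreement, 0 = chance-level agreement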

9
Q

Internal Reliability

A

Consistency of responses across multiple items within a measure.

10
Q

Interrater Reliability

A

Consistency of scores obtained by different observers rating the same behavior or event.

11
Q

Test-Retest Reliability

A

Consistency of scores on a measure across multiple administrations.

12
Q

Measurement Error

A

The difference between the observed score and the true score, caused by factors that distort the measurement.

13
Q

True Score

A

A hypothetical score that represents a participant’s actual standing on a construct, without any measurement error.

14
Q

Observed Score

A

The score obtained on a measure, which includes both the true score and measurement error.

15
Q

Reliability

A

The consistency of a measure.

16
Q

Ratio Scale

A

A measurement scale where data is ordered, intervals between values are equal, and there is a true zero point.

17
Q

Interval Scale

A

A measurement scale where data is ordered, intervals between values are equal, but there is no true zero point.

18
Q

Ordinal Scale

A

A measurement scale that ranks data in order, but the intervals between the ranks are not necessarily equal.

19
Q

Nominal Scale

A

A measurement scale that categorizes data into distinct groups with no inherent order.

20
Q

Physiological Measure

A

A method of data collection that involves recording biological data, such as heart rate, brain activity, or hormone levels.

21
Q

Observational Measure

A

A method of data collection where researchers directly observe and record participants’ behavior.

22
Q

Self-Report Measure

A

A method of data collection where participants report on their own thoughts, feelings, or behaviors.

23
Q

Operational Definition

A

A specific description of how a concept will be measured or manipulated in a study.

24
Q

Conceptual Definition

A

A theoretical explanation of a concept.

25
Q

Differentiate between conceptual definitions and operational definitions in research.

A

Conceptual definitions describe the theoretical meaning of a concept, while operational definitions specify how the concept is measured or manipulated in a study.

26
Q

List and briefly describe the three main types of measures used in psychology.

A

The three main types of measures are: (1) Self-report, where participants provide information about themselves; (2) Observational, where researchers directly observe and record behavior; and (3) Physiological, where biological processes are measured.

27
Q

What are the four scales of measurement, and how do they differ?

A

The four scales are: (1) Nominal, which categorizes data without order (e.g., eye color); (2) Ordinal, which ranks data in order but without equal intervals (e.g., finishing place in a race); (3) Interval, which has ordered values with equal intervals but no true zero (e.g., temperature in degrees Celsius); and (4) Ratio, which has equal intervals and a true zero point (e.g., reaction time in milliseconds).

28
Q

Explain the concept of reliability in the context of psychological measurement.

A

Reliability refers to the consistency of a measure. A reliable measure produces similar results under consistent conditions, indicating low measurement error.

29
Q

Describe the relationship between observed score, true score, and measurement error.

A

The observed score is the actual score obtained on a measure; it combines the true score (the hypothetical error-free score) and measurement error (the factors that push the observed score away from the true score).
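
Written as an equation, in the standard classical test theory notation (the symbols are mine, not from the cards):

Observed score (X) = True score (T) + Measurement error (E)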

30
Q

Compare and contrast test-retest reliability and internal reliability.

A

Test-retest reliability assesses consistency of scores over time, while internal reliability examines consistency of responses across multiple items within a measure.
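
Internal reliability is often summarized with a single index; Cronbach's alpha is one common choice, though it is not named in these cards. A minimal Python sketch with invented item scores:

def cronbach_alpha(items):
    # items: one list of scores per questionnaire item, respondents in the same order in each list.
    k = len(items)       # number of items
    n = len(items[0])    # number of respondents
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    total_item_variance = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]   # each respondent's total score
    return (k / (k - 1)) * (1 - total_item_variance / variance(totals))

items = [
    [4, 5, 3, 2, 4],   # item 1, five respondents (invented data)
    [4, 4, 3, 2, 5],   # item 2
    [5, 5, 2, 3, 4],   # item 3
]
print(round(cronbach_alpha(items), 2))  # values closer to 1 indicate more consistent responding across items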

31
Q

How is a correlation coefficient used to assess reliability?

A

Reliability is assessed by correlating two sets of scores that should agree, such as scores from two administrations of the same measure (test-retest) or ratings from two observers (interrater). A high positive correlation indicates strong reliability.

32
Q

Define validity in terms of psychological measurement.

A

Validity refers to whether a measure accurately assesses the construct it is intended to measure.

33
Q

Explain how criterion validity is established for a measure.

A

Criterion validity is established by demonstrating a measure’s correlation with a relevant behavioral outcome. For example, a valid aptitude test should predict actual job performance.

34
Q

What is the difference between convergent and discriminant validity?

A

Convergent validity refers to the degree to which a measure correlates with other measures of the same or similar constructs, while discriminant validity refers to the degree to which a measure does not correlate with measures of different constructs.