Chapter 4 Flashcards

1
Q

Observed Score

A

= True Score + Systematic Error + Random Error
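
Below is a minimal numeric sketch of this decomposition (the values are purely hypothetical, chosen only to illustrate the additive model):

    # Hypothetical decomposition of one observed score (illustrative values only)
    true_score = 100        # part of the score driven by the construct itself
    systematic_error = 5    # e.g., a social desirability bias inflating the response
    random_error = -2       # ever-changing noise on this particular occasion

    observed_score = true_score + systematic_error + random_error
    print(observed_score)   # 103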

2
Q

Coefficient alpha

A

An estimate derived from the correlations of each item with every other item, so it does not rest on any arbitrary choice of how to divide the items into two halves; the preferred measure of internal consistency reliability.
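
As a rough sketch, one common variance-based formula for alpha can be computed as below (the function name and the sample data are illustrative, not from the chapter):

    import numpy as np

    def coefficient_alpha(scores):
        """Cronbach's alpha for a respondents-by-items matrix of scores."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                           # number of items
        item_vars = scores.var(axis=0, ddof=1)        # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Five respondents answering four items (made-up data)
    data = [[4, 5, 4, 5],
            [2, 3, 2, 2],
            [5, 5, 4, 4],
            [3, 3, 3, 4],
            [1, 2, 2, 1]]
    print(round(coefficient_alpha(data), 2))   # about 0.96 for this toy data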

3
Q

Constructs

A

the abstractions that social scientists discuss in their theories. They are the rich theoretical concepts that make the science interesting, terms such as social status, power, intelligence, and gender roles. Because we cannot literally put a finger on any of these concepts to measure them, we must find some concrete representations that approximate what we mean when we speak of such concepts.

4
Q

Convergent Validity

A

Overlap between alternative measures that are intended to tap the same construct but that have different sources of systematic error.

5
Q

Convergent Validity Coefficients

A

Correlations between scores that reflect the same trait measured by different methods.

6
Q

Correlation Coefficient

A

a statistical index of the strength of association between 2 variables.
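
A quick illustration with made-up scores, using numpy's built-in Pearson correlation:

    import numpy as np

    x = np.array([2, 4, 5, 7, 9])    # scores on variable 1
    y = np.array([1, 3, 6, 6, 10])   # scores on variable 2

    r = np.corrcoef(x, y)[0, 1]      # Pearson correlation coefficient
    print(round(r, 2))               # values near +1 or -1 indicate a strong association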

7
Q

Definitional Operationism

A

the assumption that the operational definition is the construct. (Ex: intelligence is what an intelligence test measures).

8
Q

Discriminant Validity

A

The extent to which a measure fails to correlate with measures that are supposed to tap basically different constructs. A valid measure must show good convergence with other measures of the same construct while failing to correlate with measures of unrelated constructs.

9
Q

Discriminant Validity Coefficient

A

Indicates the correlation between different traits measured by the same method.

10
Q

Face Validity

A

evaluated by a group of judges, sometimes experts, who read or look at a measuring technique and decide whether in their opinion it measures what its name suggests.

11
Q

Internal consistency reliability

A

An estimate of reliability based on the consistency of responses across the items of a measure within a single administration; this alternative estimate is not subject to the concerns that affect test-retest estimates and is therefore more widely used.

12
Q

Kappa

A

measure of agreement that can be used to estimate inter-rater reliability
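
A minimal sketch of Cohen's kappa (the usual kappa statistic for two raters), which compares observed agreement p_o with agreement expected by chance p_e: kappa = (p_o - p_e) / (1 - p_e). The rater data below are hypothetical:

    from collections import Counter

    def cohens_kappa(rater1, rater2):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(rater1)
        p_o = sum(a == b for a, b in zip(rater1, rater2)) / n        # observed agreement
        c1, c2 = Counter(rater1), Counter(rater2)
        categories = set(rater1) | set(rater2)
        p_e = sum((c1[c] / n) * (c2[c] / n) for c in categories)     # chance agreement
        return (p_o - p_e) / (1 - p_e)

    r1 = ["yes", "yes", "no", "no", "yes", "no"]
    r2 = ["yes", "no", "no", "no", "yes", "yes"]
    print(round(cohens_kappa(r1, r2), 2))   # 0.33 for this toy example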

13
Q

Methods

A

mode of measurement.

14
Q

Multitrait-multimethod matrix

A

table of correlation coefficients that enables us to simultaneously evaluate the convergent and discriminant validity of a construct
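
A toy sketch of how the cells of such a matrix are read, with two traits and two methods (all trait names, method names, and correlation values are hypothetical):

    # Two traits (anxiety, extraversion) measured by two methods (self-report, peer rating).
    correlations = {
        ("anx_self", "anx_peer"): 0.62,  # same trait, different methods -> convergent validity coefficient
        ("ext_self", "ext_peer"): 0.58,  # same trait, different methods -> convergent validity coefficient
        ("anx_self", "ext_self"): 0.25,  # different traits, same method -> discriminant validity coefficient
        ("anx_peer", "ext_peer"): 0.20,  # different traits, same method -> discriminant validity coefficient
        ("anx_self", "ext_peer"): 0.08,  # different traits, different methods -> "nonsense" coefficient
    }
    # Evidence for construct validity: the convergent coefficients should clearly
    # exceed both the discriminant and the "nonsense" coefficients.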

15
Q

Nomological Net

A

the theoretical network of construct-to-construct associations derived from relevant theory and stated at an abstract level. (See pp. 78-79 in the book.)

16
Q

Nonsense Coefficient

A

Reflects the correlation between different traits measured by different methods.

17
Q

Operational Definition

A

specifies how to measure a variable so that we can assign someone a score such as high, medium, or low social power.

18
Q

Random Error

A

reflects nonsystematic, ever-changing influences on the score

19
Q

Reliability

A

the extent to which a measure is free from random error.

20
Q

Reliability Coefficients

A

correlations between scores that reflect the same trait and the same method. Although these coefficients are not themselves indicative of validity, they indicate the upper limit on the validity of our measure.

21
Q

Social Desirability Response Bias

A

general tendency to overreport one's desirable behaviors and other characteristics and to underreport one's less admirable qualities.

22
Q

Split-half reliability

A

The set of items in the measure is split into two halves, and scores on the two halves are correlated to estimate reliability. The split is usually done by separating the full set into odd-numbered and even-numbered items, a strategy that ensures an equivalent number of items from early and late in the measure appears in each half.
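
A brief sketch (made-up item responses) of the odd/even split and the correlation between the two half-scores:

    import numpy as np

    # Made-up responses: 6 people x 8 items
    items = np.array([
        [4, 3, 4, 4, 5, 3, 4, 4],
        [2, 2, 1, 2, 2, 3, 2, 1],
        [5, 4, 5, 5, 4, 5, 5, 4],
        [3, 3, 4, 3, 3, 3, 2, 3],
        [1, 2, 2, 1, 1, 2, 1, 2],
        [4, 4, 3, 4, 4, 4, 5, 4],
    ])

    odd_half = items[:, 0::2].sum(axis=1)    # items 1, 3, 5, 7
    even_half = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, 8
    split_half_r = np.corrcoef(odd_half, even_half)[0, 1]
    print(round(split_half_r, 2))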

23
Q

Systematic Error

A

reflects influences from other constructs besides the desired one

24
Q

Test-Retest Correlation

A

the correlation between scores obtained by administering the same measure to the same people at two different times; provides an estimate of the measure's reliability. (Ex: correlations from several administrations can be averaged together.)
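
A quick sketch with hypothetical scores: administer the same measure to the same people twice and correlate the two sets of scores.

    import numpy as np

    time1 = np.array([10, 14, 12, 18, 9, 15])   # scores at the first administration
    time2 = np.array([11, 13, 12, 17, 10, 16])  # same people, same measure, later occasion

    test_retest_r = np.corrcoef(time1, time2)[0, 1]
    print(round(test_retest_r, 2))   # a high value suggests the scores are stable over time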

25
Q

Traits

A

the underlying construct the measurement is supposed to tap (Ex: attitudes towards women).

26
Q

True Score

A

function of the construct we are attempting to measure

27
Q

Validity

A

the extent to which a measure reflects only the desired construct without contamination from other systematically varying constructs. (Note: Validity requires reliability as a prerequisite.)

28
Q

Variables

A

representations of constructs. They cannot be synonymous with a construct because any single construct has many different variables. Therefore, variables are partial, fallible representations of constructs, and we work with them because they are measurable. They suggest ways in which we can decide whether someone has more or less of the construct.