Ch 5 Flashcards

1
Q

Alternate forms reliability

A

Assessment of reliability by administering two different forms of the same measure to the same individuals at two points in time

2
Q

Construct validity

A

The degree to which a measurement device accurately measures the theoretical construct it is designed to measure

3
Q

Content validity

A

extent to which a measure represents all facets of a given construct. For example, a depression scale may lack content validity if it only assesses the affective dimension of depression but fails to take into account the behavioral dimension

4
Q

Convergent validity

A

The construct validity of a measure is assessed by examining the extent to which scores on the measure are related to scores on other measures of the same construct or similar constructs

Say you were researching depression in college students. In order to measure depression (the construct), you use two measurements: a survey and participant observation. If the scores from your two measurements are close enough (i.e. they converge), this demonstrates that they are measuring the same construct.

5
Q

Cronbach's alpha

A

An indicator of internal consistency reliability assessed by examining the average correlation of each item (question) in a measure with every other question

6
Q

Discriminant validity

A

The construct validity of a measure is assessed by examining the extent to which scores on the measure are not related to scores on conceptually unrelated measures

7
Q

Face validity

A

The degree to which a measurement device appears to accurately measure a variable

8
Q

Internal consistency reliability

A

Reliability assessed with data collected at one point in time from multiple measures of a psychological construct

A measure is reliable when the multiple measures provide similar results

9
Q

Interrater reliability

A

An indicator of reliability that examines the agreement of observations made by two or more raters (judges)

10
Q

Interval scale

A

A scale of measurement in which the intervals between numbers in the scale are all equal in size

11
Q

Item total correlation

A

The correlation between scores on individual items with the total score on all items of a measure

12
Q

Measurement error

A

The degree to which a measurement deviates from the true score value

13
Q

Nominal scale

A

A scale of measurement with two or more categories that have no numerical (less than, greater than) properties

14
Q

Ordinal scale

A

A scale of measurement in which the measurement categories form a rank order along a continuum

15
Q

Pearson product moment correlation coefficient

A

A type of correlation coefficient used with interval and ratio scale data. In addition to providing information on the strength of the relationship between two variables, it indicates the direction (positive or negative) of the relationship
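A minimal sketch of how this coefficient is computed, using invented test-retest scores (the numbers are illustrative, not from the text):

```python
import math

def pearson_r(x, y):
    """Pearson product moment correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five people at two points in time
time1 = [10, 12, 9, 15, 11]
time2 = [11, 13, 9, 14, 12]
r = pearson_r(time1, time2)  # close to +1: a strong positive relationship
```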

16
Q

Predictive validity

A

The construct validity of a measure is assessed by examining the ability of the measure to predict a future behavior

17
Q

Ratio scale

A

A scale of measurement in which there is an absolute zero point, indicating an absence of the variable being measured. An implication is that ratios of numbers on the scale can be formed (generally, these are physical measures such as weight, or time measures such as duration or reaction time)

18
Q

Reactivity

A

A problem of measurement in which the measure changes the behavior being observed

19
Q

Reliability

A

The degree to which a measure is consistent

20
Q

Split half reliability

A

A reliability coefficient determined by the correlation between scores on half of the items on a measure with scores on the other half of the measure

21
Q

Test retest reliability

A

A reliability coefficient determined by the correlation between scores on a measure given at one time with scores on the same measure given at a later time

22
Q

True score

A

An individual's actual score on a variable being measured, as opposed to the score the individual obtained on the measure itself

23
Q

How do you measure reliability

A

Through true score and measurement error
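A minimal simulation (hypothetical numbers) of the idea that an observed score is the true score plus random measurement error:

```python
import random

random.seed(42)

true_score = 100  # the person's actual standing on the variable
# Each observed score = true score + random measurement error
observed = [true_score + random.gauss(0, 5) for _ in range(1000)]

# Because error is random, it averages out: the mean of many
# observations converges on the true score
mean_observed = sum(observed) / len(observed)
```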

24
Q

When is reliability most likely achieved

A

When researchers use careful measurement procedures (like through training)

Making multiple measures (ex-a personality test will have 10 or more questions designed to assess a trait)
-Reliability is increased when the number of items increases

25
Q

How to assess reliability

A

Use Pearson product moment correlation coefficient to calculate correlation coefficients

26
Q

The closer a correlation is to +1 or -1, the ______

A

Stronger the relationship.

27
Q

Using the Pearson correlation coefficient, a measure is reliable when…

A

Two scores are very similar

28
Q

What is the reliability correlation called when using the Pearson product moment correlation coefficient?

A

Reliability coefficient

29
Q

What would it be an example of if a test of intelligence were administered to a group of people one day and again a week later

A

Test retest reliability

We can use correlation coefficients showing that two scores are similar

30
Q

What should the reliability coefficient be if a measure is reliable

A

At least .80

31
Q

What is a drawback from test retest

A

Correlation might be artificially high because individuals remember how they responded the first time

32
Q

How to avoid problem of test retest correlation being artificially high

A

Alternate forms reliability

33
Q

Drawbacks of alternate forms reliability

A

Creating a second equivalent measure may require considerable time and effort

34
Q

Psychological measures are made up of a number of different questions called…

A

Items

35
Q

An indicator of internal consistency

A

Split half reliability

36
Q

Term used for the reliability coefficient in split half reliability

A

Spearman-Brown split half reliability coefficient
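A sketch of the Spearman-Brown correction, which adjusts the half-test correlation upward to estimate the reliability of the full-length measure (the .60 example value is invented):

```python
def spearman_brown(r_halves):
    """Estimate full-test reliability from the correlation between the two halves."""
    return 2 * r_halves / (1 + r_halves)

# If the two halves correlate .60, the estimated full-test reliability is .75
full_test = spearman_brown(0.60)
```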

37
Q

Two ways to measure internal consistency reliability

A

Split half reliability

Cronbachs alpha

38
Q

How to compute Cronbach's alpha

A

Scores on each item are correlated with scores on every other item.

A large number of correlation coefficients are produced

Average all these correlation coefficients
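The steps above can be sketched as follows; plugging the average inter-item correlation into the standardized-alpha formula is an assumption beyond the steps listed on this card:

```python
import math
from itertools import combinations

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def standardized_alpha(items):
    """items: one list of scores per question (same respondents in each list)."""
    # Correlate every item with every other item, then average
    pairs = [pearson_r(a, b) for a, b in combinations(items, 2)]
    r_bar = sum(pairs) / len(pairs)
    k = len(items)
    # Standardized Cronbach's alpha from the average inter-item correlation
    return k * r_bar / (1 + (k - 1) * r_bar)
```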

39
Q

Why are item total scores informative

A

They provide info about each individual item

Items that do not correlate with the total score on the measure are actually measuring a different variable. They can be eliminated to increase internal consistency reliability

40
Q

When is it useful to use item total correlations

A

When it’s necessary to construct a brief version of a measure.

Even though reliability increases with longer measures, a shorter version can be more convenient to administer and still retain acceptable reliability

41
Q

Commonly used indicator of interrater reliability

A

Cohen's kappa
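A minimal sketch of Cohen's kappa for two raters categorizing the same observations (the ratings are invented for illustration):

```python
def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: probability both raters independently pick each category
    categories = set(rater1) | set(rater2)
    p_chance = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

rater1 = ["aggressive", "calm", "calm", "aggressive", "calm"]
rater2 = ["aggressive", "calm", "aggressive", "aggressive", "calm"]
kappa = cohens_kappa(rater1, rater2)  # agreement beyond chance
```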

42
Q

Interrater reliability is used when

A

Making observations of people's behavior to see if the raters agree

43
Q

Problem with reliability

A

Although it tells us about measurement error, it does not tell us about whether we have a good measure of the variable of interest

44
Q

What refers to the adequacy of the operational definition of variables

(To what extent does the operational definition of a variable actually reflect the true theoretical meaning of the variable?)

A

Construct validity

45
Q

Concurrent validity

A

extent to which the results of a particular test or measurement correspond to those of a previously established measurement for the same construct.

46
Q

The Fahrenheit scale would be an example of

A

Interval measurement

47
Q

Likert scale

A

A rating scale often found in surveys that measures how someone feels about something

Strongly agree to strongly disagree

48
Q

Likert scale is an example of an

A

Interval scale

49
Q

Internal and external validity reflect…

A

Whether or not the results of a study are trustworthy or meaningful

50
Q

Ratio scale

A

There’s an absolute 0

Ex-$100 is twice as much as $50 because $0 is flat broke

Scores on test (when one can miss answers)

Reaction times

Physical measurements

51
Q

Gender and undergraduate major are examples of …

A

Nominal (categorical)

52
Q

Grades and level of education are examples of

A

Ordinal scales

53
Q

On the disgust scale, comparing level of disgust with other personality characteristics would be an example of

A

Convergent validity

54
Q

Filler items

A

Items put on tests and surveys that are not calculated into the results because they don't deal with what is actually being measured

55
Q

The disgust scale predicting differences in fears is an example of

A

Predictive validity

56
Q

What Belmont principle may be an issue with naturalistic observation

A

Informed consent

57
Q

How to help lessen demand characteristics

A

Filler items

58
Q

Advantages of repeated measures/within subject design

A

Greater statistical sensitivity by reducing random error

Fewer participants needed

59
Q

Disadvantages of repeated measures design

A

Order effects

60
Q

How to alleviate order effects

A

Counterbalancing
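Full counterbalancing can be sketched with itertools (the condition names are invented):

```python
from itertools import permutations

conditions = ["caffeine", "placebo", "no drink"]

# Full counterbalancing: every possible order of the conditions is used,
# so order effects are spread evenly rather than confounded with any
# one condition
orders = list(permutations(conditions))
# 3 conditions -> 3! = 6 orders; participants are divided among them
```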