Measure Quality Flashcards

1
Q

What are the 2 things psychologists need to consider about the quality of their measurements?

A

Reliability/Precision = exactness (consistency)
Validity/Accuracy = correctness (truthfulness)

2
Q

What is precision?

A

Exactness (consistency)

3
Q

What is accuracy?

A

Correctness (truthfulness)

4
Q

What is Reliability?

A

Precision
- The extent to which our measure would provide the same results under the same conditions

5
Q

What is Validity?

A

Accuracy
- The extent to which it is measuring the construct we are interested in

Simply = Whether the results really do represent what they are supposed to measure

6
Q

What does it mean when our measures have high validity and high reliability?

A

Our measures are precise and accurate

7
Q

What does it mean when our measures have high validity and low reliability?

A

Our measures are accurate but not precise

8
Q

What does it mean when our measures have low validity and high reliability?

A

Our measures are precise but not accurate

9
Q

What does it mean when our measures have low validity and low reliability?

A

Our measures are neither precise nor accurate
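
A minimal Python sketch with made-up numbers can make the distinction concrete (the true value of 100 and both sets of scores are hypothetical):

import numpy as np

true_value = 100
precise_but_inaccurate = np.array([80.1, 80.2, 79.9, 80.0, 80.1])   # consistent, but far from the true value
accurate_but_imprecise = np.array([92.0, 108.0, 99.0, 104.0, 97.0]) # centred on 100, but scattered

for name, scores in [("high reliability, low validity", precise_but_inaccurate),
                     ("low reliability, high validity", accurate_but_imprecise)]:
    spread = scores.std()                   # small spread = precise (reliable)
    bias = abs(scores.mean() - true_value)  # small bias = accurate (valid)
    print(f"{name}: spread = {spread:.1f}, distance from true value = {bias:.1f}")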

10
Q

We want to investigate the relationship between head size and intelligence

Is head size a …?

1) Reliable measure
2) Valid measure

A

1) Reliable measure = Yes. If you were to measure the head again with a measuring tape, it would be the same size

2) Valid measure = No. Head size does not actually measure intelligence

11
Q

What are the 2 forms of reliability?

A

1) Temporal consistency
2) Internal consistency

12
Q

What is temporal consistency?

A

Assumes that there is no substantial change in the construct being measured between two occasions

Simply = If we administer our instrument twice, with a time interval between the two tests, the results should not change

13
Q

What is internal consistency?

A

The extent to which the items within an instrument consistently measure aspects of the same characteristic

Simply = Do all the items in an instrument hang together and measure the same thing?

14
Q

What is Test-Retest Reliability?

A

It measures fluctuations from one time to another

We administer the same test/measurement twice over a period of time to a group of individuals and see if we still get the same results

Important for constructs which we expect to be stable (e.g. personality type)
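
A minimal Python sketch of how test-retest reliability is typically checked, using made-up scores for six hypothetical people who took the same test twice:

import numpy as np

time1 = np.array([12, 18, 9, 22, 15, 20])   # scores at the first administration
time2 = np.array([13, 17, 10, 21, 16, 19])  # scores at the retest

# Test-retest reliability is usually reported as the correlation between
# the two administrations; values close to 1 suggest a stable measure.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest correlation: {r:.2f}")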

15
Q

What is one limitation of Test-Retest Reliability?

A

Order effects

16
Q

What is Inter-Rater Reliability?

A

It measures fluctuations between observers

It evaluates the extent to which different judges agree in their assessment decisions when observing or rating the same thing
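
One common statistic for inter-rater reliability with categorical judgements is Cohen's kappa. A minimal Python sketch with hypothetical ratings from two judges:

import numpy as np

rater_a = np.array(["calm", "aggressive", "calm", "aggressive", "calm", "calm", "aggressive", "calm"])
rater_b = np.array(["calm", "aggressive", "aggressive", "aggressive", "calm", "calm", "calm", "calm"])

categories = np.unique(np.concatenate([rater_a, rater_b]))
observed = np.mean(rater_a == rater_b)  # proportion of exact agreement
# agreement expected by chance, from each rater's own category frequencies
expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
kappa = (observed - expected) / (1 - expected)  # agreement corrected for chance
print(f"Cohen's kappa: {kappa:.2f}")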

17
Q

What is Parallel Forms Reliability?

A

Measures reliability by giving different versions of the assessment tool/test to the same group of participants

Different versions of a test can be useful to help eliminate memory effects as the questions are different

18
Q

List one advantage and one disadvantage of Parallel Forms Reliability.

A

Pro = Different versions of a test can be useful to help eliminate memory effects as the questions are different

Con = Order effects

19
Q

What is internal consistency (reliability)?

A

Measures the degree of homogeneity among the items on a test, such that they are consistent with one another and measure the same thing

e.g. all items in a questionnaire measure the same construct
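
Internal consistency is most often quantified with Cronbach's alpha. A minimal Python sketch using a hypothetical 4-item questionnaire answered by five people (rows = respondents, columns = items):

import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 3],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # variance of each item across respondents
total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")    # values around .7 or higher are usually taken as acceptable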

20
Q

What is one disadvantage of internal consistency (reliability)?

A

Order effects

21
Q

What are the 4 forms of reliability?

A

1) Test-retest reliability
2) Inter-rater reliability
3) Parallel forms
4) Internal consistency

22
Q

What are the 4 forms of validity?

A

1) Face validity
2) Content validity
3) Criterion validity
4) Construct validity

23
Q

What is face validity?

A

Evaluates whether a test appears to measure what it’s supposed to measure (is it relevant to what it’s assessing?)

Does it look like a good test?

e.g. do the questions in the RM exam reflect the RM knowledge students should have learnt?

24
Q

What is content validity?

A

The extent to which a test/measurement tool evaluates all aspects of the topic, construct or behaviour it is designed to measure

Does our test measure the construct fully?

e.g. the RM exam should cover knowledge of quantitative and qualitative methods

25
Q

What is criterion validity?

A

The extent to which a test accurately measures the outcome it was designed to measure (and predicts future outcomes)

Does the measure give results which are in agreement with other measures of the same thing?

e.g. do RM exam quiz scores relate to final exam grades?

26
Q

What is the difference between Concurrent and Predictive Criterion validity?

A

Concurrent = Comparison of new test with established test

Predictive = Does the test predict the outcome of another variable?
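
A minimal Python sketch of the difference, using made-up scores for a hypothetical new questionnaire: concurrent validity correlates it with an established test taken at the same time, predictive validity with an outcome measured later.

import numpy as np

new_test = np.array([10, 25, 18, 30, 14, 22])
established_test = np.array([12, 27, 16, 29, 15, 20])  # taken at the same time (concurrent criterion)
later_outcome = np.array([1, 4, 2, 5, 1, 3])           # measured some months later (predictive criterion)

concurrent_r = np.corrcoef(new_test, established_test)[0, 1]
predictive_r = np.corrcoef(new_test, later_outcome)[0, 1]
print(f"Concurrent validity: r = {concurrent_r:.2f}")
print(f"Predictive validity: r = {predictive_r:.2f}")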

27
Q

What is construct validity?

A

The extent to which a test measures the concept/trait it was designed to evaluate

Is the construct we are trying to measure valid?
i.e. does the construct itself exist?

The validity of a construct is supported by cumulative research evidence collected over time

28
Q

What are the 2 ways construct validity can be assessed?

A

1) Convergent validity = Whether our measurement correlates with other tests of related constructs

e.g. Our test for general happiness should correlate with studies of extreme happiness, moderate happiness and other similar studies

2) Discriminant validity = When our measurement doesn’t correlate with tests of different or unrelated constructs

e.g. Our test for general happiness should not correlate with studies of depression, low happiness and other unrelated studies
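
A minimal Python sketch of the expected pattern, using made-up scores: the hypothetical happiness scale should correlate highly with a related measure (convergent) and near zero with an unrelated one (discriminant).

import numpy as np

happiness_scale = np.array([30, 12, 25, 18, 35, 22])
life_satisfaction = np.array([28, 14, 24, 20, 33, 21])  # related construct: expect a high correlation
shoe_size = np.array([9, 8, 7, 9, 8, 10])               # unrelated construct: expect a correlation near zero

convergent_r = np.corrcoef(happiness_scale, life_satisfaction)[0, 1]
discriminant_r = np.corrcoef(happiness_scale, shoe_size)[0, 1]
print(f"Convergent validity:   r = {convergent_r:.2f}")
print(f"Discriminant validity: r = {discriminant_r:.2f}")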

29
Q

What is convergent validity?

A

Whether our measurement correlates with other tests of related constructs

e.g. Our test for general happiness should correlate with studies of extreme happiness, moderate happiness and other similar studies

30
Q

What is discriminant validity?

A

When our measurement doesn’t correlate with tests of different or unrelated constructs

e.g. Our test for general happiness should not correlate with studies of depression, sadness and other unrelated studies

31
Q

What validity is this?

Does our test measure the construct fully?

A

Content validity

32
Q

What validity is this?

The comparison of new tests with established tests

A

Concurrent Validity

33
Q

What validity is this?

Does the test correlate with measures of the same and related constructs?

A

Convergent Validity

34
Q

What validity is this?

Does it look like a good test?

A

Face validity

35
Q

What validity is this?

Is there a lack of correlation with measures of different and unrelated constructs?

A

Discriminant validity

36
Q

What validity is this?

Does the measure give results which are in agreement with other measures of the same thing?

A

Criterion validity

37
Q

What validity is this?

Does the test predict the outcome on some other variable measuring a different construct?

A

Predictive validity

38
Q

What validity is this?

Is there evidence that the construct exists?
Is the construct we are trying to measure valid?

A

Construct validity

39
Q

If a test gives the same result at two different points in time, it has demonstrated good (Inter-rater/Test-retest/Parallel form) reliability.

A

Test-retest reliability

40
Q

If the items of a scale are correlated with one another, the test has demonstrated good (Inter-rater/Internal consistency/Parallel forms) reliability

A

Internal consistency

41
Q

One way to assess internal consistency is by calculating (Inter-rater/Split half/Test-retest) reliability

A

Split half reliability
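
A minimal Python sketch of split-half reliability with the Spearman-Brown correction, using a hypothetical 6-item test scored 0/1 (rows = respondents, columns = items):

import numpy as np

items = np.array([
    [1, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
])

odd_half = items[:, 0::2].sum(axis=1)   # total score on items 1, 3, 5
even_half = items[:, 1::2].sum(axis=1)  # total score on items 2, 4, 6

r_half = np.corrcoef(odd_half, even_half)[0, 1]
# Spearman-Brown corrects for each half being only half the full test length
r_full = (2 * r_half) / (1 + r_half)
print(f"Split-half reliability (corrected): {r_full:.2f}")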

42
Q

If two different people administer the same test to the same people and their results are different, the test has demonstrated poor (Inter-rater/Test-retest/Parallel form) reliability

A

Inter-rater reliability

43
Q

If an individual is given two different versions of the same test and does well on one but badly on the other, the tests have demonstrated poor (Inter-rater/Test-retest/Parallel form) reliability

A

Parallel forms reliability

44
Q

What type of reliability measures fluctuations from one time point to another?

a. Inter-rater reliability

b. Internal consistency

c. Parallel forms reliability

d. Test-retest reliability
A

D

45
Q

Parallel forms reliability is evidence that…

a. A construct is stable

b. A construct is valid

c. Researchers agree on their ratings

d. Two tests measure the same thing
A

D

46
Q

The validity of a measure refers to the:

a. Comprehensiveness of the measurement

b. Consistency of the measurement

c. Particular type of construct specification

d. Accuracy with which it measures the construct
A

D

47
Q

Order effects are NOT problematic for which of the following types of reliability…

a. Test-retest

b. Inter-rater

c. Parallel forms

d. Internal consistency
A

B