211 Unit #5 Flashcards

1
Q

What is reliability?

A

How dependable, consistent, accurate, and comparable the measurement is.

2
Q

What is stability?

A

Consistency of repeated measurements (reproducibility).

3
Q

What is equivalence?

A

Agreement between measures obtained by two or more observers using the same instrument (inter-rater reliability).

4
Q

What is homogeneity?

A

Internal consistency: the correlation of the various items within the instrument; all items within the tool measure the same concept.

5
Q

What are four measurement tools used to assess homogeneity?

A

1) Item-to-total correlation
2) Split-half reliability
3) Kuder-Richardson coefficient (KR-20; dichotomous items: T or F, A or B, yes or no)
4) Cronbach's alpha (Likert-type scales)
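
For a concrete sense of how the first two of these are calculated, here is a minimal Python/NumPy sketch using made-up scores (five respondents, four items). The Spearman-Brown step shown for the split-half estimate is the usual correction for test length; it is added for illustration and is not stated on these cards.

```python
import numpy as np

# Made-up scores: rows = 5 respondents, columns = 4 items of one scale.
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
], dtype=float)

# 1) Item-to-total correlation: correlate each item with the total scale score.
total = scores.sum(axis=1)
for i in range(scores.shape[1]):
    r = np.corrcoef(scores[:, i], total)[0, 1]
    print(f"item {i + 1} to total: r = {r:.2f}")

# 2) Split-half reliability: correlate the two halves of the scale, then apply
#    the Spearman-Brown correction to estimate full-length reliability.
half_a = scores[:, ::2].sum(axis=1)   # items 1 and 3
half_b = scores[:, 1::2].sum(axis=1)  # items 2 and 4
r_halves = np.corrcoef(half_a, half_b)[0, 1]
split_half = 2 * r_halves / (1 + r_halves)
print(f"split-half reliability (Spearman-Brown) = {split_half:.2f}")
```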

6
Q

What is validity?

A

Whether a measurement instrument accurately measures what it is intended to measure.

7
Q

What are the three types of validity testing?

A

Content, criterion-related, and construct validity.

8
Q

What is content validity?

A

The degree to which the instrument's content represents the phenomenon being studied; typically determined by a panel of experts.

9
Q

What is construct validity?

A

The extent to which a test measures a theoretical construct; it also validates the underlying theory.

10
Q

What is criterion-related validity?

A

The relationship of the instrument to some already known external criterion.

11
Q

What are the two types of criterion-related validity?

A

Predictive and concurrent validity.

12
Q

What is predictive validity?

A

The ability of the instrument to predict an individual's future behaviour.

13
Q

What is concurrent validity?

A

How well the instrument concurs with another instrument known to be valid when both are administered at the same time.

14
Q

What are the two types of measurement error?

A

Random error and systematic error.

15
Q

What is random error?

A

Error that varies in any direction; a reliability concern.

16
Q

What is systematic error?

A

Constant error in the same direction; a validity concern.

17
Q

Why should the validity and reliability of study instruments be assessed while critiquing research reports?

a. To determine the utility of the instruments for triangulation.
b. To assess the relationships between hypotheses and research questions.
c. To determine whether concepts and variables were measured adequately.
d. To assess whether the concept under study is being treated as a dependent variable or an independent variable.

A

c. To determine whether concepts and variables were measured adequately.

18
Q

The validity of a new instrument developed to measure peripheral neuropathy has been determined to be very high. What does this attribute mean?

Select one:

a. It is sensitive but not specific.
b. Its use results in minimal random errors.
c. It accurately measures peripheral neuropathy.
d. Determination of inter-rater reliability is unnecessary.

A

c. It accurately measures peripheral neuropathy.

19
Q

Which of the following characteristics describes an instrument that is administered repeatedly and obtains the same results?

Select one:

a. Validity
b. Reliability
c. Consistency
d. Predictability

A

b. Reliability

20
Q

The reliability coefficient of a new instrument designed to measure anxiety is established at 0.86. What is the correct interpretation of this finding?

Select one:

a. High error variance; high reliability
b. High error variance; low reliability
c. Low error variance; high reliability
d. Low error variance; low reliability

A

c. Low error variance; high reliability
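
This reading follows the usual classical-test-theory interpretation, under which a reliability coefficient estimates the proportion of observed-score variance attributable to true scores; a minimal sketch of that arithmetic, assuming that interpretation:

```python
# Classical test theory: reliability = true-score variance / observed-score variance,
# so whatever remains is attributable to error.
reliability = 0.86
error_proportion = 1 - reliability
print(f"proportion of variance due to error = {error_proportion:.2f}")  # 0.14: low error variance, high reliability
```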

21
Q

A researcher developed an instrument to measure self-esteem and administered it to a group of individuals who were intravenous substance abusers and to a group of people who were not, expecting to see significant differences in scores between the two groups. How should this method of establishing construct validity be categorized?

Select one:

a. Convergent validity.
b. Discriminant validity
c. Contrasted-groups approach
d. Factor analysis

A

c. Contrasted-groups approach

22
Q

Testing of a new instrument demonstrates that it has a high degree of internal consistency. What does this mean?

Select one:

a. The instrument has low measurement error and high error variance.
b. The instrument is valid, but the reliability has yet to be determined.
c. The instrument is appropriate to measure a single concept.
d. More refinement of the instrument is needed before it can be applied.

A

c. The instrument is appropriate to measure a single concept. Internal consistency or homogeneity reliability indicates that the items within the scale measure the same concept.

23
Q

Under what condition should a Kuder-Richardson (KR-20) coefficient be used to establish the internal consistency of an instrument?

Select one:

a. When questions or statements require a yes or no response
b. When questions are open-ended
c. When the instrument uses a Likert-type response scale
d. When the instrument is designed to measure more than one concept

A

a. When questions or statements require a yes or no response. The KR-20 coefficient provides estimates of homogeneity for instruments that have a dichotomous response format.
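
To make the KR-20 concrete, here is a small Python sketch with made-up dichotomous (1 = yes, 0 = no) responses; the formula shown is the standard KR-20 definition rather than anything taken from these cards.

```python
import numpy as np

# Made-up responses: rows = 6 respondents, columns = 5 dichotomous items.
answers = np.array([
    [1, 1, 1, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 0],
], dtype=float)

k = answers.shape[1]                   # number of items
p = answers.mean(axis=0)               # proportion answering "yes" for each item
item_var_sum = np.sum(p * (1 - p))     # sum of item variances (p * q)
total_var = answers.sum(axis=1).var()  # variance of total scores (population form, matching p * q)

kr20 = (k / (k - 1)) * (1 - item_var_sum / total_var)
print(f"KR-20 = {kr20:.2f}")
```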

24
Q

What type of validity is demonstrated in measuring the cognitive knowledge of wound care by (1) administering a test in which all the items relate to wound care and (2) evaluating students’ performance in caring for patients with wounds in the clinical setting?

Select one:

a. Construct validity
b. Content validity
c. Criterion-related validity
d. Face validity

A

c. Criterion-related validity

25
Q

Under what condition should Cronbach’s alpha coefficient be used to establish the internal consistency of an instrument?

Select one:

a. When the instrument uses a Likert-type response scale.
b. When the instrument is designed to measure more than one concept.
c. When questions or statements require a yes or no response.
d. When questions are open-ended.

A

a. When the instrument uses a Likert-type response scale.
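
For comparison with the KR-20 sketch above, here is a minimal Python example of Cronbach's alpha on made-up 1-5 Likert-type ratings; again, the formula is the standard alpha definition, not something stated on these cards.

```python
import numpy as np

# Made-up ratings: rows = 5 respondents, columns = 4 Likert-type items (1-5).
ratings = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
], dtype=float)

k = ratings.shape[1]
item_variances = ratings.var(axis=0, ddof=1)       # variance of each item
total_variance = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scale scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```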

26
Q

The aspect of reliability for which inter-rater reliability is appropriate is:

Select one:

a. homogeneity.
b. internal consistency.
c. stability.
d. equivalence.

A

d. equivalence.