Chapter 4 Flashcards

1
Q

What is conceptualization, and how is it relevant to a research study?

A

The process of specifying what we mean by a term. In deductive research, conceptualization helps translate portions of an abstract theory into testable hypotheses involving specific variables. In inductive research, conceptualization is an important part of the process used to make sense of related observations.

2
Q

How do you identify nominal measurement?

A

Mutually exclusive categories, exhaustive

3
Q

How do you identify ordinal measurement?

A

Mutually exclusive categories, order hierarchy

4
Q

How do you identify interval measurement?

A

Mutually exclusive categories, order hierarchy, equal spacing between values (numerical)

5
Q

How do you identify ratio measurement?

A

Mutually exclusive categories, order hierarchy, equal spacing between values (numerical), true zero/absolute zero

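The interval-versus-ratio distinction in the cards above can be made concrete in code. A minimal Python sketch (not part of the original deck) showing that, on an interval scale like Celsius, differences survive a change of scale but ratios do not, because the zero point is arbitrary:

```python
# Celsius is an interval scale: equal spacing between values, but no true zero.
# Weight in kilograms, by contrast, is ratio: 0 kg really means "no weight."

def celsius_to_fahrenheit(c):
    """Convert Celsius to Fahrenheit (a valid rescaling of an interval scale)."""
    return c * 9 / 5 + 32

# Differences are preserved: a 10-degree-C gap is an 18-degree-F gap everywhere.
assert (celsius_to_fahrenheit(40) - celsius_to_fahrenheit(30)
        == celsius_to_fahrenheit(20) - celsius_to_fahrenheit(10))

# Ratios are not preserved: 40C / 20C = 2.0, but the same two temperatures
# in Fahrenheit give 104 / 68 (about 1.53), so "twice as hot" is meaningless
# on a scale without a true zero.
print(40 / 20)
print(celsius_to_fahrenheit(40) / celsius_to_fahrenheit(20))
```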
6
Q

What are the two types of measurement error?

A

Random error, which is not consistent or predictable, and systematic error, which is consistent and patterned

7
Q

What are the three types of systematic errors?

A

Social desirability, acquiescence bias, leading questions

8
Q

What is the difference between reliability and validity?

A

Reliability: a measure’s ability to yield consistent or equivalent results/scores each time it is applied (when the phenomenon itself is not changing).
Validity: the accuracy of a measure; it exists when an instrument really measures what we think it measures.

9
Q

What is test-retest reliability?

A

A commonly used type of measurement reliability that involves giving a group of people a test and then, later, giving them the same test again to compare the results

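Test-retest reliability is commonly summarized as the correlation between scores from the two administrations. A minimal Python sketch (the scores are made-up illustrative data, not from the deck):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14, 16]   # first administration (hypothetical scores)
time2 = [13, 14, 10, 19, 15, 16]   # same test, same people, given again later

r = pearson_r(time1, time2)
print(round(r, 3))  # values near 1.0 suggest good test-retest reliability
```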
10
Q

When giving a test multiple times, what are some of the potential effects?

A

Recalling previous answers, changing responses for variety, and becoming bored with the instrument

11
Q

Define alternative or multiple forms

A

A type of measurement reliability that involves creating two separate but equivalent forms of the same instrument and administering both to the same group of people during the same session

12
Q

Define internal consistency approaches

A

A type of measurement reliability that involves using a single scale administered to the same people to establish reliability

13
Q

What is an example of an internal consistency approach?

A

The split-half approach, which involves administering one test, randomly dividing its items into two halves, and assessing whether scores on the two halves are correlated

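The split-half procedure can be sketched in Python. The item responses below are hypothetical, and the final Spearman-Brown step (a standard correction in this context, though the deck does not name it) estimates full-length-test reliability from the half-test correlation:

```python
import random
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Each row: one respondent's answers to a 6-item scale (made-up data).
responses = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 4, 5],
    [1, 2, 1, 1, 2, 1],
]

# Randomly split the items into two halves and total each half per respondent.
items = list(range(6))
random.seed(0)  # fixed seed so the random split is reproducible
random.shuffle(items)
half_a, half_b = items[:3], items[3:]
totals_a = [sum(row[i] for i in half_a) for row in responses]
totals_b = [sum(row[i] for i in half_b) for row in responses]

r_half = pearson_r(totals_a, totals_b)
# Spearman-Brown correction: projects half-test reliability to full length.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 3), round(r_full, 3))
```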
14
Q

Define interrater reliability

A

A type of measurement reliability; the degree of agreement when similar measures are obtained by different observers using the same people, events, or places

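Interrater agreement is often quantified with Cohen's kappa (a standard statistic for two raters, though the deck does not name it), which corrects raw agreement for the agreement expected by chance. A sketch with hypothetical ratings:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical codes to the same cases."""
    n = len(rater1)
    # Proportion of cases where the two raters actually agree.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Two observers coding the same 10 events (made-up data).
obs1 = ["A", "A", "B", "B", "A", "C", "B", "A", "C", "B"]
obs2 = ["A", "A", "B", "A", "A", "C", "B", "A", "C", "C"]

print(round(cohens_kappa(obs1, obs2), 3))  # 1.0 would be perfect agreement
```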
15
Q

Define intrarater reliability

A

Consistency of ratings by an observer of an unchanging phenomenon at two or more points in time

16
Q

Define face validity

A

It is the logical relationship between the variable and the proposed measure

17
Q

Define content validity or sampling validity

A

Whether the measure covers the full range of meanings or forms of the concept. An adequate measure provides a representative sample of all the content, elements, or instances of the phenomenon being measured

18
Q

Define criterion validity and list the two types

A

Comparing your measure to another existing measure. The two types are concurrent validity and predictive validity

19
Q

Define concurrent validity

A

A type of criterion validity that compares the instrument under evaluation to some already-existing criterion, such as the results of another measuring device that has already been established as valid

20
Q

Define predictive validity

A

Assessing whether the instrument predicts some future behavior or state of affairs

21
Q

Define construct validity

A

Very complex; involves relating a tool to an overall theoretical framework to determine whether the instrument is correlated with all the concepts and propositions of the framework. Example: the multitrait-multimethod approach

22
Q

Define the multitrait-multimethod approach

A

Based on two principles: (1) two instruments that are valid measures of the same concept should correlate with each other, and (2) two instruments, even if similar, should not correlate if they do not measure the same concept

23
Q

What are six ways to improve the reliability and validity of existing measures?

A

Engage in extensive conceptual development; better train those who will administer the measuring devices; get feedback from research subjects regarding the measurement device; use a higher level of measurement; use more indicators of a variable; and conduct an item-by-item assessment of multiple-item measures

24
Q

What is a concept?

A

A mental image that summarizes a set of similar observations, feelings, or ideas

25
Q

If we have a scale to assess depression and we decide to establish a cutoff point of 60 points, what have we done?

A

We have operationalized our variable

26
Q

Julie included the variable age in her instrument. What level of measurement is this?

A

Not enough information to answer; age could be recorded as exact years (ratio) or collapsed into ordered categories (ordinal)

27
Q

Measurement moves from the ____ to the ____.

A

Abstract to concrete

28
Q

In research, what are the two types of definitions for concepts?

A

A nominal definition is the dictionary-type definition; operationalization makes the concept measurable.