Week 9 Flashcards

1
Q

What are the two types of surveys?

A
  1. Ad hoc

2. Omnibus

2
Q

What does Ad hoc mean?

A

single purpose

3
Q

What does Omnibus mean?

A

Multipurpose (the topics must be compatible)

4
Q

What are two ways in which surveys can be self-completed?

A
  1. Postal

2. Internet

5
Q

What are three ways that surveys can be administered?

A
  1. Interview
  2. Telephone
  3. Face to face
6
Q

What are the three biases involved in self-report surveys?

A
  1. memory
  2. knowledge
  3. social desirability
7
Q

What are the four limitations regarding surveys?

A
  1. bias of self report
  2. poor quality information
  3. sample bias (low response rate)
  4. interviewer bias
8
Q

What are the 5 strengths of surveys?

A
  1. low cost
  2. confidential
  3. standardised
  4. generate large amounts of data
  5. amenable to multivariate analysis
9
Q

What is the first step in questionnaire development and design?

A

Conducting the preliminary work. This includes consulting with experts in the field and conducting qualitative studies

10
Q

What is the second step in questionnaire development and design?

A

Determining:

  1. hypothesis/research questions
  2. appropriate theoretical framework
  3. variables to be studied
11
Q

What is the third step in questionnaire development and design?

A

Compiling research questions

12
Q

What are the 6 aspects to think about when compiling research questions?

A
  1. comprehensive - define terms
  2. short
  3. unambiguous
  4. not ‘leading’
  5. not negative
  6. appropriate - filter questions
13
Q

What is the fourth step in questionnaire development and design?

A

Pilot a draft questionnaire

14
Q

What is the fifth step in questionnaire development and design?

A

Revise the questionnaire

15
Q

What are the 6 types of responses involved in a questionnaire?

A
  1. yes/no
  2. multiple choice
  3. numeric open end
  4. text open end
  5. likert scales
  6. visual analogue scales
16
Q

Give an example of a numeric open end question?

A

How much did you spend on groceries this week? (number answer)

17
Q

Give an example of a text open end question on a questionnaire

A

How can our company improve its working conditions?

18
Q

What are two types of visual analogue scales?

A
  1. numeric rating scales

2. visual analogue scale (no numbers, just a mark on a line)

19
Q

In survey sampling, what does population refer to?

A

All possible cases

20
Q

What does sample mean in regard to survey sampling, and what are three ways to describe it?

A

A subset of a population

  1. derived from the population
  2. a microcosm of the population
  3. important characteristics distributed similarly to the population
21
Q

What are two ways in which sampling reduces random error?

A
  1. sample size- large enough so there is power to draw conclusions or uphold the null hypothesis
  2. sample representativeness -conclusions can be generalised to the population
22
Q

What does random mean in probability sampling?

A

all members of a population have equal chance of being selected
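As a minimal sketch (with a hypothetical population), simple random sampling can be done with the standard library's `random.sample`, which draws without replacement and gives every member an equal chance of selection:

```python
import random

# Hypothetical population of 100 members, identified by number
population = list(range(1, 101))

# Draw a simple random sample of 10: every member has an equal
# chance of being selected, and no member is selected twice
sample = random.sample(population, 10)
print(sorted(sample))
```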

23
Q

What is the systematic way in probability sampling?

A

Where every “nth” person is selected
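A minimal sketch of systematic sampling (the population, names, and interval here are hypothetical): after a random start within the first interval, every nth member is selected.

```python
import random

def systematic_sample(population, n):
    """Select n members by taking every nth person after a random start."""
    interval = len(population) // n   # the "nth" step between selections
    start = random.randrange(interval)  # random starting point in the first interval
    return population[start::interval][:n]

people = [f"person_{i}" for i in range(100)]
sample = systematic_sample(people, 10)  # every 10th person
print(len(sample))  # 10
```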

24
Q

What is cluster when it comes to probability sampling?

A

A naturally occurring unit including a range of characteristics (schools, hospitals, retail outlets, etc)

25
Q

What is stratified random sampling in relation to probability sampling?

A

Divide population into subgroups from each of which a random sample is drawn
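A minimal sketch of stratified random sampling (the population and strata are hypothetical): the population is divided into subgroups, and a random sample is drawn from each.

```python
import random

def stratified_sample(population, key, per_stratum):
    """Divide the population into strata by `key`, then draw a random sample from each."""
    strata = {}
    for member in population:
        strata.setdefault(key(member), []).append(member)
    return {name: random.sample(group, min(per_stratum, len(group)))
            for name, group in strata.items()}

# Hypothetical population: (id, age_group) pairs
people = [(i, "young" if i % 2 else "old") for i in range(50)]
sample = stratified_sample(people, key=lambda p: p[1], per_stratum=5)
print({k: len(v) for k, v in sample.items()})  # {'old': 5, 'young': 5}
```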

26
Q

Finish the sentence: a random sample may not include

A

the full range of relevant characteristics

27
Q

What are two types of non-probability sampling?

A

convenience - enlist as people appear

snowball - respondents nominate others to take part

28
Q

What is one type of representative sampling (e.g. in terms of demographic characteristics)?

A

Quota sampling - enlist a given number within a category, e.g. make the proportion in each age group the same as or similar to that of the general population.

29
Q

What is representative sampling prone to?

A

Selection bias

30
Q

What are three ways to overcome measurement error?

A
  1. response rate
  2. reliability
  3. validity
31
Q

Define response rate?

A

Number who respond divided by the number eligible

- over-sampling is advised to compensate for non-response
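The definition above is a simple ratio; as a sketch with hypothetical numbers:

```python
def response_rate(responded, eligible):
    """Response rate = number who responded / number eligible."""
    return responded / eligible

# Hypothetical survey: 1000 eligible respondents, 650 returned the questionnaire
rate = response_rate(650, 1000)
print(f"{rate:.0%}")  # 65%
```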

32
Q

What does reliability mean in terms of questionnaires/surveys?

A

How reproducible the results arising from the questionnaire/scale are

33
Q

What is test-retest in testing reliability in surveys/questionnaires?

A

The same respondents repeat the survey on two or more occasions

34
Q

What is alternative form?

A

Same respondents completing two versions of the questionnaire

35
Q

What is split-half in testing reliability?

A

Two scales measuring the same factors

36
Q

What is the correlation that reflects an instrument's reliability?

A

r > 0.70

37
Q

What does validity mean in regard to a questionnaire/survey? (2)

A
  1. how well the questionnaire measures what it claims to measure
  2. the degree to which your measure assesses what it is supposed to measure
38
Q

What are the four types of validity that may need to be tested in a questionnaire?

A
  1. face validity
  2. content validity
  3. construct validity
  4. criterion related validity
39
Q

What are the two types of criterion related validity?

A
  1. Predictive validity

2. concurrent validity

40
Q

What does face validity mean?

A

How appropriate the test items appear; do the items appear to be measuring what you want ‘on the face of it’?

41
Q

How is face validity assessed when it is assessed informally?

A

Using the Big 5 extraversion item

42
Q

Define content validity

A

Extent to which the specification of the test actually reflects the purpose of the test

43
Q

In testing content validity, one way is to make sure the items are appropriate. The items need to be judged by experts in the field to be

A
  1. inclusive

2. relevant

44
Q

Using the example of Beck’s Depression Inventory (BDI), show how content validity would be assessed: (3)

A
  1. Does Beck’s Depression inventory really measure depression?
  2. Does it cover all aspects of depression such as behaviour, cognitions, emotions?
  3. Items must assess every aspect of a construct which is evidenced in the literature, e.g. self-regulation of emotion should cover both positive and negative emotions
45
Q

What is concurrent validity?

A

The correlation between a new test and existing tests which claim to measure the same construct (e.g revised or shorter version of an existing test)

46
Q

What are two examples of pairs of tests, claiming to measure the same construct, whose concurrent validity may be tested against each other?

A
  1. the Profile of Mood States and the Positive and Negative Affect Scale
  2. the Dutch Eating Behaviour Questionnaire and the Three-Factor Eating Questionnaire
47
Q

What does predictive testing in concurrent validity look like?

A

One test administered before the other

48
Q

What does concurrent testing in concurrent validity look like?

A

Tests given at the same time

49
Q

Define predictive validity

A

Measure of how well one variable, or set of variables, predicts an outcome based on information from other variables

50
Q

How is predictive validity measured?

A

By the correlation between the test and the future event, e.g. does the disinhibition scale (a measure of the tendency to overeat) predict weight status?

51
Q

Define construct validity

A

how well the instrument performs in practical situations

52
Q

What does convergent relate to in construct validity?

A

The degree to which the instrument produces similar results to other measures and methods researching the same variables

53
Q

what does divergent mean in terms of construct validity?

A

The degree to which the instrument can distinguish between groups known to vary in relation to the variables being researched

54
Q

Define criterion (or concrete) validity

A

Criterion or concrete validity is the extent to which the measures are demonstrably related to concrete criteria in the “real” world.

55
Q

What is one example of criterion related validity?

A

Does the BDI distinguish between clinical and non-clinical depression and/or between depressed and non-depressed individuals?

56
Q

What is another example of criterion validity?

A

Is disinhibition (the tendency to overeat) related to body weight, i.e. do those with a higher body weight have higher disinhibition?

57
Q

What does predictive mean in regard to criterion validity?

A

Ability to predict outcomes influenced by the variables researched

58
Q

What does concurrent mean in regard to criterion validity?

A

How well the items compare with other ways of measuring the same variable

59
Q

Can a measure be reliable and not valid?

A

Yes (e.g. intelligence in children and shoe size)

60
Q

If a measure is valid, what does this mean in terms of reliability?

A

If a measure is valid, it is usually reliable.

61
Q

What is the first way to describe factor analysis?

A
  1. used to analyse interrelationships among a large number of variables and to explain these variables in terms of their common underlying dimensions (factors)
62
Q

What is the second way to describe factor analysis?

A

A way of condensing the information contained in a number of original variables into a smaller set of dimensions (factors) with a minimum loss of information

63
Q

What does factor analysis determine?

A

If the psychometric tool is measuring the same thing in, e.g., different populations, time points, locations, etc.

64
Q

What are 5 examples of factor analysed tools?

A
  1. the Three-Factor Eating Questionnaire
  2. Eysenck's Personality Inventory
  3. the Geriatric Depression Scale
  4. Cattell's 16 Personality Factor Questionnaire
  5. the Big 5 personality
65
Q

Criterion related validity can test between groups. Give an example of this in relation to disinhibition and hunger

A

Disinhibition and hunger are related to overeating. To assess the validity of this, we assess the association with body weight and satiety to see if those who are heavier have a higher overeating score