Week 9 Flashcards
What are the two types of surveys?
1. Ad hoc
2. Omnibus
What does Ad hoc mean?
single purpose
What does Omnibus mean?
Multipurpose (the topics must be compatible)
What are two ways in which surveys can be self-completed?
1. Postal
2. Internet
What are three ways that surveys can be administered?
- Interview
- Telephone
- Face to face
What are the three biases involved with self-report surveys?
- memory
- knowledge
- social desirability
What are the four limitations regarding surveys?
- bias of self report
- poor quality information
- sample bias (low response rate)
- interviewer bias
What are the 5 strengths of surveys?
- low cost
- confidential
- standardised
- generate large amounts of data
- amenable to multivariate analysis
What is the first step in questionnaire development and design?
Conducting the preliminary work. This includes consulting with experts in the field and conducting qualitative studies
What is the second step in questionnaire development and design?
Determining:
- hypothesis/research questions
- appropriate theoretical framework
- variables to be studied
What is the third step in questionnaire development and design?
Compiling research questions
What are the 6 aspects to think about when compiling research questions?
- comprehensive (define terms)
- short
- unambiguous
- not ‘leading’
- not negative
- appropriate (filter questions)
What is the fourth step in questionnaire development and design?
Pilot a draft questionnaire
What is the fifth step in questionnaire development and design?
Revise the questionnaire
What are the 6 types of responses involved in a questionnaire?
- yes/no
- multiple choice
- numeric open end
- text open end
- Likert scales
- visual analogue scales
Give an example of a numeric open end question?
How much did you spend on groceries this week? (number answer)
Give an example of a text open end question on a questionnaire
How can our company improve its working conditions?
What are two types of visual analogue scales?
1. Numeric rating scales
2. Visual analogue scales (no numbers, just a mark on a line)
In survey sampling, what does population refer to?
All possible cases
What does sample mean in regard to survey sampling, and what are three ways to describe it?
A subset of a population
- derived from the population
- a microcosm of the population
- important characteristics distributed similarly to the population
What are two ways in which sampling reduces random error?
- sample size: large enough so there is power to draw conclusions or uphold the null hypothesis
- sample representativeness: conclusions can be generalised to the population
What does random mean in probability sampling?
all members of a population have equal chance of being selected
What is systematic sampling in probability sampling?
Where every “nth” person is selected
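A minimal sketch in Python (using a hypothetical list of participant IDs) of how simple random and systematic selection differ:

```python
import random

population = list(range(1, 101))  # hypothetical sampling frame of 100 IDs

# Simple random sampling: every member has an equal chance of selection
random_sample = random.sample(population, k=10)

# Systematic sampling: every "nth" person after a random starting point
n = 10                            # sampling interval
start = random.randrange(n)       # random start within the first interval
systematic_sample = population[start::n]

print(random_sample)
print(systematic_sample)
```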
What is cluster sampling in probability sampling?
A naturally occurring unit including a range of characteristics (schools, hospitals, retail outlets, etc)
What is stratified random sampling in relation to probability sampling?
Divide population into subgroups from each of which a random sample is drawn
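A hedged sketch of stratified random sampling, assuming a made-up population divided into age-group strata; a random sample is drawn from each subgroup:

```python
import random

# Hypothetical population divided into subgroups (strata)
strata = {
    "18-34": [f"A{i}" for i in range(200)],
    "35-64": [f"B{i}" for i in range(300)],
    "65+":   [f"C{i}" for i in range(100)],
}

# Draw a random sample from each stratum (here, 10% of each subgroup)
stratified_sample = {
    group: random.sample(members, k=len(members) // 10)
    for group, members in strata.items()
}

for group, sample in stratified_sample.items():
    print(group, len(sample))
```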
Finish the sentence: a random sample may not include
the full range of relevant characteristics
What are two types of non-probability sampling?
Convenience - enlist people as they appear
Snowball - respondents nominate others to take part
What is one type of representative sampling (e.g in terms of demographic characteristics)?
Quota sampling - enlist a given number within each category, e.g. make the proportion in each age group the same as or similar to that of the general population.
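A rough illustration of quota sampling with invented quotas proportional to the general population; recruitment in each category stops once its quota is filled:

```python
# Hypothetical quotas matching population age proportions
quotas = {"18-34": 30, "35-64": 50, "65+": 20}
filled = {group: 0 for group in quotas}
enrolled = []

def try_enrol(person_id, age_group):
    """Enlist a respondent only while their category's quota is unfilled."""
    if filled[age_group] < quotas[age_group]:
        filled[age_group] += 1
        enrolled.append((person_id, age_group))
        return True
    return False  # quota already met, respondent not enrolled

try_enrol("P001", "18-34")  # hypothetical respondent
```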
What is representative sampling prone to?
Selection bias
What are three ways to overcome measurement error?
- response rate
- reliability
- validity
Define response rate.
Number who respond divided by the number eligible
- over-sampling is advised to compensate for non-response
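A one-line worked example of the response-rate formula above, with hypothetical numbers:

```python
eligible = 500   # number of eligible people invited
responded = 320  # number who completed the survey

response_rate = responded / eligible
print(f"Response rate: {response_rate:.0%}")  # Response rate: 64%
```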
What does reliability mean in terms of questionnaires/surveys?
How reproducible the results arising from the questionnaire/scale are
What is test-retest in testing reliability in surveys/questionnaires?
The same respondents repeat the survey on two or more occasions
What is alternative form in testing reliability?
The same respondents completing two versions of the questionnaire
What is split-half in testing reliability?
Two scales measuring the same factors
What is the correlation that reflects an instrument's reliability?
r ≥ 0.70
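A minimal sketch, assuming invented test-retest scores, of checking reliability as a correlation against the r ≥ 0.70 rule of thumb:

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same respondents on two occasions (test-retest)
time1 = [12, 18, 25, 30, 22, 15, 28, 20]
time2 = [14, 17, 27, 29, 21, 16, 26, 22]

r, p = pearsonr(time1, time2)
print(f"r = {r:.2f}, reliable: {r >= 0.70}")
```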
What does validity mean in regard to a questionnaire/survey? (2)
1. How well the questionnaire measures what it claims to measure; 2. the degree to which your measure assesses what it is supposed to measure
What are the four types of validity that may need to be tested in a questionaire?
- face validity
- content validity
- construct validity
- criterion related validity
What are the two types of criterion related validity?
1. Predictive validity
2. Concurrent validity
What does face validity mean?
How appropriate the test items appear; do the items appear to be measuring what you want ‘on the face of it’?
How is face validity assessed when it is assessed informally?
For example, by inspecting the Big 5 extraversion items
Define content validity
Extent to which the specification of the test actually reflects the purpose of the test
In testing content validity, one way is to make sure the items are appropriate. The items need to be judged by experts in the field to be:
1. Inclusive
2. Relevant
Using the example of Beck’s Depression Inventory (BDI), show how content validity would be assessed: (3)
- Does Beck’s Depression inventory really measure depression?
- Does it cover all aspects of depression such as behaviour, cognitions, emotions?
- Items must assess every aspect of a construct which is evidenced in the literature, e.g. self-regulation of emotion should cover both positive and negative emotions
What is concurrent validity?
The correlation between a new test and existing tests which claim to measure the same construct (e.g revised or shorter version of an existing test)
What are two examples of pairs of tests claiming to measure the same construct, between which concurrent validity may be tested?
- Profile of Mood States and the Positive and Negative Affect Scale
- Dutch Eating Behaviour Questionnaire and the Three-Factor Eating Questionnaire
What does predictive testing in concurrent validity look like?
One test administered before the other
What does concurrent testing in concurrent validity look like?
Tests given at the same time
Define predictive validity.
Measure of how well one variable, or set of variables, predicts an outcome based on information from other variables
How is predictive validity measured?
By the correlation between the test and the future event, e.g. does the disinhibition scale (a measure of the tendency to overeat) predict weight status?
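A minimal sketch of that idea with hypothetical numbers, correlating baseline disinhibition scores with a later outcome; the same pattern works for concurrent validity when the two measures are taken at the same time:

```python
import numpy as np

# Hypothetical data: baseline disinhibition scores and BMI measured a year later
disinhibition = [3, 7, 5, 9, 2, 8, 6, 4]
bmi_followup  = [21.5, 27.0, 24.2, 31.8, 20.1, 29.4, 25.6, 23.0]

# Predictive validity: correlation between the test and the future outcome
r = np.corrcoef(disinhibition, bmi_followup)[0, 1]
print(f"Predictive validity r = {r:.2f}")
```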
Define construct validity
how well the instrument performs in practical situations
What does convergent relate to in construct validity?
The degree to which the instrument produces similar results to other measures and methods researching the same variables
what does divergent mean in terms of construct validity?
The degree to which the instrument can distinguish between groups known to vary in relation to the variables being researched
Define criterion (or concrete) validity
Criterion or concrete validity is the extent to which the measures are demonstrably related to concrete criteria in the “real” world.
What is one example of criterion related validity?
Does the BDI distinguish between clinical and non-clinical depression and/or between depressed and non-depressed individuals?
What is another example of criterion validity?
Disinhibition (the tendency to overeat) is related to body weight; do those with a higher body weight have higher disinhibition scores?
What does predictive mean in regard to criterion validity?
Ability to predict outcomes influenced by the variables researched
What does concurrent mean in regard to criterion validity?
How well the items compare with other ways of measuring the same variable
Can a measure be reliable and not valid?
Yes (e.g. using shoe size as a measure of intelligence in children)
If a measure is valid, what does this mean in terms of reliability?
If a measure is valid, it is usually reliable.
What is the first way to describe factor analysis?
- used to analyse interrelationships among a large number of variables and to explain these variables in terms of their common underlying dimensions (factors)
What is the second way to describe factor analysis?
A way of condensing the information contained in a number of original variables into a smaller set of dimensions (factors) with a minimum loss of information
What does factor analysis determine?
If the psychometric tool is measuring the same thing in, e.g., different populations, time points, locations, etc.
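A hedged sketch using scikit-learn's FactorAnalysis, with random stand-in questionnaire data, to condense many items into a few underlying factors:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Stand-in data: 200 respondents answering 12 questionnaire items (1-5 Likert)
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 12)).astype(float)

# Condense the 12 items into 3 underlying dimensions (factors)
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(responses)   # factor scores per respondent
loadings = fa.components_              # how strongly each item loads on each factor

print(scores.shape)    # (200, 3)
print(loadings.shape)  # (3, 12)
```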
What are 5 examples of factor analysed tools?
- the Three-Factor Eating Questionnaire
- Eysenck's Personality Inventory
- the Geriatric Depression Scale
- Cattell's 16 Personality Factor Questionnaire
- the Big 5 personality traits
Criterion-related validity can test between groups. Give an example of this in relation to disinhibition and hunger.
Disinhibition and hunger are related to overeating. To assess the validity of this, we assess the association with body weight and satiety to see if those who are heavier have a higher overeating score.