Week 4: Survey Methods Flashcards

1
Q

What is descriptive research?

A
  • Descriptive research: description of individual variables
  • Describing how things are, rather than explaining why they are like that
  • Observational & Survey Research
  • RQ: What is the typical number of hours spent studying each week?
  • Survey the participants then perform descriptive statistics
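A minimal Python sketch (hypothetical numbers, not from the lecture) of that descriptive step: gather the responses, then summarise them with descriptive statistics.

import statistics

# Hypothetical survey responses: hours spent studying each week
study_hours = [12, 8, 15, 10, 20, 9, 14, 11]

print("Mean:", statistics.mean(study_hours))            # typical value
print("Median:", statistics.median(study_hours))
print("SD:", round(statistics.stdev(study_hours), 2))   # spread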
2
Q

Advantages of surveys?

A
  • Overcomes the limits of behavioural observation
  • Allows gathering large amounts of information fairly easily
  • Online surveys enable global research
3
Q

Disadvantages of surveys?

A
  • Based on self-report
  • Biases, e.g. social desirability (intentional or unintentional)
  • Memory errors & limits of insight
4
Q

An example of a research survey that measures individual differences?

A

E.g., the Five Factor Model of personality (Costa & McCrae, 1992)
Measures psychological traits or characteristics

5
Q

An example of a research survey that measures ability?

A

Intelligence tests

6
Q

An example of a research survey that measures attitudes?

A

E.g. a Likert-scale questionnaire about how much you like your work
(Measure particular beliefs toward something e.g. work)
e.g. I am satisfied with the work I do.
Strongly disagree 1 2 3 4 5 6 7 Strongly agree

7
Q

What two things should you consider ethically/morally about surveys?

A

1) Need to ensure that the measures being used are reliable and valid.
2) Be aware of equal opportunities and cultural biases.
Knife example

8
Q

What are open format questions?

Adv and Dis?

A

Introduces a topic and allows participants to respond in their own words. There is no predetermined set of responses.

“Tell us about the occasions when you have been academically vindictive.”

Adv: Leads to richer qualitative data and gives participants the greatest flexibility. May reveal things the interviewer hadn't thought about before.

Dis: Time-consuming AND difficult to analyse. Difficult to compare answers with different content across participants. Sometimes the researcher might have to impose their own subjective interpretation on an answer in order to code it.

9
Q

What are closed format questions?

Adv and Dis?

A

Short questions or statements followed by a number of options. There is a limited number of response alternatives, like a multiple-choice question.

E.g. I feel bitter towards those who do better than me on my course.
Strongly disagree Disagree Not certain Agree Strongly agree

Adv: Easy to analyse and summarise. Responses can be coded numerically.

Dis: You won't find out anything beyond what you were expecting.

10
Q

What are three ways you could get advice on how to write your questions for your survey?

A

Theoretical literature: Ideas that appear in the theoretical literature should be used as a basis.
Experts: Recruit experts in the area to suggest items.
Colleagues: Can help you to generate more items.

11
Q

What is the problem with this survey question and how could you fix it?

“If I had the opportunity, resources and ability to change other students’ exam grades so that mine was the best, I would do it.”

A

Problem: asking about too many factors. Respondents may concentrate on opportunity, resources, and ability to different extents.

Solution - remove ambiguity:
If I had the opportunity to change other students’ exam grades so that mine was the best, I would do it.

12
Q

Should we use or avoid leading questions?

A

Avoid.

13
Q

What are leading questions?

A

A question phrased in such a way that it leads the person to answer in a certain way.

e.g. “Mr Woolley, are you worried about the danger of war?”

e.g. “Do you agree with the majority of Australians that it is wrong to falsely tell a classmate the wrong exam date, so that they miss the exam?”

Problem:
Leads the respondent in a particular direction by indicating what “a majority of Australians” think.

Solution:
“Is it wrong to falsely tell a classmate the wrong exam date, so that they miss the exam?”

14
Q

What is reverse wording?

A

Reversing the wording/phrasing on some questions to get a stronger and more valid measure.

For example, if we want to measure retirement confidence we might ask respondents how much they agree with the statement “I am confident that I will be able to live comfortably in retirement” and “I worry about being able to make ends meet in retirement.”
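A small Python sketch (hypothetical item names and scores) of how a reverse-worded item is typically re-scored before items are combined, assuming a 1-7 response scale.

def reverse_score(score, scale_min=1, scale_max=7):
    # Flip the response so that agreeing with the reverse-worded item
    # counts as LOW retirement confidence rather than high.
    return scale_max + scale_min - score

confident_item = 6   # "I am confident that I will be able to live comfortably in retirement"
worry_item = 2       # "I worry about being able to make ends meet in retirement" (reverse-worded)

total_confidence = confident_item + reverse_score(worry_item)
print(total_confidence)   # 6 + 6 = 12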

15
Q

What does reverse wording reduce the problem of?

A

Positivity bias (someone who tends to agree with all questions).

Negativity bias (someone who tends to disagree with all questions).

It also raises questions about the data from people just clicking random answers to finish the survey.

16
Q

What is a dichotomous response format?

A

Only two opposite answers/options

e.g. yes/no, true/false

Problem: it is also a forced choice. This can be a bad thing: it removes nuance, and if respondents genuinely don't know or don't care, forcing them to choose isn't necessarily good either.
e.g. Do you suffer from nerves? Yes or no?

17
Q

What is a ‘frequency of behaviour’ response format?

A

Asks about activities/behaviour

e.g. I get upset and let my emotions out.
• I usually don’t do this at all
• I usually do this a little bit
• I usually do this a medium amount
• I usually do this a lot

18
Q

What response format has “Strongly agree – Strongly disagree”?

A

Likert scale

e.g. Overall, I expect more good things to happen to me than bad.
Strongly Disagree Disagree Not certain Agree Strongly Agree

19
Q

What is a numerical scale response format?

How does it differ from likert scale?

A

It is essentially a Likert-type scale and works in a similar way to the agree/strongly agree format,
BUT the end points are anchored semantically and the points in between are numbers.

e.g. I feel unsure of myself.
Not at all like me Very much like me
1 2 3 4 5 6 7

20
Q

Why are instructions important for a test?

A

Because they:

  • Can, for example, ask respondents not to overthink questions and to answer intuitively (e.g. a personality test), or ask them to consider their responses carefully.
  • Can indicate whether it is a state vs. trait questionnaire. State means “Indicate to what extent you feel this way right now, that is, at the present moment”, whereas trait means testing something generally stable over time.
21
Q

Explain this equation to me:

OBSERVED score = TRUE score + Error

What is it even referring to?

A

It is referring to the Classical theory of error in measurement. It is related to measuring reliability (consistency of a measure).

E.g. trying to measure your intelligence with IQ test.
Score you get is partially determined by your true score (actual level of intelligence, quite stable) + your error (a bunch of other factors like fatigue, level of health, current mood, hunger, luck).

Note: over multiple tests, the increases and decreases caused by error should average to zero.
Note: observed score can also be called measured score.

More notes:
• Standard error of measurement
• Universe of items
• All items correlate to some extent with the true score
• Reliability is related to the average correlation between items and test length
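A hedged Python sketch (simulated numbers, not lecture material) of the classical theory of error: repeatedly "measure" a fixed true score with random error, and the average observed score converges on the true score as the errors cancel out.

import random

true_score = 110                 # hypothetical true IQ, assumed stable
n_tests = 10_000

observed_scores = [true_score + random.gauss(0, 5)   # error: fatigue, mood, luck...
                   for _ in range(n_tests)]

print(round(sum(observed_scores) / n_tests, 2))      # close to 110: error averages to ~0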

22
Q

OBSERVED score = TRUE score + Error

If error is small, my scores would be relatively …..
from one measurement to another, and therefore……..

A

consistent

reliable

23
Q

OBSERVED score = TRUE score + Error

If error is big, my scores would be relatively …..
from one measurement to another, and therefore…….

A

inconsistent

unreliable

24
Q

What are four measures of internal reliability?

A

– Split-half reliability
– Parallel forms
– Cronbach’s Alpha
– KR-20

25
Q

What is split half internal reliability?

A

Internal reliability: are all of the questions measuring the same thing?
Split-half: take the first half and the second half of the test and calculate a score for each;
the scores on the first half and the second half should correlate with each other.

Note:
Items are split into two halves, based on:
• Odd vs. even numbers
• Randomly selecting items for each half
• First half vs. second half of the test

Correlate the total scores for each half
Pearson's r correlation of 0.80 or higher indicates good/adequate reliability
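A rough Python sketch (simulated data) of split-half reliability using the odd vs. even split, then a Pearson correlation between the two half-scores.

import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 50, 10
trait = rng.normal(0, 1, size=(n_people, 1))                      # each person's true level
responses = trait + rng.normal(0, 1, size=(n_people, n_items))    # item = true level + error

odd_half = responses[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
even_half = responses[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...

r = np.corrcoef(odd_half, even_half)[0, 1]
print(f"Split-half r = {r:.2f}")             # 0.80 or higher = good/adequate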

26
Q

What is parallel forms and how does it test internal reliability?

A

• Create a large pool of items.
• Randomly divide the items into two separate tests.
• Administer the two tests to the same participants.
• Calculate the correlation between the two forms.
Problem: Difficult to generate the large number of items required.

27
Q

What is Cronbach’s alpha? What does it measure?

A

Measures internal reliability

• Cronbach’s Alpha is mathematically equivalent to the average of all possible split- half estimates.
• Usually a figure of +0.7 or greater indicates acceptable internal reliability.
• Calculates the correlation for every possible combination of half one and half two of the test.
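A hand-rolled Python sketch of the alpha calculation on hypothetical data (in practice you would normally use a stats package; this just illustrates the idea).

import numpy as np

def cronbach_alpha(items):
    # items: participants x items matrix of scores
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical 5 participants x 4 items, all meant to measure the same construct
data = np.array([[4, 5, 4, 5],
                 [2, 3, 2, 2],
                 [5, 5, 4, 5],
                 [1, 2, 1, 2],
                 [3, 3, 4, 3]])

print(round(cronbach_alpha(data), 2))   # +0.7 or greater = acceptable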

28
Q

What does this do:

Kuder-Richardson Formula 20 (KR – 20)

A

Essentially the same as Cronbach’s Alpha, but only for dichotomous scales.

  • Measures internal reliability for measures with dichotomous choices (i.e., 2 choices Yes/No).
  • Usually a figure of +0.7 or greater indicates acceptable internal reliability.
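The same idea, sketched in Python for 0/1 items using the standard KR-20 formula (hypothetical data; the exact variance convention can differ between textbooks).

import numpy as np

def kr20(items):
    # items: participants x items matrix of 0/1 (no/yes, wrong/right) answers
    k = items.shape[1]
    p = items.mean(axis=0)                    # proportion answering "1" on each item
    q = 1 - p
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

data = np.array([[1, 1, 1, 0],
                 [1, 0, 1, 1],
                 [0, 0, 0, 0],
                 [1, 1, 1, 1],
                 [0, 1, 0, 0]])

print(round(kr20(data), 2))   # +0.7 or greater = acceptable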
29
Q

Which test of internal reliability involves the creation of two different versions of a questionnaire?

a) Split-half reliability
b) Cronbach’s alpha
c) Parallel forms reliability
d) KR-20

A

c) Parallel forms reliability

30
Q

External reliability vs internal reliability? Difference?

A

Internal reliability assesses the consistency of results across items within a test. External reliability refers to the extent to which a measure varies from one use to another.

31
Q

What is a measure of external reliability. Describe it.

A

Test-retest reliability: measures the stability of a test over time.

  • Perform the same survey, with the same respondents, at different points in time.
  • The closer the results, the greater the test- retest reliability of the survey.
  • The correlation coefficient between the two sets of responses is often used as a quantitative measure of the test-retest reliability.
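A minimal Python sketch of the test-retest idea: the same (hypothetical) respondents' totals at two time points, correlated with Pearson's r.

from scipy.stats import pearsonr

time1 = [14, 22, 9, 30, 17, 25, 12, 20]   # scores at the first administration
time2 = [15, 21, 11, 28, 16, 27, 10, 19]  # same people, some time later

r, p = pearsonr(time1, time2)
print(f"Test-retest r = {r:.2f}")          # closer to 1 = more stable over time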
32
Q

Problems with test-retest reliability?

A

practice effect

33
Q

What is INTER-rater reliability?

Note: this doesn't seem to be classified as external or internal in the lecture (but in the textbook it appears to be external).

A

Inter-rater reliability determines the extent to which two or more raters obtain the same result when coding the same response.

Cohen’s Kappa: larger numbers indicate better reliability, used when there are two raters.
Fleiss’ Kappa: an adaptation which works for any fixed number of raters.
NOTE: Measures agreement, not accuracy. This is an important distinction.
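A short sketch using scikit-learn's cohen_kappa_score on hypothetical codes from two raters (remember: it reflects agreement, not accuracy).

from sklearn.metrics import cohen_kappa_score

# Two raters coding the same six open-ended answers
rater_1 = ["vindictive", "neutral", "vindictive", "neutral", "vindictive", "neutral"]
rater_2 = ["vindictive", "neutral", "vindictive", "vindictive", "vindictive", "neutral"]

print(round(cohen_kappa_score(rater_1, rater_2), 2))   # larger = better agreement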

34
Q

What is INTRA-rater reliability?

what is a drawback?

A

The same assessment is completed by the same rater on two or more occasions.
These different ratings are then compared, generally by means of correlation.
Problem: Since the same individual is completing both assessments, the rater’s subsequent ratings are contaminated by knowledge of earlier ratings.

35
Q

What are some sources of unreliability in a survey?

A
  • Guessing
  • Ambiguous items
  • Test length
  • Instructions
  • Temperature, illness
  • Item order effects
  • Response rate
  • Social desirability
36
Q

Which measure of reliability measures stability over time?

a) Split-half reliability
b) Cronbach’s Alpha
c) Test-retest reliability
d) Cohen’s Kappa

A

c) Test-retest reliability

37
Q

what is validity?

A

the degree to which the measurement process measures the variable that it claims to measure

38
Q

Name 5 types of validity

A
• Faith
• Face
• Content
• Construct
     - Convergent
     - Discriminant 
• Predictive
39
Q

what is FAITH validity?

A

Faith Validity is the least defensible type of validity but the most difficult to influence. It is simply a conviction, a belief of blind faith that a selection test is valid. There is no empirical evidence and, what is more, none is wanted.

40
Q

What is FACE validity?

A

Again, the least scientific type of validity, along with faith validity.
The superficial appearance, or face value, of a measurement procedure, e.g. asking yourself: does the measurement technique look like it measures the variable we want to measure?

High face validity: it is obvious what is being tested through the measure (problem: participants may adjust their answers to appear socially desirable).
Low face validity: ambiguous.

In relation to the academic vindictiveness scale:
Find experts in academic vindictiveness, and ask them to judge whether the questionnaire represents a good measure of that construct.

41
Q

What is content validity?

A

The extent to which a measure represents all facets of the phenomena being measured.

So, in the case of academic vindictiveness:
There might be different types: 
Academic vindictive behaviours 
Academic vindictive attitudes 
Academic vindictive feelings

The measure would need to cover all three of these facets, for instance.

42
Q

What is construct validity? What are the two components that make up this type of validity?

A

Construct v: Seeks to establish a clear relationship between the construct at a theoretical level and the measure that has been developed.

TWO SUBTYPES:
Convergent validity:
That the measure shows associations with measures that it should be related to, e.g., academic vindictiveness should be related to other aspects of vindictiveness; such as a tendency to seek revenge, or spitefulness.

Discriminant validity:
That the measure is NOT related to things that it should not be related to. If they don't correlate strongly with each other, then that is good in this instance.

e.g. Academic vindictiveness (revenge) should be measuring something different from the Five Factor model of personality, so should not correlate highly with extraversion, neuroticism, agreeableness, openness and conscientiousness.
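A hedged Python sketch (made-up scale totals) of how convergent and discriminant validity are often checked: correlate the new scale with a related measure (should be high) and with an unrelated one (should be low).

import numpy as np

academic_vindictiveness = np.array([10, 25, 18, 30, 12, 22, 15, 28])
spitefulness            = np.array([12, 27, 16, 29, 14, 20, 13, 30])   # related construct
extraversion            = np.array([30, 28, 15, 32, 22, 14, 35, 20])   # unrelated construct

print("Convergent r:",
      round(np.corrcoef(academic_vindictiveness, spitefulness)[0, 1], 2))   # expect high
print("Discriminant r:",
      round(np.corrcoef(academic_vindictiveness, extraversion)[0, 1], 2))   # expect near zero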

43
Q

What is predictive validity?

A

Assesses whether a measure can accurately predict future behaviour.

Scores on the academic vindictiveness scale should be able to predict people acting in an academically vindictive way in the future:
Not sharing notes
Not helping other people revise

44
Q

Fill in sentence:

“Reliability is a necessary but …. …………. condition for validity.”

A

not sufficient

45
Q

Distinguish once more for me between reliability and validity.

A

Reliability:
A questionnaire is reliable if all of the questions in your test consistently measure the same underlying concept, and if this remains stable across repeated administrations of the test.

Validity:
A test is valid if it is actually measuring what you intend it to measure.

46
Q

Which measure of validity states that a test should represent all facets of the phenomena being measured?

a) Face validity
b) Predictive validity
c) Content validity
d) Faith validity

A

Content validity

47
Q

Tell me what these scales do:

Nominal
Ordinal
Interval
Ratio

A

Nominal: tells us only that difference exists

Ordinal: tells us the direction of the difference (which is more and which is less)

Interval: can determine the direction and the magnitude of the difference.

Ratio: tells us the direction, magnitude and ratio of the difference.

(Remember this order; it will help you. As we go down the list, each scale becomes more informative and tells us more.) Mnemonic: NOIR (French for “black”).

48
Q

Non-response bias?

A random question I'm throwing in from the textbook.

A

The idea that you may only be testing the motivated people who respond, rather than the less motivated people who don't respond to surveys.