Questionnaire Design Flashcards

1
Q

What type of research are questionnaires crucial for

A

Individual differences
Intelligence tests
Attitudes

2
Q

What ethics need to be met with questionnaires

A

Equal opportunities and cultural biases

3
Q

What do the questionnaire's measures need to be

A

Reliable and valid

4
Q

What is the process of creating your questionnaire

A
Question formats
Writing your questions
Clarity of questions
Avoid leading questions
Reverse wording
Response formats
Clear instructions
5
Q

Define an open format question

A

Asks for some written detail, but has no predetermined set of responses, e.g.,

“Tell us about the occasions when you have been academically vindictive.”

6
Q

What are the advantages of open format questions

A

Leads to more qualitative data

7
Q

What are the disadvantages of open format questions

A

Time consuming to analyse

8
Q

Define closed format questions

A

Short questions or statements followed by a number of options

9
Q

What are the three sources you could base the writing of your questions on

A

Theoretical literature
Experts
Colleagues

10
Q

What could theoretical literature help with when writing questions

A

Ideas that appear in the theoretical literature should be used as a basis

11
Q

What could experts help with when writing questions

A

Recruit experts in the area to suggest items

12
Q

What can colleagues help with

A

Can help you generate more items

13
Q

Why is clarity of questions important

A

If a question is unclear, respondents may concentrate on opportunity, resources, and ability to different extents.

14
Q

What is the problem with leading questions

A

They lead respondents in a particular direction, e.g. by potentially excusing the behaviour

15
Q

Why is reverse wording used

A

To ensure people are reading the questions properly

16
Q

What type of scales are Yes/No, True/False questions

A

Dichotomous scales

17
Q

What type of scale are “I usually don’t do this at all
I usually do this a little bit
I usually do this a medium amount
I usually do this a lot” questions

A

Frequency of behaviour

18
Q

Does strongly agree-strongly disagree count as a response format

A

YES

19
Q

Can numbers be used with a statement as a type of response format

A

Yes, numerical scales

20
Q

Are instructions important

A

YES

21
Q

How do researchers draw participants' attention to the key part of the instructions

A

Underlining

22
Q
Which of the following is a dichotomous scale?
Strongly Disagree / Disagree / Agree
True/False
I usually don't do this at all (1 2 3)
A

True/false

23
Q

In the classical theory of error in measurement what does the observed score equal

A

The true score + error
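The decomposition on this card can be sketched as a small simulation (hypothetical numbers; Python used only for illustration): repeated measurements scatter around the true score, and over many measurements the errors average out.

```python
import random

# Classical test theory: observed score = true score + random error.
# Hypothetical illustration: simulate 1,000 measurements of a person
# whose (unknown) true score is 50, with error drawn from N(0, 5).
random.seed(0)
true_score = 50
observed = [true_score + random.gauss(0, 5) for _ in range(1000)]

# Over many measurements the errors cancel, so the mean observed
# score converges on the true score.
mean_observed = sum(observed) / len(observed)
```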

24
Q

Where does the error value in the classical theory of measurement come from

A

Standard error of the measurement

25
Q

How does the classical theory help with measurement

A

It applies universally across items

26
Q

Which items correlate to the true score

A

All items correlate to some extent

27
Q

What is reliability related to

A

The average correlation between items, and the length of the test

28
Q

What are the 4 types of reliability

A

Internal
External
Inter-rater
Intra-rater

29
Q

How can internal reliability be measured

A

Split-half reliability

Parallel forms

Cronbach’s Alpha

KR-20

30
Q

How can external reliability be measured

A

Test-retest

31
Q

How can inter-rater reliability be tested

A

Kappa

32
Q

How does split-half reliability measure of internal reliability work

A

Split the items into two halves (e.g. odds vs. evens, or by random selection), then correlate the total scores for each half
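A minimal sketch of that procedure (hypothetical data; `pearson` is a hand-rolled Pearson correlation, not a library call):

```python
def pearson(x, y):
    # Pearson correlation between two equal-length score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half(responses):
    # responses: one list of item scores per participant.
    # Split the items into odd- vs even-numbered halves and correlate
    # participants' total scores on the two halves.
    odd_totals = [sum(r[0::2]) for r in responses]
    even_totals = [sum(r[1::2]) for r in responses]
    return pearson(odd_totals, even_totals)

# Hypothetical six-item questionnaire, five participants:
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 1, 2, 1],
]
r = split_half(responses)  # high for internally consistent data
```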

33
Q

What level of correlation in split-half reliability indicates reliability

A

.80

34
Q

How does parallel forms reliability measure of internal reliability work

A

Create a large pool of items
Randomly divide
Administer 2 tests to same participants
Calculate the correlation between the two forms

35
Q

What is the problem with parallel forms

A

Difficult to generate the large number of items required

36
Q

How is Cronbach’s alpha measured

A

The average of all the possible split-half estimates
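In practice alpha is computed directly from item and total variances rather than by enumerating splits. A sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of totals), with hypothetical data:

```python
def pvariance(xs):
    # Population variance of a list of scores.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(responses):
    # responses: one list of item scores per participant.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = len(responses[0])
    item_vars = sum(pvariance(list(col)) for col in zip(*responses))
    total_var = pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: each participant answers all three items identically,
# so the scale is perfectly consistent and alpha comes out at 1.0.
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
```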

37
Q

Up to what value does Cronbach's alpha range

A

Values up to +1.00

38
Q

What value does Cronbach's alpha usually need to reach

A

+0.70

39
Q

What indicates good reliability in Cronbach's alpha

A

The greater the figure, the better the reliability

40
Q

What does Kuder-Richardson Formula 20 (KR – 20) measure

A

internal reliability for measures with dichotomous choices (i.e., 2 choices Yes/No).
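A sketch of the KR-20 formula, k/(k-1) * (1 - sum of p*q / variance of totals), where p is the proportion answering an item with 1 (Yes); the data are hypothetical:

```python
def kr20(responses):
    # responses: one list of 0/1 (No/Yes) item scores per participant.
    n = len(responses)
    k = len(responses[0])
    # Sum of p*q over items, where p = proportion scoring 1 on the item.
    pq = 0.0
    for col in zip(*responses):
        p = sum(col) / n
        pq += p * (1 - p)
    # Population variance of the participants' total scores.
    totals = [sum(r) for r in responses]
    m = sum(totals) / n
    total_var = sum((t - m) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq / total_var)

# Hypothetical perfectly consistent answers: every item agrees within
# each participant, so KR-20 comes out at 1.0.
score = kr20([[1, 1, 1], [0, 0, 0]])
```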

41
Q

What range of values does Kuder-Richardson produce

A

Up to +1.00

42
Q

What figure indicates reliability with Kuder-Richardson

A

0.70

43
Q

Which test of internal reliability involves the creation of two different versions of a questionnaire?

A

Parallel forms of reliability

44
Q

How does test-retest reliability measure external reliability

A

Perform the same survey, with the same respondents, at different points in time.

45
Q

What do the results of test-retest show

A

The closer the results, the greater the test-retest reliability of the survey.

46
Q

What is used as a measure of test-retest reliability (external reliability)

A

The correlation coefficient between the two sets of responses is often used
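As a sketch (hypothetical scores; `pearson` is a hand-rolled correlation, not a library call), the two administrations are simply correlated:

```python
def pearson(x, y):
    # Pearson correlation between two equal-length score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical total scores from the same five respondents,
# tested twice a few weeks apart:
time1 = [12, 18, 25, 30, 22]
time2 = [14, 17, 24, 31, 20]
r = pearson(time1, time2)  # close to 1 => good test-retest reliability
```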

47
Q

Define inter-rater reliability

A

Inter-rater reliability determines the extent to which two or more raters obtain the same result when coding the same response.

48
Q

What measures can be used for inter-rater reliability

A

Cohen’s Kappa: Values up to +1.00, larger numbers indicate better reliability, used when there are two raters.
Fleiss’ Kappa: An adaptation which works for any fixed number of raters.
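A minimal sketch of Cohen's kappa for two raters (hypothetical codings; kappa = (p_o - p_e) / (1 - p_e), observed agreement corrected for chance agreement):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # rater_a, rater_b: the category each rater assigned to the same items.
    n = len(rater_a)
    # Observed agreement: proportion of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected matches given each rater's category rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of four responses by two raters:
kappa = cohens_kappa(["yes", "yes", "no", "no"],
                     ["yes", "no", "no", "no"])  # 0.5 here
```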

49
Q

What does the inter-rater measure

A

Agreement, not accuracy

50
Q

What does intra-rater reliability measure

A

The same assessment is completed by the same rater on two or more occasions

51
Q

How does intra-rater reliability work

A

The different ratings are then compared, generally by means of correlation

52
Q

What is a problem of intra-rater reliability

A

Since the same individual is completing both assessments, the rater’s subsequent ratings are contaminated by knowledge of earlier ratings.

53
Q

What are some of the sources of unreliability

A
Guessing
Ambiguous items
Test length
Instructions
Temperature, illness
Item order effects
Response rate
Social desirability
54
Q

Which measure of reliability measures stability over time?

A

Test-retest reliability

55
Q

Which factors can impact validity

A
Faith
Face
Content
Construct
Convergent
Discriminant
Predictive
56
Q

Define faith validity

A

Simply a belief in the validity of an instrument without any objective data to back it up; the evidence is not even wanted

57
Q

Define face validity

A

If something has face validity, it looks like a test that measures the concept it was designed to measure.

58
Q

Define content validity

A

The extent to which a measure represents all facets of the phenomena being measured.

59
Q

Define construct validity

A

Seeks to establish a clear relationship between the construct at a theoretical level and the measure that has been developed.

60
Q

What is convergent validity

A

That the measure shows associations with measures that it should be related to, e.g., academic vindictiveness should be related to other aspects of vindictiveness; such as a tendency to seek revenge, or spitefulness.

61
Q

What is discriminant validity

A

That the measure is NOT related to things that it should not be related to.

62
Q

Define predictive validity

A

Assesses whether a measure can accurately predict future behaviour.

63
Q

Which measure of validity states that a test should represent all facets of the phenomena being measured?

A

Content validity

64
Q

How is a questionnaire reliable

A

A questionnaire is reliable if all of the questions in your test are consistently measuring the same underlying concept, and that this remains stable over repeated times that the test is administered.

65
Q

What makes a valid test

A

A test is valid if it is actually measuring what you intend it to measure.

66
Q

What is necessary but not sufficient for validity

A

Reliability