Questionnaire Design Flashcards

1
Q

What type of research are questionnaires crucial for

A

Individual differences
Intelligence tests
Attitudes

2
Q

What ethical considerations need to be met with questionnaires

A

Equal opportunities and the avoidance of cultural bias

3
Q

What do the questionnaire's measures need to be

A

Reliable and valid

4
Q

What is the process of creating your questionnaire

A
Question formats
Writing your questions
Clarity of questions
Avoid leading questions
Reverse wording
Response formats
Clear instructions
5
Q

Define an open format question

A

Asks for some written detail but has no predetermined set of responses, e.g.,

“Tell us about the occasions when you have been academically vindictive.”

6
Q

What are the advantages of open format questions

A

Leads to more qualitative data

7
Q

What are the disadvantages of open format questions

A

Time consuming to analyse

8
Q

Define closed format questions

A

Short questions or statements followed by a number of options

9
Q

What are the three sources you could base the writing of your questions on

A

Theoretical literature
Experts
Colleagues

10
Q

What could theoretical literature help with when writing questions

A

Ideas that appear in the theoretical literature should be used as a basis

11
Q

What could experts help with when writing questions

A

Recruit experts in the area to suggest items

12
Q

What can colleagues help with

A

Can help you generate more items

13
Q

Why is clarity of questions important

A

Without clear wording, respondents may concentrate on different aspects of the question (e.g., opportunity, resources, and ability) to different extents.

14
Q

What is the problem with leading questions

A

They lead respondents in a particular direction, for example by potentially excusing the behaviour

15
Q

Why is reverse wording used

A

To ensure people are reading the questions properly

16
Q

What type of scales are Yes/No, True/False questions

A

Dichotomous scales

17
Q

What type of scale are “I usually don’t do this at all
I usually do this a little bit
I usually do this a medium amount
I usually do this a lot” questions

A

Frequency of behaviour

18
Q

Does strongly agree-strongly disagree count as a response format

A

YES

19
Q

Can numbers be used with a statement as a type of response format

A

Yes, numerical scales

20
Q

Are instructions important

A

YES

21
Q

How do researchers draw participants' attention to the key part of the instructions

A

Underlining

22
Q
Which of the following is a dichotomous scale
Strongly Disagree / Disagree / Agree
True/False
I usually don’t do this at all: 1  2  3
A

True/false

23
Q

In the classical theory of error in measurement, what does the observed score equal

A

The true score + error

24
Q

Where does the error value in the classical theory of measurement come from

A

The standard error of measurement

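The flashcards give no worked example, so here is a minimal numpy sketch (with made-up numbers) of the two cards above: each observed score is simulated as a true score plus random error, and the spread of that error corresponds to the standard error of measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical test theory: observed score = true score + error.
true_scores = rng.normal(loc=50, scale=10, size=1000)  # hypothetical true scores
sem = 4.0                                              # assumed standard error of measurement
errors = rng.normal(loc=0, scale=sem, size=1000)       # random error of measurement
observed_scores = true_scores + errors

print(np.round(observed_scores[:3], 1))  # each value is a true score plus error
print(round(errors.std(ddof=1), 2))      # close to 4.0, the SEM assumed above
```
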
25
How does the classical theory help with measurement
It applies universally across items
26
Which items correlate with the true score
All items correlate with it to some extent
27
What is reliability related to
The average correlation between items, and the test length
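
The card above does not name a formula, but the standard expression linking the average inter-item correlation and the test length to reliability is the Spearman-Brown formula (equivalently, standardised alpha); a small sketch with hypothetical values:

```python
def spearman_brown(avg_inter_item_r: float, n_items: int) -> float:
    """Predicted reliability from the average inter-item correlation
    and the number of items (Spearman-Brown formula)."""
    return (n_items * avg_inter_item_r) / (1 + (n_items - 1) * avg_inter_item_r)

# Hypothetical values: the same average correlation gives higher
# reliability as the test gets longer.
print(round(spearman_brown(0.3, 5), 2))   # ~0.68
print(round(spearman_brown(0.3, 20), 2))  # ~0.90
```
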
28
What are the 4 types of reliability
Internal, external, inter-rater, intra-rater
29
How can internal reliability be measured
Split-half reliability, parallel forms, Cronbach's alpha, KR-20
30
How can external reliability be measured
Test-retest
31
How can inter-rater reliability be tested
Kappa
32
How does the split-half measure of internal reliability work
Split the items into two halves (e.g., odd v even items, or by random selection), then correlate the total scores for each half
33
What level of correlation in a split-half analysis indicates reliability
.80
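
A minimal sketch of the split-half procedure on the two cards above, assuming a hypothetical numpy matrix of item scores (rows are respondents, columns are items):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: 100 respondents answering 10 items on a 1-5 scale.
scores = rng.integers(1, 6, size=(100, 10))

odd_total = scores[:, 0::2].sum(axis=1)   # total for items 1, 3, 5, ...
even_total = scores[:, 1::2].sum(axis=1)  # total for items 2, 4, 6, ...

split_half_r = np.corrcoef(odd_total, even_total)[0, 1]
print(round(split_half_r, 2))
# A correlation of around .80 or above would be taken as indicating reliability;
# purely random data like this will come out much lower.
```
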
34
How does the parallel-forms measure of internal reliability work
Create a large pool of items, randomly divide it into two forms, administer both to the same participants, and calculate the correlation between the two forms
35
What is the problem with parallel forms
Difficult to generate the large number of items required
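
A short sketch of the parallel-forms steps, again with hypothetical data: randomly divide an item pool into two forms, score both for the same participants, and correlate the totals.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical item pool: 100 participants answering 40 items on a 1-5 scale.
pool = rng.integers(1, 6, size=(100, 40))

# Randomly divide the pool into two parallel forms of 20 items each.
order = rng.permutation(40)
form_a = pool[:, order[:20]].sum(axis=1)
form_b = pool[:, order[20:]].sum(axis=1)

# The correlation between the two forms is the parallel-forms estimate.
print(round(np.corrcoef(form_a, form_b)[0, 1], 2))
```
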
36
How is Cronbach's alpha measured
Average of all the possible split half estimates
37
What range of values does Cronbach's alpha take
Values up to +1.00
38
What figure does Cronbach's alpha usually need to reach
+0.70
39
What indicates good reliability in Cronbach's alpha
The greater the figure, the better the reliability
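
The cards above describe alpha conceptually as the average of all possible split-half estimates; in practice it is usually computed from the item and total-score variances. A sketch with hypothetical data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(3)
# Hypothetical data: 8 items that share a common factor, so alpha should be high.
common = rng.normal(size=(200, 1))
items = common + rng.normal(scale=0.8, size=(200, 8))

alpha = cronbach_alpha(items)
print(round(alpha, 2))  # values run up to +1.00
print(alpha >= 0.70)    # the usual benchmark for acceptable reliability
```
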
40
What does Kuder-Richardson Formula 20 (KR-20) measure
Internal reliability for measures with dichotomous choices (i.e., two choices, such as Yes/No).
41
What range of values does Kuder-Richardson produce
Values up to 1.0
42
What figure indicates reliability with Kuder-Richardson
0.70
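
A sketch of KR-20 under the same kind of assumptions, for a hypothetical matrix of Yes/No answers coded 1/0 (p is the proportion answering Yes on each item, q = 1 - p):

```python
import numpy as np

def kr20(binary_items: np.ndarray) -> float:
    """Kuder-Richardson Formula 20 for a respondents-by-items matrix of 0/1 answers."""
    k = binary_items.shape[1]
    p = binary_items.mean(axis=0)  # proportion answering Yes per item
    q = 1 - p
    total_variance = binary_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

rng = np.random.default_rng(4)
# Hypothetical dichotomous data: a person-level tendency drives all 10 Yes/No items.
tendency = rng.normal(size=(300, 1))
binary = (tendency + rng.normal(scale=1.0, size=(300, 10)) > 0).astype(int)

print(round(kr20(binary), 2))  # values run up to 1.0; 0.70 or above suggests reliability
```
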
43
Which test of internal reliability involves the creation of two different versions of a questionnaire?
Parallel forms of reliability
44
How does test-retest reliability measure external reliability
Perform the same survey, with the same respondents, at different points in time.
45
What do the results of test-retest show
The closer the results, the greater the test-retest reliability of the survey.
46
What is used as the measure of test-retest (external) reliability
The correlation coefficient between the two sets of responses is often used
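
A minimal sketch of test-retest reliability, assuming hypothetical total scores for the same respondents at two points in time:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical total questionnaire scores for the same 80 respondents,
# collected twice (time 2 = time 1 plus some change/error).
time1 = rng.normal(loc=30, scale=6, size=80)
time2 = time1 + rng.normal(scale=3, size=80)

test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(round(test_retest_r, 2))  # the closer to 1, the more stable the measure over time
```
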
47
Define inter-rater reliability
Inter-rater reliability determines the extent to which two or more raters obtain the same result when coding the same response.
48
What measures can be used for inter-rater reliability
Cohen’s Kappa: values up to +1.00, larger numbers indicate better reliability; used when there are two raters.
Fleiss’ Kappa: an adaptation which works for any fixed number of raters.
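
A sketch of Cohen's kappa computed directly from its definition (observed agreement corrected for chance agreement), using made-up codings from two raters:

```python
import numpy as np

def cohens_kappa(rater1: np.ndarray, rater2: np.ndarray) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    categories = np.union1d(rater1, rater2)
    p_observed = np.mean(rater1 == rater2)
    # Chance agreement: product of each rater's marginal proportions, summed over categories.
    p_chance = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codings of 12 open-ended responses by two raters.
rater1 = np.array(["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "no", "no", "yes"])
rater2 = np.array(["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "no", "no", "yes"])

print(round(cohens_kappa(rater1, rater2), 2))  # values run up to +1.00; higher = better agreement
```
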
49
What does inter-rater reliability measure
Agreement, not accuracy
50
What does intra-rater reliability measure
The same assessment is completed by the same rater on two or more occasions
51
How does intra-rater reliability work
The different ratings are then compared, generally by means of correlation
52
What is a problem of intra-rater reliability
Since the same individual is completing both assessments, the rater's subsequent ratings are contaminated by knowledge of earlier ratings.
53
What are some of the sources of unreliability
Guessing
Ambiguous items
Test length
Instructions
Temperature, illness
Item order effects
Response rate
Social desirability
54
Which measure of reliability measures stability over time?
Test-retest reliability
55
Which factors can impact validity
Faith
Face
Content
Construct
Convergent
Discriminant
Predictive
56
Define faith validity
Simply a belief in the validity of an instrument without any objective data to back it up, and no supporting evidence is wanted
57
Define face validity
If something has face validity, it looks like a test that measures the concept it was designed to measure.
58
Define content validity
The extent to which a measure represents all facets of the phenomena being measured.
59
Define construct validity
Seeks to establish a clear relationship between the construct at a theoretical level and the measure that has been developed.
60
What is convergent validity
That the measure shows associations with measures that it should be related to, e.g., academic vindictiveness should be related to other aspects of vindictiveness; such as a tendency to seek revenge, or spitefulness.
61
What is discriminant validity
That the measure is NOT related to things that it should not be related to.
62
Define predictive validity
Assesses whether a measure can accurately predict future behaviour.
63
Which measure of validity states that a test should represent all facets of the phenomena being measured?
Content validity
64
How is a questionnaire reliable
A questionnaire is reliable if all of the questions in your test are consistently measuring the same underlying concept, and that this remains stable over repeated times that the test is administered.
65
What makes a valid test
A test is valid if it is actually measuring what you intend it to measure.
66
What is necessary but not sufficient for validity
Reliability