week 9 surveys and scales Flashcards

1
Q

construct

A

An abstract concept and the starting point for research, e.g. self-esteem, intelligence. First define the construct, then try to measure it.

2
Q

survey vs scale

A

A survey is a set of questions; a scale attaches a measure to the answers.

3
Q

variable

A

Individuals vary in their level of a variable. Variables are concrete representations of a construct.

4
Q

operational definitions

A

Specify how we measure a variable. A single operational definition is usually insufficient to fully capture the essence of a construct; a few are usually needed.

5
Q

defining theoretical constructs

A

Use a combination of the following techniques:
brainstorming, nomological net, interviews, theme analysis, theories, literature reviews, direct observation, talking with experts.

6
Q

advantages and disadvantages of self-report measures

A

Advantages: direct; easy and cheap to administer.
Disadvantages: distortion in answers.

7
Q

reliability

A

Refers to the consistency of a test/scale/score: its accuracy over time (test-retest reliability), its accuracy across content (internal consistency), and agreement between different raters, who should give almost the same score (inter-rater reliability).
(3 components: inter-rater reliability, internal consistency and test-retest reliability.)

8
Q

validity

A

Examines whether a test measures what it claims to measure and how well it does so. 3 components to validity:
Face: does a test “look right”?
Criterion: is there agreement between the test score and a criterion?
Predictive: can the test score be used to predict and make an accurate decision?

9
Q

methods of administering surveys (advantages and disadvantages)

A
  1. Mail: convenient, easy, less time-restricted; but expensive, low response rate, can’t clarify.
  2. Internet surveys: convenient, easy, cheap; but low response rate, can’t clarify.
  3. Telephone surveys: convenient, can clarify; but more time-consuming, low response rate, interviewer bias.
  4. In-person surveys: high completion rate, can clarify; but time-consuming, interviewer bias.
10
Q

nomological net

A

Or nomological network = “lawful network”. To establish construct validity, one must have a network that includes theoretical and empirical frameworks, defines how the construct will be measured, and shows how the frameworks are interrelated.

11
Q

Likert Scale

A

A special case of a restricted question. Respondents indicate how much they agree/disagree with a statement by choosing from a limited set of options, e.g. 1 = always, 5 = never. The answers to all related questions are then summed or averaged. Easy to understand and complete, and easy to analyse statistically. Typically 5-10 categories on the rating scale.
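
The scoring rule above (summing or averaging all related item answers) can be sketched in Python; the function name and the 5-point example are illustrative, not from the source:

```python
def score_likert(responses):
    """Score one respondent's answers to a set of related Likert items
    by both summing and averaging them -- the two usual ways of
    combining related items into a single score.

    `responses` is a list of integer ratings (e.g. 1-5), one per item.
    """
    return sum(responses), sum(responses) / len(responses)

# One respondent's ratings on four related 5-point items
total, average = score_likert([4, 5, 3, 4])  # total=16, average=4.0
```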

12
Q

response set

A

A response set occurs when a participant gives the same answer to every question (e.g. selecting option 4 for each item). It generally indicates a disinclination to engage.
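
A minimal sketch of screening for this pattern (the function name is illustrative):

```python
def is_response_set(responses):
    """Flag a participant who chose the same option for every item,
    e.g. selecting 4 throughout -- a sign of disengagement."""
    return len(set(responses)) == 1

is_response_set([4, 4, 4, 4])  # True
is_response_set([4, 2, 5, 3])  # False
```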

13
Q

six types of validity

A

Face, Criterion and Predictive.
Also:
Construct validity: the measure shows the expected correlations with other variables.
Concurrent validity: a new measure matches an established measure.
Convergent validity: correlation between 2 different ways of measuring the same variable.
Divergent validity: no correlation when different constructs are measured by the same method (i.e. being different, they are expected not to correlate).

14
Q

error

A

Possible sources of error in test scores:
observer error
environmental changes
participant changes

15
Q

non-response bias

A

Bias that arises when people who do not respond to a survey differ systematically from those who do, so the responding sample no longer represents the population.

16
Q

reverse coding

A

I.e. in a positive statement, “agree” = agree; but in a negatively worded statement, “agree” actually means “disagree”, and it must be coded accordingly.
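
On a numeric rating scale this amounts to mirroring the score around the scale’s midpoint. A sketch, assuming a scale running from `scale_min` to `scale_max` (names illustrative):

```python
def reverse_code(response, scale_max, scale_min=1):
    """Reverse-code a rating on a negatively worded item so that high
    scores mean the same thing on every item: on a 1-5 scale,
    5 -> 1, 4 -> 2, 3 -> 3, and so on."""
    return scale_max + scale_min - response

reverse_code(5, scale_max=5)  # -> 1
reverse_code(2, scale_max=5)  # -> 4
```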

17
Q

Cronbach’s alpha

A

The most commonly used measure of internal consistency. Ranges from 0 to 1 (1 = perfectly consistent).
A negative Cronbach’s alpha can occur, BUT it means there is a problem with the data (either a small sample size or a failure to reverse-code).
Ideally Cronbach’s alpha should be 0.7 or greater.
Measures how closely related the items are as a group.
(Beware: even a poorly functioning scale may, if it has many items, give a misleadingly reassuring Cronbach’s alpha. Check whether the individual items also correlate.)
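
The standard formula behind this card can be sketched with the standard-library `statistics` module; the data are made up for illustration:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k items.

    `items` is a list of k columns, each holding one item's scores for
    every respondent.  alpha = k/(k-1) * (1 - sum of item variances /
    variance of the respondents' total scores).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # total score per respondent
    item_variances = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_variances / pvariance(totals))

# Three closely related items, four respondents -> alpha close to 1
alpha = cronbach_alpha([[4, 5, 2, 3], [4, 4, 2, 3], [5, 5, 1, 3]])
```

Forgetting to reverse-code a negatively worded item before computing alpha is one way the negative values mentioned in the card can arise.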

18
Q

Split-half reliability

A

Another measure of internal consistency (consistency among items).
The test is divided into halves, and scores on the two halves are assessed for correlation. Only works for content sampling, not time sampling. There may be biases or faults depending on how the “split” is performed.

19
Q

sensitivity

A

Can the scale items differentiate between different attitudes, characteristics, levels, etc.? If yes, the scale is sensitive.

20
Q

item bias

A

A statement/question is biased because of its wording, e.g. it is emotional, too personal, or inaccessible to anyone lacking the specific knowledge it refers to.

21
Q

infrequency scale

A

Provides a check for random responding: as respondents get deeper into the survey, throw one of these items in to test whether they are paying attention.
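
A sketch of how such a check might be scored (the example item and names are made up):

```python
def passed_infrequency_check(response, expected):
    """An infrequency item has one obviously correct answer, e.g.
    'strongly disagree' (1) to 'I have visited every country on Earth'.
    Any other answer suggests the respondent is answering at random
    or not paying attention."""
    return response == expected

passed_infrequency_check(1, expected=1)  # True: attentive
passed_infrequency_check(4, expected=1)  # False: flag this respondent
```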