lecture 4 Flashcards

1
Q

construct validity of surveys and polls

A

choosing question formats
writing well worded questions
encouraging accurate responses

2
Q

difference between likert scale and semantic differential format

A

likert scale presents a statement with labeled anchor words on the response options (e.g., strongly agree to strongly disagree)

semantic differential asks respondents to rate a target on a scale anchored by opposite adjectives or phrases (e.g., good vs. bad)

3
Q

pros/cons of open-ended questions

A

pros
- lots of rich, detailed info

cons
- responses can be time consuming to code and categorize

4
Q

how do negatively worded questions impact studies

A

they decrease construct validity by adding to respondents' cognitive load

5
Q

how to control for effects of question order

A

make different versions of the survey with the questions in different orders (counterbalancing)

6
Q

what are response sets and how do they impact study

A

a tendency to answer all questions in the same way, regardless of content

they weaken construct validity since people aren't saying what they really think

7
Q

two solutions to fence sitting

A

get rid of neutral answer options

forced choice format

8
Q

one way to avoid socially desirable responding

A

remind participants they are anonymous

9
Q

what type of claim is observational research commonly used to support?

A

frequency claims

10
Q

findings on study observing where infants and caregivers look

A

babies look at toys more
caregivers look at babies and toys equally

11
Q

findings on study observing families in the evening

A

emotional tone was slightly positive

kids were more likely to complain about how food tastes
parents were more likely to talk about how healthy food is

12
Q

observer bias

A

when observers see what they expect to see

13
Q

observer effects

A

when the observer's expectations actually change participants' behavior, so that participants come to confirm those expectations

14
Q

Describe the Clever Hans study

A

Clever Hans was a horse who could seemingly add, subtract, multiply, and divide at least as well as a fifth-grader. Hans was tutored in simple mathematics by his trainer and would tap his hoof to count, and the trainer was not the only person who could get Hans to do this type of math. Researchers suspected that Hans was picking up on subtle nonverbal cues from his questioners, so they had one person whisper the first part of the problem in Hans's ear and a second person whisper the second part. Neither questioner knew the correct answer because each knew only the part of the problem he whispered, and in fact, Hans could not answer these math questions. Yes, Hans was clever, but not in math: he was skilled at detecting subtle nonverbal cues (such as changes in breathing, changes in posture, and furrowed eyebrows).

15
Q

bright vs dull rats study

A

Psychology undergraduates were given five rats and told to see how long it took for the rats to learn to run a maze (Rosenthal & Fode, 1963). Each student was given a randomly selected group of rats. Half of the students were told their rats were bred to be “maze-bright” and half were told they were bred to be “maze-dull.” The rats were actually all genetically similar, but the “maze-bright” rats ran the maze faster each day with fewer mistakes, whereas the “maze-dull” rats did not improve their performance over several days of testing. The study demonstrated that observers' expectations can sometimes influence the behavior of those they're observing.

16
Q

how to control for observer bias and effects

A

masked design
- observers don't know which conditions participants are assigned to and are unaware of what the study is about

having a codebook

using multiple observers

17
Q

true or false: just because a measure is reliable, that doesn’t mean it’s valid.

A

true

18
Q

reactivity

A

when people change their behavior in some way when they know that someone else is watching them.

19
Q

solutions to reactivity

A

blend in
wait it out
measure the behaviour's results instead (like footsteps wearing down a museum floor)

20
Q

2 ways to have a biased sample

A

convenience sampling
self-selection (only using those who volunteer)

21
Q

best way to get a representative sample

A

probability sampling

22
Q

simple random sampling

A

obtained by putting every member’s name of your population of interest in a pool and then randomly selecting a predetermined number of names from the pool to include in your sample
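The "names in a pool" procedure can be sketched in Python; the roster and seed below are hypothetical stand-ins for a real population list.

```python
import random

# Hypothetical population of interest: a 100-person roster.
roster = [f"student_{i}" for i in range(1, 101)]

def simple_random_sample(population, n, seed=None):
    """Draw n members without replacement; every member has an
    equal chance of ending up in the sample."""
    rng = random.Random(seed)
    return rng.sample(population, n)

sample = simple_random_sample(roster, 10, seed=4)
```

Seeding the generator is only for reproducibility; a real draw would omit it.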

23
Q

Systematic sampling example

A

If I wanted to systematically sample this class, I could roll two dice. Let’s say one lands on five and the other lands on three. I would start at the fifth person in the class and then choose every third person until the sample reached the desired size.
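The dice-roll example above (start at the fifth person, then every third) maps directly onto list slicing; the roster here is a hypothetical class list.

```python
# Hypothetical class roster; dice rolls of 5 and 3 give start and step.
roster = [f"student_{i}" for i in range(1, 31)]

def systematic_sample(population, start, step, n):
    """Start at the `start`-th member (1-indexed), then take every
    `step`-th member until the sample reaches size n."""
    return population[start - 1::step][:n]

sample = systematic_sample(roster, start=5, step=3, n=5)
# -> ['student_5', 'student_8', 'student_11', 'student_14', 'student_17']
```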

24
Q

Cluster sampling + example

A

Clusters of participants within a population of interest are randomly selected, and then all individuals in each selected cluster are used.

For example, if a researcher wanted to randomly sample high school students in the state of Pennsylvania, he could start with a list of the 952 public high schools (clusters) in that state, randomly select 100 of those high schools, and then include every student from each of those 100 schools in the sample.
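A minimal sketch of the school example, using a small hypothetical dict of clusters in place of the real list of 952 schools.

```python
import random

# Hypothetical toy data: 5 "schools" (clusters) of 4 students each.
schools = {f"school_{s}": [f"s{s}_student_{i}" for i in range(4)]
           for s in range(5)}

def cluster_sample(clusters, k, seed=None):
    """Randomly select k clusters, then keep every individual in each."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), k)
    return [person for name in chosen for person in clusters[name]]

sample = cluster_sample(schools, k=2, seed=1)  # 2 whole schools
```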

25
Q

Multistage sampling + example

A

Two random samples are collected. Stage 1: A random sample of clusters is selected from your population of interest. Stage 2: From those selected clusters, a random sample of people is chosen.

For example, a researcher starts with a list of high schools (clusters) in the state and selects a random 100 of those schools. Then, instead of selecting all students at each school, she selects a random sample of students from each of the 100 selected schools.
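The two stages can be made explicit in code; the cluster sizes and counts below are hypothetical.

```python
import random

# Hypothetical clusters: 5 "schools" of 10 students each.
schools = {f"school_{s}": [f"s{s}_student_{i}" for i in range(10)]
           for s in range(5)}

def multistage_sample(clusters, k_clusters, n_per_cluster, seed=None):
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), k_clusters)  # stage 1: random clusters
    sample = []
    for name in chosen:
        # stage 2: a random sample of people within each chosen cluster
        sample += rng.sample(clusters[name], n_per_cluster)
    return sample

sample = multistage_sample(schools, k_clusters=2, n_per_cluster=3, seed=7)
```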

26
Q

Stratified random sampling + example

A

A multistage technique in which the researcher selects specific demographic categories (such as race or gender) and then randomly selects individuals from each of the categories. For example, a group of researchers might want to be sure their sample of 1,000 Canadians includes people of South Asian descent in the same proportion as in the Canadian population (which is 4%). Thus, they might have two categories (strata) in their population: South Asian Canadians and other Canadians.
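The proportional logic of the Canadian example can be sketched as follows; the strata lists here are hypothetical stand-ins for real population records.

```python
import random

# Hypothetical strata mirroring the 4% / 96% split in the example.
strata = {
    "south_asian": [f"sa_{i}" for i in range(400)],
    "other": [f"o_{i}" for i in range(9600)],
}
proportions = {"south_asian": 0.04, "other": 0.96}

def stratified_sample(strata, proportions, total, seed=None):
    """Randomly sample each stratum in proportion to its population share."""
    rng = random.Random(seed)
    sample = []
    for name, members in strata.items():
        sample += rng.sample(members, round(total * proportions[name]))
    return sample

sample = stratified_sample(strata, proportions, total=1000, seed=3)  # 40 + 960
```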

27
Q

Oversampling + example

A

Variation of stratified random sampling in which a researcher overrepresents one or more groups.

For example, perhaps a researcher wants to sample 1,000 people, making sure to include South Asians in the sample. Maybe the researcher's population of interest has a low percentage of South Asians (say, 4%). Because 40 individuals may not be enough to compute accurate statistics, the researcher decides that of the 1,000 people he samples, a full 100 will be sampled at random from the Canadian South Asian community.
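The only change from proportional stratified sampling is that the quotas are set by hand instead of by population share; everything below is a hypothetical sketch.

```python
import random

strata = {
    "south_asian": [f"sa_{i}" for i in range(400)],
    "other": [f"o_{i}" for i in range(9600)],
}
# Proportional sampling would yield only 40 South Asians in 1,000;
# oversampling inflates that stratum's quota to 100.
quotas = {"south_asian": 100, "other": 900}

def oversample(strata, quotas, seed=None):
    """Stratified draw with hand-set (disproportionate) quotas."""
    rng = random.Random(seed)
    sample = []
    for name, n in quotas.items():
        sample += rng.sample(strata[name], n)
    return sample

sample = oversample(strata, quotas, seed=9)
```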

28
Q

random sampling + how it impacts study

A
  • Creating a sample using some random method so that each member of the population of interest has an equal chance of being in the sample
  • This method increases external validity
29
Q

random assignment + how it impacts study

A
  • Used only in experimental designs to assign participants to groups at random (usually a treatment group and a comparison group)
  • This method increases internal validity
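A common way to implement random assignment is to shuffle the participant list and split it; the participant names below are hypothetical.

```python
import random

def random_assignment(participants, seed=None):
    """Shuffle, then split in half: treatment vs. comparison group."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

treatment, comparison = random_assignment(
    [f"p{i}" for i in range(20)], seed=2)
```
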
30
Q

list the types of biased/unrepresentative sampling methods

A

convenience sampling
snowball sampling
purposive sampling
quota sampling

31
Q

purposive sampling + example

A

used when you want to study certain kinds of people, so you only recruit those types of participants.

For example, if you wanted to recruit smokers, you might recruit participants from a tobacco store.

32
Q

snowball sampling + example

A

a variation on purposive sampling in which participants are asked to recommend other participants for the study.

For example, for a study on coping behaviors in people who have Crohn’s disease, a researcher might start with one or two participants who have the condition, and then ask them to recruit people from their support groups. Each of them might, in turn, recruit one or two more acquaintances, until the sample is large enough. Snowball sampling is unrepresentative because people are recruited via social networks, which are not random.

33
Q

quota sampling + example

A

similar to stratified random sampling; the researcher identifies subsets of the population and then sets a target number (i.e., a quota) for each category in the sample. Then she uses nonrandom sampling until the quotas are filled.

For example, you would like to have 20 college freshmen, 20 sophomores, 20 juniors, and 20 seniors in your sample. You know some people in each of these categories but not 20 of each, so you might use snowball sampling until you meet your quota of 20 in each subset.
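The "fill each quota nonrandomly" step can be sketched as taking people in whatever order they are encountered; the names and class years below are hypothetical.

```python
# Hypothetical stream of people in the (nonrandom) order encountered.
stream = [
    ("ana", "freshman"), ("ben", "sophomore"), ("cam", "freshman"),
    ("dee", "junior"), ("eli", "senior"), ("fay", "sophomore"),
    ("gus", "junior"), ("hal", "senior"), ("ida", "freshman"),
]
quotas = {"freshman": 2, "sophomore": 2, "junior": 2, "senior": 2}

def quota_sample(stream, quotas):
    """Take people as they come until each category's quota is filled."""
    filled = {cat: [] for cat in quotas}
    for person, cat in stream:
        if len(filled[cat]) < quotas[cat]:
            filled[cat].append(person)
    return filled

sample = quota_sample(stream, quotas)  # ida is skipped: freshman quota full
```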