Exam 4 Flashcards

1
Q

What is the difference between the control and treatment groups?

A

The control does not feel the manipulated effect while the treatment group does.

2
Q

How do we determine causation?

A
  1. Covariance: the variables change together (there is a difference between the two groups)
  2. Temporal precedence: the cause comes before the effect (the rainstorm occurs before the gloomy mood)
  3. No third variables: ruling out and controlling for other variables that could explain the relationship (internal validity)
3
Q

What is the difference between systematic and unsystematic variability?

A
  1. Systematic variability: purposefully created variability through the IV
    - Example: Half of the participants were tested on their mood when it was rainy (purposefully testing the effect of rain on mood).
  2. Unsystematic variability: unexplained, random variability in the IV groups
    - Example: Some participants happened to be tested when it was raining (a random rainstorm that becomes a confound within the experiment)
4
Q

What is a design confound?

A

An unwanted second variable that varies systematically with the IV and may influence the results.
- Example: Students who take the SAT on a rainy day may do worse than other students

5
Q

Define selection effects.

A

Effects found in a study that were due to a faulty group assignment procedure.
- Example: Conducting a political poll only outside a college campus and nowhere else.

6
Q

How can we avoid selection effects?

A
  1. Random assignment
  2. Matched-groups design: match participants on an attribute (IQ, age, gender), then randomly assign within each matched set
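Random assignment can be sketched in a few lines of code; this is a hypothetical illustration (the participant labels and group sizes are made up, not from the course):

```python
import random

# Hypothetical sketch: randomly assigning six participants to two groups.
# Random assignment gives everyone an equal chance of ending up in either
# condition, which guards against selection effects.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
random.shuffle(participants)          # randomize the order in place
half = len(participants) // 2
control = participants[:half]         # first half -> control group
treatment = participants[half:]       # second half -> treatment group
print("Control:", control)
print("Treatment:", treatment)
```

Each run produces a different split, but the groups are always equal in size and every participant lands in exactly one group.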
7
Q

Define the difference between between-groups and within-groups design.

A
  1. Between-groups: different groups of participants each experience a different level of the IV
  2. Within-groups: the same group of participants experiences all levels of the IV.
8
Q

What are the two basic types of independent groups design (between-groups)?

A
  1. Post-test only design: participants are tested on the DV only once, after exposure to the IV
    - Example: testing people’s fondness for massages and how likely they would be to go get one
  2. Pretest-posttest design: participants experience an IV once, but are tested on the DV twice
    - Example: testing people’s stress level before and after a massage
9
Q

What are the pro and con of a pretest-posttest design instead of a simple posttest-only design?

A

Pro: Able to see if the groups are equal from the beginning
Con: Participants could potentially figure out what the researchers are attempting to find

10
Q

What are the two basic designs for within-groups design?

A
  1. Repeated-measures design: one group of participants is measured on the DV after each level of the IV
    - Example: Participants' intelligence scores on a rainy day vs. a sunny day.
  2. Concurrent-measures design: participants are exposed to all levels of the IV at once
    - Example: In the attachment study, babies were exposed to all conditions at once (e.g., the mom leaving) and their reactions were measured
11
Q

What are the advantages and disadvantages of a within-groups design?

A

Advantages:
1. Ensures that participants are equivalent across conditions
2. The study has more power (a true effect is easier to detect)
3. Fewer participants are needed
Disadvantages:
1. Carryover effects: the impact of one treatment influences the reaction to the next
2. Practice effects: getting better with repetition
3. Fatigue effects: loss of interest or energy over time

12
Q

Define counterbalancing.

A

Presenting the levels of the IV in different orders across participants so that order effects cancel out rather than influencing the results.
- Example: shuffling the order of the questions in an intelligence study for each participant
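Full counterbalancing can be sketched with a short script; the condition names below are hypothetical, not from the course:

```python
import itertools

# Hypothetical sketch: full counterbalancing of three IV levels.
# Every possible presentation order is used, so order effects cancel out
# across participants instead of piling up on one condition.
conditions = ["rainy", "sunny", "cloudy"]
orders = list(itertools.permutations(conditions))  # all 3! = 6 orders
for i, order in enumerate(orders, start=1):
    print(f"Participant {i}: {' -> '.join(order)}")
```

With k conditions there are k! orders, which is why full counterbalancing gets expensive fast and partial schemes (e.g., Latin squares) are often used instead.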

13
Q

What are some possible negative influences on a study?

A
  1. Confounds
  2. Weak manipulation
  3. ‘Noisy’ measurements (too much unsystematic, unexplained variability)
14
Q

How can graphs be misleading?

A
  1. Biased scale (truncated or exaggerated axes)
  2. Sneaky sampling (cherry-picked data)
  3. Interpolation (assuming values between measured data points)
  4. Extrapolation (assuming the pattern continues beyond the measured data)
  5. Inaccurate values (distorting data)
15
Q

What are the main types of graphs for displaying data?

A
  1. Scatterplots
  2. Line graphs
  3. Bar graphs
  4. Pictorial graphs
  5. Pie charts
  6. Word clouds
  7. Multivariable graphs
16
Q

What are some threats to internal validity in a study?

A

M - Maturation threat
R - Regression threat
S - Selection bias
M - Mortality/attrition threat
I - Instrumentation threat
T - Testing threat
H - History threat
E - Extra ones: observer bias, researcher bias, demand characteristics (participants figure out the hypothesis and change their behavior), placebo effects, situation noise

17
Q

Define the Maturation threat.

A

Definition: Participants change naturally over time, so we can’t prove the observed differences didn’t occur on their own
Prevention: A comparison group

18
Q

Define the regression effect.

A

Definition: When individuals’ scores start out extreme but level out toward the average on later measurements
Prevention: Proper random assignment and screening out extreme scores/bad participants

19
Q

Define the mortality/attrition effect.

A

Definition: As a study goes on, people drop out of the study
Prevention: Removing the dropouts’ scores from the earlier data

20
Q

Define instrumentation threats.

A

Definition: The measurement instrument changes over time due to repeated use
Prevention: A posttest-only design, or retraining observers/recalibrating measurement tools

21
Q

Define testing threat.

A

Definition: Participants’ behavior/scores change because they are measured repeatedly
Prevention: A between-subjects design, different forms of the test (Form A vs. Form B of an exam), or a comparison group

22
Q

Define history threat.

A

Definition: external event that happens to everyone in the study
Prevention: through a comparison group

23
Q

Define the observer/researcher bias.

A

Observer: observers record what they expect to see instead of what actually occurs
Researcher: the researcher finds/searches for what they want to find

24
Q

Define demand characteristics.

A

Definition: Participants guess the hypothesis and change their behavior accordingly
Prevention: double-blind study, blind design/masked design

25
Q

Define situation noise.

A

Other things occurring at the time of the study that may influence its results
- Example: taking a test in a room with multiple researchers who are distracting, the noise of the air conditioning, and the distant yell of someone outside

26
Q

Define null effect.

A
  1. The IV is not affecting the DV: there is truly no connection between the variables (no systematic variability)
  2. The study has a design flaw: tasks are too easy, the manipulation was weak, the measurement tool was bad, etc. (no systematic variability and/or high unsystematic variability)
27
Q

What are some ways that systematic variability is reduced/not there?

A
  1. Weak manipulation: the operationalization was poor, or the change in the IV was not substantial enough
  2. Insensitive measures: the measure doesn’t capture enough variability, e.g., a 3-point Likert scale vs. a 10-point scale
28
Q

What is the difference between the ceiling and floor effects?

A

Ceiling effect: scores all fall at the high end of a test (the measure may be too easy)
Floor effect: scores all fall at the low end (the test may be too difficult)

29
Q

What is a possible cause of low systematic variability in a study?

A

A confound acting in reverse: the confound counteracts the effect of the IV on the DV
- Example: A test-prep class (IV) should improve performance, but participants’ stress about the upcoming test (a confound) could counteract the benefit

30
Q

What are some causes of high unsystematic variability?

A
  1. Noise: unexplained variance
  2. Measurement Error
  3. Individual differences
  4. Situation noise
31
Q

How does too much unsystematic variability affect the effect size?

A

Too much unsystematic variability obscures the effect, making it difficult to detect

32
Q

What are error bars?

A

A visual display of the variability around each mean (e.g., the standard deviation, standard error, or a confidence interval)
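As a concrete sketch (the scores below are made up), an error bar is typically drawn at the group mean plus or minus a measure of spread such as the standard deviation:

```python
import statistics

# Hypothetical data: one group's scores on some DV.
scores = [72, 85, 78, 90, 66, 81]

mean = statistics.mean(scores)   # the height of the bar or point
sd = statistics.stdev(scores)    # sample standard deviation
# The error bar spans from (mean - sd) up to (mean + sd).
print(f"mean = {mean:.1f}, error bar = +/- {sd:.1f}")
```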

33
Q

How do we prevent individual differences influencing a study?

A
  1. Change the design to within-groups
  2. Perform a matched samples design
34
Q

How do we prevent too much variance?

A

Add more participants (larger samples average out random error)

35
Q

Why are null studies still important?

A

It is a progression of knowledge. Whether finding no significance was due to bad study design or the variables simply not being related, researchers are able to adapt and progress.

36
Q

What is statistical power and how do you increase it?

A

The likelihood that a study will detect a true effect when one exists. You can increase it by reducing random error, increasing the sample size, and preventing other threats to the study (e.g., mortality/attrition, maturation).
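A rough way to see why sample size drives power is a simulation; everything below (the effect size, the z cutoff, the sample sizes) is a made-up illustration, not a formula from the course:

```python
import random
import statistics

# Hypothetical sketch: estimate power for a two-group mean difference
# by simulating many studies and counting how often a true effect is
# detected with a rough z-test.
def simulate_power(n_per_group, effect=0.5, sims=2000, threshold=1.96):
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        treatment = [random.gauss(effect, 1) for _ in range(n_per_group)]
        diff = statistics.mean(treatment) - statistics.mean(control)
        se = ((statistics.variance(control) + statistics.variance(treatment))
              / n_per_group) ** 0.5        # standard error of the difference
        if abs(diff / se) > threshold:     # "significant" result
            hits += 1
    return hits / sims                     # proportion of detections = power

# Larger samples -> higher power for the same true effect size.
print(simulate_power(20), simulate_power(80))
```

Running this shows the larger-sample studies detecting the same true effect far more often, which is the sense in which adding participants increases power.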

37
Q

How does adding more variables onto a factorial study affect the results?

A

Gives more information about the study and allows researchers to compare between many possible confounds
- Example: In a study on childhood hyperactivity, researchers may look at the relationship between hyperactivity and sugar, hours exercised, sports participation, and whether the school has recess.

38
Q

Define crossover interaction.

A

The lines visually cross over in a graph; the IVs react in opposition
- Example: ice cream sales and the number of times Christmas songs are played in the summer

39
Q

Define spreading interaction.

A

The lines form an angle on a graph; one IV has an effect at one level of the other IV but not (or less so) at the other level

40
Q

Define participant variable.

A

A variable that cannot be manipulated by the researcher, only measured or selected
- Example: age, gender, sexuality, or religion

41
Q

What are the types of factorial designs?

A
  1. Independent-groups factorial (Between-groups factorial)
  2. Repeated-measures factorial (Within-groups factorial)
  3. Mixed factorial design: one IV is manipulated between groups and the other IV is manipulated within groups
    - Example: a study on how relaxation levels depend on the music participants listen to; participants each listen to every music type at different times (the within-subjects factor), while different groups form the between-subjects factor