Exam 4 Flashcards
What is the difference between the control and treatment groups?
The control group does not receive the manipulation, while the treatment group does.
How do we determine causation?
- Covariance: the cause and effect variables change together (there is a difference between the two groups)
- Temporal precedence: the proposed cause comes before the effect (the rainstorm happens before the gloomy mood)
- Internal validity: ruling out third variables, ensuring the difference is due to the IV and not some other variable
What is the difference between systematic and unsystematic variability?
- Systematic variability: variability purposefully created through the IV
- Example: Half of the participants were tested on their mood when it was rainy (purposefully testing the effect of rain on mood).
- Unsystematic variability: unexplained, random variability within the IV groups
- Example: Some participants happened to be tested when it was raining (a random rainstorm that is now a confound within the experiment)
What is a design confound?
An unwanted second variable that varies systematically along with the IV and may influence the study.
- Example: Students who take the SAT on a rainy day may do worse than other students
Define selection effects.
Effects found in a study that were due to a faulty group assignment procedure.
- Example: Conducting a political poll outside of a college campus and nowhere else.
How can we avoid selection effects?
- Random assignment
- Matched-groups design: match participants on an attribute (IQ, age, gender), then randomly assign the members of each matched set to different groups
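A minimal Python sketch of the two procedures, using hypothetical participant data (the IDs, IQ scores, and group sizes are made up for illustration):

```python
import random

participants = [{"id": i, "iq": random.randint(85, 130)} for i in range(20)]

# Random assignment: shuffle everyone, then split into two groups.
random.shuffle(participants)
half = len(participants) // 2
control, treatment = participants[:half], participants[half:]

# Matched-groups design: sort by the matching attribute (IQ here),
# then randomly assign one member of each adjacent pair to each group.
matched = sorted(participants, key=lambda p: p["iq"])
control_m, treatment_m = [], []
for i in range(0, len(matched), 2):
    pair = [matched[i], matched[i + 1]]
    random.shuffle(pair)
    control_m.append(pair[0])
    treatment_m.append(pair[1])
```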
Define the difference between between-groups and within-groups design.
- Between-groups: different groups of participants experience different levels of the IV
- Within-groups: the same group of participants experience all levels of the IV.
What are the two basic types of independent groups design (between-groups)?
- Post-test only design: participants are tested on the DV only once, after the manipulation
- Example: testing people’s fondness for massages and how likely they would be to go get one
- Pretest-posttest design: participants experience an IV once, but are tested on the DV twice
- Example: testing people’s stress level before and after a massage
What is the pro and con of doing a pretest-posttest instead of simply a post-test?
Pro: Able to see if the groups are equal from the beginning
Con: Participants could potentially figure out what the researchers are attempting to find
What are the two basic designs for within-groups design?
- Repeated-measures design: one group is measured after each level of the IV
- Example: Participants’ intelligence scores on a rainy day vs. a sunny day.
- Concurrent measures design: participants are exposed to all levels of the IV at once
- Example: in the attachment study, babies were exposed to the conditions simultaneously (e.g., the mom leaving) and their reactions to each were measured
What are the advantages and disadvantages of a within-groups design?
Advantages:
1. ensures that participants are equivalent across conditions (they serve as their own controls)
2. study has more power (easier to detect a true effect)
3. fewer participants needed
Disadvantages:
1. Carryover effects: the impact of one treatment influences the reaction to the next
2. Practice effects: getting better with repetition
3. Fatigue effects: loss of interest over time
Define counterbalancing.
Presenting the levels of the IV in different orders across participants so that order effects do not distort the results.
- Example: shuffling the questions for an intelligence study for each participant
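A minimal sketch of full counterbalancing, assuming a hypothetical IV with three levels (the level names and participant count are made up):

```python
import itertools
import random

levels = ["rainy", "sunny", "cloudy"]  # hypothetical IV levels

# Full counterbalancing: every possible order of the IV levels.
orders = list(itertools.permutations(levels))

# Assign each participant one order, cycling so each order is used equally.
participants = [f"P{i}" for i in range(12)]
random.shuffle(participants)
assignments = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
```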
What are some possible negative influences on a study?
- Confounds
- Weak manipulation
- ‘Noisy’ measurements (too much unsystematic variance, i.e., unexplained variability)
How can graphs be misleading?
- Biased scale
- Sneak sampling (cherry-picking which data to display)
- Interpolation (assuming values between measured data points)
- Extrapolation (using assumptions to generalize beyond the measured range)
- Inaccurate values (distorting data)
What are the main types of graphs for displaying data?
- Scatterplots
- Line graphs
- Bar graphs
- Pictorial graphs
- Pie charts
- Word clouds
- Multivariable graphs
What are some threats to internal validity in a study?
M - Maturation threat
R - Regression threat
S - Selection bias
M - Mortality/attrition threat
I - Instrumentation threats
T - Testing threat
H - History threat
E - Extra ones: observer bias, researcher bias, demand characteristics (participants figure out the hypothesis and change their behavior because of it), placebo effects, situation noise
Define the Maturation threat.
Definition: Participants change naturally over time, so you can’t prove the observed differences didn’t occur spontaneously
Prevention: A comparison group
Define the regression effect.
Definition: When individuals’ scores start out extreme but level out toward the average on retesting
Prevention: Through screening out extreme scores/bad participants and proper random assignment
Define the mortality/attrition effect.
Definition: as a study goes on, people drop out of the study
Prevention: Through removing the dropouts’ scores from the previous data as well
Define instrumentation threats.
Definition: The measurement instrument changes over time due to repeated use
Prevention: Through a post-test only design, or retraining coders/recalibrating measurement tools
Define testing threat.
Definition: participants’ behavior/scores change because the study uses repeated measures
Prevention: through using between-subjects, different forms of the test (Form A vs Form B of an exam), or a comparison group
Define history threat.
Definition: external event that happens to everyone in the study
Prevention: through a comparison group
Define the observer/researcher bias.
Observer: observers record what they want to see instead of what is actually occurring
Researcher: the researcher finds/searches for what they want to find
Define demand characteristics.
Definition: Participants guess the hypothesis and change their behavior because of it
Prevention: double-blind study, blind design/masked design
Define situation noise.
Other things occurring at the time of the study that may influence its results
- Example: taking a test in a room with multiple researchers who are distracting, the noise of the air conditioning, and the distant yell of someone outside
Define null effect.
- the IV is not affecting the DV: there is truly no connection between variables, no systematic variability
- the study has a design flaw: tasks are too easy, study had issues, measurement tool was bad, etc. No systematic variability/high unsystematic variability
What are some ways that systematic variability is reduced/not there?
- Weak manipulation: operationalization was poor, change in IV was not substantial enough
- Insensitive measures: the measure can’t capture enough variability, e.g., a 3-point Likert scale vs. a 10-point scale
What is the difference between the ceiling and floor effects?
Ceiling Effect: Scores all fall at the higher end of a test, could be due to the measurement being too easy
Floor Effect: Scores that all fall at the lower end, test could be too difficult
What is a possible cause of low systematic variability in a study?
Confounds acting in reverse: a confound counteracts the effect of the IV on the DV
- Example: A study on the benefit of taking a test-prep class (IV) for test scores (DV); however, participants’ stress could counteract the benefit, since they know they are taking a test
What are some causes of high unsystematic variability?
- Noise: unexplained variance
- Measurement Error
- Individual differences
- Situation noise
How does too much unsystematic variability affect the effect size?
Too much unsystematic variability makes the effect difficult to detect: the difference between groups is swamped by the noise within them
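A worked sketch of this idea using Cohen’s d (the standardized mean difference); the score values are hypothetical, chosen so both comparisons have the same 5-point mean difference:

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Standardized mean difference: (M1 - M2) / pooled SD."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = (((n1 - 1) * stdev(group1) ** 2 +
                  (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Same 5-point mean difference; noisier scores shrink the effect size.
quiet = cohens_d([80, 82, 84], [75, 77, 79])  # low spread -> d = 2.5
noisy = cohens_d([70, 82, 94], [65, 77, 89])  # high spread -> d ~ 0.42
```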
What are error bars?
A visual representation of variability (such as the standard deviation or standard error) around each plotted mean
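A minimal matplotlib sketch of a bar graph with error bars showing plus or minus one standard deviation (the condition names and scores are hypothetical):

```python
import matplotlib.pyplot as plt
from statistics import mean, stdev

rainy = [88, 92, 85, 90, 87]  # hypothetical scores per condition
sunny = [95, 99, 93, 97, 96]

means = [mean(rainy), mean(sunny)]
sds = [stdev(rainy), stdev(sunny)]

# yerr draws a bar of +/- one SD around each mean.
plt.bar(["Rainy", "Sunny"], means, yerr=sds, capsize=8)
plt.ylabel("Test score")
plt.show()
```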
How do we prevent individual differences from influencing a study?
- Change the design to within-groups
- Perform a matched samples design
How do we prevent too much variance?
Add more participants (a larger sample averages out random variation)
Why are null studies still important?
It is a progression of knowledge. Whether finding no significance was due to bad study design or the variables simply not being related, researchers are able to adapt and progress.
What is statistical power and how do you increase it?
The probability that a study will detect an effect when one truly exists. You can increase it by reducing random error, increasing the sample size, and preventing other threats (mortality/attrition, maturation) to the study.
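A crude simulation sketch of how power grows with sample size (hypothetical data; the two-standard-error cutoff is a rough stand-in for a real significance test):

```python
import random
from statistics import mean, stdev

def simulated_power(n, effect=0.5, trials=2000):
    """Fraction of simulated studies where the group difference
    exceeds roughly two standard errors (a crude criterion)."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0, 1) for _ in range(n)]
        treatment = [random.gauss(effect, 1) for _ in range(n)]
        se = (stdev(control) ** 2 / n + stdev(treatment) ** 2 / n) ** 0.5
        if abs(mean(treatment) - mean(control)) > 2 * se:
            hits += 1
    return hits / trials

print(simulated_power(20))   # smaller sample -> lower power
print(simulated_power(100))  # larger sample -> higher power
```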
How does adding more variables onto a factorial study affect the results?
Gives more information about the study and allows researchers to examine how many possible contributing variables (and confounds) combine
- Example: In a study on childhood hyperactivity, researchers may look at the relationship between hyperactivity and sugar intake, hours exercised, whether the children play sports, and whether the school has recess.
Define crossover interaction.
The lines visually cross over in a graph; the effect of one IV reverses direction depending on the level of the other IV
- Example: ice cream sales and the number of times Christmas songs are played flip between summer and winter
Define spreading interaction.
The lines form an angle (they spread apart); one IV has an effect at one level of the other IV but not at the other level
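A minimal matplotlib sketch contrasting the two interaction shapes, with made-up cell means for two hypothetical IVs (A and B):

```python
import matplotlib.pyplot as plt

levels_b = ["Low B", "High B"]

# Hypothetical cell means for each level of IV A across IV B.
crossover = {"A1": [2, 8], "A2": [8, 2]}   # lines cross: effect reverses
spreading = {"A1": [4, 4], "A2": [4, 9]}   # one line flat, one rises

fig, axes = plt.subplots(1, 2, sharey=True)
for ax, (title, cells) in zip(axes, [("Crossover", crossover),
                                     ("Spreading", spreading)]):
    for name, means in cells.items():
        ax.plot(levels_b, means, marker="o", label=name)
    ax.set_title(title)
axes[0].legend()
plt.show()
```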
Define participant variable.
A variable that cannot be manipulated by the researcher
- Example: age, gender, sexuality, or religion
What are the types of factorial designs?
- Independent-groups factorial (Between-groups factorial)
- Repeated-measures factorial (Within-groups factorial)
- Mixed factorial design: one IV is manipulated between groups and the other IV is manipulated within groups
- Example: a study of relaxation levels depending on the music participants listen to; every participant hears each music type at a different time and rates their relaxation after each (within-subjects factor), while participants are divided into separate groups on the other IV (between-subjects factor)