Exam 3 Flashcards
between-subject designs
an experiment in which each participant is tested in one condition
(e.g. posttest only, pretest/posttest)
posttest only design (between-subject designs)
-participants are randomly assigned to IV groups and are tested on the DV just once
pretest/posttest design (between-subject designs)
participants are randomly assigned to IV groups and tested on the DV before AND after the manipulation
types of control conditions
- no-treatment control condition
- placebo
no-treatment control condition (types of control conditions)
participants receive no treatment, not even a placebo
placebo (types of control conditions)
a treatment that lacks any active ingredient or element that should make it effective
placebo effect
individuals believe there is an effect when clinically there is none (psychological effect)
advantages of between-group designs
- no transfer across conditions
- may be shorter in duration
- some treatments have long-lasting effects, so participants cannot always be switched to an alternate treatment (ruling out a within-subjects design)
disadvantages of between-group designs
- participants in your groups are not equivalent, which introduces more variability
- more participants required
within-subjects experiment
an experiment in which each participant is tested in all conditions
types of within-subjects designs
- repeated-measures design
- concurrent-measures design
repeated-measures design (types of within-subjects designs)
participants are measured on the DV more than once (after exposure to each level of the IV)
concurrent-measures design (types of within-subjects designs)
participants are exposed to all levels of the IV at roughly the same time, and a single measure serves as the DV
advantages of within-group designs
- participants in your groups are equivalent because they are the same participants and serve as their own controls
- require fewer participants than other designs
disadvantages of within-group designs
- potential carryover/order effects
- might not be practical or possible
- experiencing all levels of the IV changes the way participants act (demand characteristics)
carryover effects
an effect of being tested in one condition on participants’ behavior in later conditions
types of carryover effects
- practice effect
- fatigue effect
- context effect
practice effect (carryover effects)
participants perform better on a task in later conditions because they have a chance to practice
fatigue effect (carryover effects)
participants perform worse on a task in later conditions because they have become tired or bored
context effect (carryover effects)
being in an initial condition affects how participants perceive or interpret their subsequent tasks
solution to carryover effects
counterbalancing
counterbalancing (solution to carryover effect)
systematically varying the order of conditions across participants
- controls the order of conditions
- makes it possible to detect carryover effects
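Full counterbalancing can be sketched in a few lines of Python. This is a minimal illustration with hypothetical condition names and participant counts, not a procedure from the cards: every possible order of the conditions is generated, and participants are assigned to the orders in rotation so each order is used equally often.

```python
# Full counterbalancing: generate every order of the IV conditions,
# then assign participants to orders round-robin.
from itertools import permutations

conditions = ["A", "B", "C"]              # hypothetical IV levels
orders = list(permutations(conditions))   # 3! = 6 possible orders

# 12 hypothetical participants, assigned to orders in rotation
assignments = {p: orders[p % len(orders)] for p in range(12)}

print(len(orders))        # 6 distinct orders
print(assignments[0])     # ('A', 'B', 'C')
```

With 12 participants and 6 orders, each order is experienced by exactly 2 participants, so order is controlled and any carryover effect can be detected by comparing orders.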
construct validity
how well does the measure capture the construct of interest?
DV: how well were they measured?
IV: how well were they manipulated?
external validity
how well does the sample represent the broader population and contexts?
- generalizing to other people
- generalizing to other situations
statistical validity
how well do the numerical results (statistics) actually match the authors’ interpretation of their results?
- how large is the effect?
- how precise is the estimate? (95% CI)
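The 95% CI mentioned above can be computed with only the standard library. A minimal sketch, using made-up DV scores and the normal approximation (mean plus or minus 1.96 standard errors):

```python
# Sketch: a 95% confidence interval for a sample mean
# (normal approximation; the scores below are hypothetical)
from statistics import mean, stdev
from math import sqrt

scores = [4, 5, 6, 5, 7, 6, 5, 4, 6, 5]   # hypothetical DV scores
n = len(scores)
m = mean(scores)                           # point estimate
se = stdev(scores) / sqrt(n)               # standard error of the mean
ci = (m - 1.96 * se, m + 1.96 * se)        # 95% confidence interval

print(round(m, 2))                         # 5.3
print([round(x, 2) for x in ci])           # [4.71, 5.89]
```

A narrower interval means a more precise estimate, which is one of the two questions statistical validity asks (the other being the size of the effect).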
internal validity
how sure are we that the variables’ relationship is not due to other factors?
5 principles of APA ethics code
- beneficence and nonmaleficence
- fidelity and responsibility
- integrity
- justice
- respect for people’s rights and dignity
beneficence and nonmaleficence (5 principles of APA ethics code)
research will benefit society without causing suffering
(e.g. violating ethics: bobo doll experiment- children may have had long-term distress or behavioral changes)
fidelity and responsibility (5 principles of APA ethics code)
establish trust and behave responsibly
(e.g. violating ethics: Harvard scholar Marc Hauser falsified data and inaccurately represented research methods)
integrity (5 principles of APA ethics code)
accuracy, truth, and honesty
(e.g. violating ethics: Milgram obedience study; participants were not properly debriefed and did not know that no shocks were actually administered)
types of deception used in studies
omission- withholding details of the study from participants
commission- lying to participants
researchers must ______ when they deceive participants
debrief- during debriefing sessions, the researchers explain why deception was used and the nature of the deception
justice (5 principles of APA ethics code)
who bears the burden of research participation?
-treat groups of people fairly
- consider sampling and biases
(e.g. violating ethics: Tuskegee Syphilis Study: the participants were a targeted, disadvantaged social group)
respect for people’s rights and dignity (5 principles of APA ethics code)
maintain informed consent and prevent coercion
(e.g. violating ethics: Stanford Prison Experiment; participants were intentionally not informed that they would be arrested, which was a breach of the consent agreement, and they were not allowed to withdraw at will)
consent form
a form that participants sign as a part of the informed consent process
describes the procedure, the risks and benefits, participants’ right to withdraw from the study and any confidentiality issues
animal research
- legal protection for lab animals
- ethically balancing animal welfare, animal rights, and animal research
(e.g. violating ethics: Harlow's surrogate mother study, extreme harm to intelligent animals)
null hypothesis testing
a formal approach to deciding whether a sample result:
A) is due to chance (the null hypothesis), or
B) reflects a real relationship in the population (the alternative hypothesis)
how should you report null results? what should you conclude?
transparently
conclude:
1. check for obscuring factors
2. if no obscuring factors, just report the result
null effects…
- may be published less often
- can be just as interesting as significant results
- are increasingly being published
- are less likely to be reported in the popular media than other results
publication bias
a bias among researchers and editors in favor of publishing statistically significant results and against publishing nonsignificant results
file drawer problem
when statistically nonsignificant results are stashed away in researchers’ file drawers
why didn’t the IV make a difference in a null effect?
- not enough between-groups difference
- within-groups obscured group differences
- there really is no difference
not enough between-groups difference (null effects)
- weak manipulations
- insensitive measures
- ceiling and floor effects
- design confounds
weak manipulations (not enough between-groups difference)
the manipulation was not enough to cause a difference
insensitive measures (not enough between-groups difference)
researchers haven’t operationalized the DV with enough sensitivity to capture the potential change
ceiling and floor effect (not enough between-groups difference)
ceiling effect- the participants’ scores on the DV are clustered at the high end (e.g. when giving college students a simple addition test)
floor effect- the participants’ scores on the DV are clustered at the low end
design confounds (not enough between-groups difference)
additional unintended influences affect the results
how can within-groups variability obscure the group difference?
- measurement error
- individual difference
- situation noise
measurement error (within-groups variability)
any factor that can inflate or deflate a person’s true score on the DV
individual differences (within-groups variability)
individual differences spread out scores within each group
situation noise (within-groups variability)
any kind of external distraction that could cause variability within-groups that obscures between-groups differences
replication
conducting a study again to see whether its result is repeated
types of replication
- direct replication
- conceptual replication
- replication-plus-extension
direct replication
the original study is repeated as closely as possible to determine whether the original effect is found in the new data
conceptual replication
researchers explore the same research question but use different procedures
operationalizing the variables differently
replication-plus-extension
replicate the original study but add variables to test additional questions
scientific literature
a series of related studies conducted by different researchers who have tested similar variables
meta-analysis
a statistical analysis that yields a quantitative summary of a scientific literature
meta-analysis limitations
file drawer problem: the analysis may overestimate the true effect size because null effects (or opposite effects) have not been included
heuristic
mental shortcut, can result in a cognitive bias
cognitive biases
drawing an incorrect conclusion in certain situations based on the way the brain is set up to process info
confirmation bias
a bias to seek info that will confirm a rule and not to seek info that would refute the rule
e.g. you only look for info in the data that confirms your hypothesis
availability heuristic
we judge events as more likely, common, or frequent if they are easier to retrieve from memory
e.g. interpreting results with ideas or theory instead of considering alternate explanations
logical fallacies
error in reasoning that undermines an argument
logical fallacies…
- appeal to authority
- false induction/non sequitur
- false dichotomy
- observational selection
appeal to authority (logical fallacies)
rely on an expert instead of making a full argument
e.g. in the discussion, relying on the work of others instead of making your own argument for your findings
false induction/non sequitur (logical fallacies)
erroneously present things as causal
e.g. present correlation results in the discussion section as ‘causing’ or ‘affecting’ the outcome
false dichotomy (logical fallacies)
issue presented as either/or
e.g. presenting results in the discussion as overly simplified and without nuance
observational selection (logical fallacies)
only draws attention to positive evidence or observations
e.g. only report significant results instead of all results for the research questions or fail to fully discuss the interpretation of results as a whole
cognitive biases vs logical fallacies
the way brain processes info vs errors in reasoning
how do scientists try to avoid cognitive biases and logical fallacies
an attitude of skepticism:
consider alternatives and search for evidence before accepting that a belief or claim is true
tolerance for uncertainty:
withholding judgement about whether a belief or claim is true when there is insufficient evidence for it
what percentage of the class sample is male?
23.7%
is growth mindset associated with procrastination?
no
look at Sig. 0.790
0.790 is more than 0.05 so it is not significant and therefore not correlated/associated
is self esteem associated with procrastination?
yes
look at Sig. 0.006
0.006 is less than 0.01 (**) so it is significant and therefore correlated/associated
is self esteem associated with procrastination?
yes
the points on the scatterplot fall close to a line, which suggests they are associated/correlated
does number of traumatic experiences predict strength of a secure attachment style?
yes
sig. is .031
.031 is less than 0.05, so the predictor is statistically significant
does number of traumatic experiences predict strength of a secure attachment style?
weak negative association
do men and women have different levels of trait agreeableness?
no
two-sided p= 0.948
0.948 is bigger than 0.05, so no
do men and women have different levels of trait neuroticism?
yes
look at two-sided p < 0.001
p < 0.001 is smaller than 0.05, so yes
do men and women have different levels of trait neuroticism?
yes
the plotted distributions for men and women are visibly different
does strategy affect digit span memory?
no
two sided p= 0.192
0.192 is bigger than 0.05, so no
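The decision rule running through all of the cards above is the same comparison each time: reject the null hypothesis only when the reported p-value ("Sig." or "two-sided p") is below the alpha level. A minimal sketch (the function name and the 0.05 default are illustrative, not from the cards):

```python
# Decision rule: a result is statistically significant
# when its p-value falls below the alpha level.
def is_significant(p, alpha=0.05):
    """Return True when p < alpha (reject the null hypothesis)."""
    return p < alpha

print(is_significant(0.790))   # False -> growth mindset: not associated
print(is_significant(0.006))   # True  -> self esteem: associated
print(is_significant(0.031))   # True  -> trauma predicts attachment
print(is_significant(0.948))   # False -> agreeableness: no difference
print(is_significant(0.192))   # False -> strategy: no effect
```

Note that significance only says the result is unlikely under the null; the direction and size of the effect (e.g. the weak negative association above) still have to be read from the statistic itself.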