RIP final Flashcards
How to proceed with answering the question: Is there a difference between the mean resting heart rate of men and women?
First, calculate the difference between the two sample means. Then transform this raw distance into a relative distance (the t-statistic), which allows us to compare the difference to a standardized distribution (the t-distribution). We calculate the test statistic using the formula for t. Once we have the value of t, we use the p-value to measure how extreme the difference is.
What is the formula for the t-statistic?
observed difference/standard error for the difference in the two means
(M1 - M2) / SE(M1 - M2)
Once we have the value of t, what do we use to measure how extreme the difference is?
p-value
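A minimal Python sketch of the procedure from the cards above, using made-up resting heart rates (the numbers and variable names are illustrative assumptions, not real data); scipy's ttest_ind returns both the t-statistic and the p-value.

```python
from scipy import stats

# Hypothetical resting heart rates in beats per minute (made-up numbers).
men = [68, 72, 75, 70, 66, 74, 71, 69]
women = [74, 78, 72, 76, 80, 75, 73, 77]

# Independent-samples t-test: relative difference between the two group means.
# equal_var=True is the pooled-variance (Student's) version.
t, p = stats.ttest_ind(men, women, equal_var=True)

print(f"t = {t:.2f}, p = {p:.4f}")  # a small p-value means the observed difference is extreme under H0
```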
conditions of causality
- covariance
- temporal precedence
- internal validity
internal validity
Alternative explanations for the relationship should be ruled out
randomized experiment
A research design where:
▪by randomization, groups can be assumed to be similar
▪one variable is manipulated (varied) by the researcher
▪the researcher measures the effect of this manipulation on another variable (the outcome)
confounding variable
A second variable that happens to vary systematically along with the intended independent variable. This variable is therefore an alternative explanation for the results
internal validity
Asks whether the groups were comparable at the beginning of the experiment, with respect to the dependent variable and other relevant variables (observed and unobserved). If, for some reason, the groups turn out not to be comparable at the start of the experiment, we speak of a selection effect
selection effect
Crucial question: how were the groups created? To reduce selection effects, groups must be formed using random assignment. If, for some reason, the groups turn out not to be comparable at the start of the experiment, we speak of a selection effect.
goal of random assignment
making sure that: the mean and variance in scores, on all variables, measured and unmeasured, are similar for both groups at the onset of the study
randomization issues
contamination
contamination in randomization
▪Participants in the experimental group communicate with participants in the control group
▪Participants do not adhere to the treatment
▪Influence from researcher(s)
PICO
The identifier of an experimental research question
Population
Intervention
Comparison
Outcome
what do researchers use when comparing mean scores of two independent groups?
independent sample t test
standard error for difference in means
contains the group sizes (n1 and n2) and spread in scores in both groups (SD1 and SD2)
With the t-test we consider the relative difference between the groups (see the sketch after this list), using:
*The mean difference: M1 - M2
*The spread in scores in both groups: SD1 and SD2
*The group sizes: n1 and n2
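A sketch of how these ingredients combine, assuming the pooled-variance (Student's) version of the test that the pooled standard deviation card further down refers to:
SD_pooled = sqrt( ((n1 - 1)*SD1^2 + (n2 - 1)*SD2^2) / (n1 + n2 - 2) )
SE(M1 - M2) = SD_pooled * sqrt(1/n1 + 1/n2)
t = (M1 - M2) / SE(M1 - M2)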
the idea behind the test statistic t
When many samples are drawn from a population in which H0 is true, the difference between the sample means will often be near zero, so t will often be near zero too. Values of t that are far from zero will be found less often.
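A small simulation sketch of this idea (Python; the population mean, SD, and group size are arbitrary choices for illustration): when both groups are drawn from the same population, H0 is true by construction, most t values land near zero, and far-from-zero values are rare.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t_values = []
for _ in range(10_000):
    # Both samples come from the SAME population, so H0 is true by construction.
    group1 = rng.normal(loc=70, scale=10, size=30)
    group2 = rng.normal(loc=70, scale=10, size=30)
    t, _ = stats.ttest_ind(group1, group2)
    t_values.append(t)

abs_t = np.abs(t_values)
print(f"share of |t| < 1: {np.mean(abs_t < 1):.2f}")  # most t values are near zero
print(f"share of |t| > 2: {np.mean(abs_t > 2):.2f}")  # extreme t values are rare
```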
what is the standard error (in the formula for t) dependent on?
*Group sizes (n1 and n2)
*Variation in scores in both groups (SD1 and SD2)
as standard deviation increases, standard error
also increases
as n increases, standard error
decreases
overall the test statistic is dependent on
- relative difference in means
- the pooled standard deviation (a weighted average of the SD in sample 1 and the SD in sample 2)
- and sample size per group
a larger difference in means means what for the t value
larger t
more variation in scores means what for the t value
smaller t
larger samples means what for the t value
larger t
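A minimal sketch, assuming the pooled-variance formula sketched earlier and made-up summary numbers, illustrating the three cards above: a larger mean difference and larger groups push t further from zero, while more spread in scores pulls it back towards zero.

```python
import math

def pooled_t(m1, m2, sd1, sd2, n1, n2):
    """t for two independent groups, pooled-variance (Student's) version."""
    # Pooled SD: weighted average of the two sample variances.
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    se = sd_pooled * math.sqrt(1 / n1 + 1 / n2)  # standard error of the difference
    return (m1 - m2) / se

print(pooled_t(72, 76, 8, 8, 30, 30))     # baseline: t is about -1.94
print(pooled_t(72, 80, 8, 8, 30, 30))     # larger difference in means -> larger |t|
print(pooled_t(72, 76, 16, 16, 30, 30))   # more variation in scores   -> smaller |t|
print(pooled_t(72, 76, 8, 8, 120, 120))   # larger samples             -> larger |t|
```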
randomization
- key feature of a true experiment
- observed and unobserved factors are equally likely in both groups
- transparent, reproducible (see the assignment sketch below)
- allows causal claims
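A minimal sketch of what transparent, reproducible random assignment can look like in practice (Python; the participant IDs and seed are made up for illustration): a seeded shuffle splits the sample into two groups, so observed and unobserved factors are balanced in expectation.

```python
import random

participants = [f"P{i}" for i in range(1, 21)]  # hypothetical participant IDs

rng = random.Random(2024)  # fixed seed -> the same assignment can be reproduced
rng.shuffle(participants)

half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print("treatment:", treatment_group)
print("control:  ", control_group)
```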
between subject design
When participants are divided into different groups and each group receives a different treatment. The data are then compared between groups
within subject design
When all participants receive all different treatments (one after the other, possibly randomized in order). We first compare the data within each person
how does a pretest-posttest design compare to a posttest-only design
The pretest can serve as a randomization check, allows correction for pre-existing differences, and makes it possible to track changes. With just a posttest design, we would not know if/how the groups differed at the beginning.
disadvantage of the pretest-posttest design
learning effect
Solomon four-group design and advantages/disadvantages
Combines both pretest-posttest and posttest-only groups. It can solve the problem of unequal groups at the beginning and check for a learning effect; however, it can be highly costly.
repeated measures design
where the same participants are measured multiple times under different conditions or at different time points. This allows researchers to examine changes within individuals, reducing variability and the need for a large sample size.
counterbalanced measures design
A research design used to control for order effects in repeated measures studies. Participants experience all conditions, but the order of conditions is varied across participants to prevent biases from practice, fatigue, or carryover effects.
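A minimal sketch of one way to counterbalance condition order (Python; the condition labels and participant IDs are hypothetical): every possible order of the conditions is cycled over the participants, so no single order dominates.

```python
from itertools import cycle, permutations

conditions = ["A", "B", "C"]               # hypothetical condition labels
orders = cycle(permutations(conditions))   # all 6 possible orders, handed out in turn

participants = [f"P{i}" for i in range(1, 13)]  # hypothetical participant IDs
assignment = {p: next(orders) for p in participants}

for participant, order in assignment.items():
    print(participant, "->", " then ".join(order))
```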
quasi-experiment
Research designs that evaluate the effect of an intervention or treatment without random assignment. Instead, groups are naturally formed or pre-existing, making them useful in real-world settings where randomization isn’t feasible.
interrupted time series design
A quasi-experimental design that measures an outcome variable repeatedly over time, both before and after an intervention or event (the “interruption”). It evaluates changes in trends or levels caused by the intervention, making it useful for analyzing the effects of policies, treatments, or external events.
field experiment
An experiment with a close simulation of the conditions under which the process under study occurs, or in a natural setting
threats to internal validity
design confounds
selection effect
design confounds
A second variable that happens to vary SYSTEMATICALLY along with the intended independent variable
▪This variable is therefore an alternative explanation for the results
threats to internal validity in experimental design
▪Design confounds
▪Selection effect
▪Contamination
▪Learning effect
▪Maturation
▪History
▪Regression to the mean
▪Attrition
▪Testing
▪Instrumentation
threats to internal validity in all research
▪Observer bias
▪Demand characteristics
▪Placebo effect
Observer bias
When the researcher has certain expectations and is influenced by these when assessing the participants / interpreting the results
Demand characteristics
When the participants realize what the study is for and therefore start to behave differently (in the expected direction)
Placebo effect
When participants make progress because they believe they are receiving an effective treatment
Maturation
Is it the manipulation or the development (aging, maturing) that caused the differences?
Observed differences between the pre- and post-measurement could arise from natural developments of the participants, when participants’ characteristics change as part of a natural process.
History threats
Is it the manipulation or external events causing the differences?
Not only natural changes of participants are a source of influence, but external events as well - events that are not necessarily related to the study.
Regression threats
Is it the manipulation or the natural “shifting” that caused the differences?
Regression to the mean can occur when the participants show extreme values (on average) at the start of the experiment. At a later time, values are expected to shift towards the ‘normal’, less extreme, mean value.
Attrition threats
Is it the manipulation or the drop-out of a group of participants that caused the differences?
When participants drop out during a study, the outcome can be affected by this. This is primarily a problem when the people that quit the study are different from the people that do not.
Instrumentation threats
Is it the manipulation or the new instrument that caused the differences?
When the instrument measuring the dependent variable changes during the experiment, the results are affected.