Module 1: Experimental Design Flashcards
sensitivity
Refers to the likelihood in an experiment that the effect of an independent variable will be detected when that variable does, indeed, have an effect; sensitivity is increased to the extent that error variation is reduced (e.g., by holding variables constant rather than balancing them).
The ability to detect the effect of the independent variable even if the effect is a small one.
An experiment is more sensitive when there is less variability in participants’ responses within a condition of the experiment, i.e., less error variation.
independent groups design (+/-)
Each separate group of subjects in the experiment represents a different condition as defined by the level of the independent variable.
random groups design (+/-)
The most common type of independent groups design in which subjects are randomly assigned to each group such that groups are considered comparable at the start of the experiment.
block randomization
The most common technique for carrying out random assignment in the random groups design; each block includes a random order of the conditions, and there are as many blocks as there are subjects in each condition of the experiment.
Block randomization can also be used to order the conditions for each participant in a complete design.
It is effective in balancing practice effects.
In general: the number of blocks in a block-randomized schedule is equal to the number of times each condition is administered, and the size of each block is equal to the number of conditions in the experiment.
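The rule above can be sketched in Python (a hypothetical helper, not from the book): one block per administration of each condition, each block a random order of all conditions.

```python
import random

def block_randomized_schedule(conditions, n_per_condition, seed=None):
    """Build a block-randomized schedule: the number of blocks equals
    the number of times each condition is administered, and the size
    of each block equals the number of conditions."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_per_condition):  # number of blocks
        block = list(conditions)
        rng.shuffle(block)            # random order within each block
        schedule.extend(block)
    return schedule

# Example: 3 conditions, each administered 5 times -> 5 blocks of size 3
schedule = block_randomized_schedule(["A", "B", "C"], 5, seed=1)
print(len(schedule), schedule.count("A"))  # 15 assignments, "A" appears 5 times
```

Because every block contains each condition exactly once, the groups stay balanced even if the experiment is cut short at a block boundary.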
threats to internal validity
Possible causes of a phenomenon that must be controlled so a clear cause-effect inference can be made.
mechanical subject loss
Occurs when a subject fails to complete the experiment because of equipment failure or because of experimenter error.
selective subject loss
Occurs when subjects are lost differentially across the conditions of the experiment as the result of some characteristic of each subject that is related to the outcome of the study.
experimenter effects
Experimenters’ expectations that may lead them to treat subjects differently in different groups or to record data in a biased manner.
placebo control group (+/-)
Procedure by which a substance that resembles a drug or other active substance but that is actually an inert, or inactive, substance is given to participants.
Acts as a control group? -> check the book
double-blind procedure
Both the participant and the observer are kept unaware (blind) of what treatment is being administered.
EX: in an experiment with a placebo control group, neither the observer nor the participant knows who receives the placebo (the inactive substance) and who receives the genuine medicine.
replication
Repeating the exact procedures used in an experiment to determine whether the same results are obtained.
The most effective way to test the validity of an experiment? -> check the book
- effect size
- Cohen’s d
- Index of the strength of the relationship between the independent variable and dependent variable that is independent of sample size.
- A frequently used measure of effect size in which the difference in means for two conditions is divided by the average variability of participants’ scores (within-group standard deviation).
Based on Cohen’s guidelines, d values of .20, .50, and .80 represent small, medium, and large effects of an independent variable, respectively.
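The definition above translates directly into code; this is a minimal sketch (the function name and the example scores are made up for illustration):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: difference in means for two conditions divided by
    the pooled within-group standard deviation."""
    n1, n2 = len(group1), len(group2)
    mean_diff = statistics.mean(group1) - statistics.mean(group2)
    pooled_var = ((n1 - 1) * statistics.variance(group1)
                  + (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2)
    return mean_diff / pooled_var ** 0.5

# Example: treatment vs. control scores (made-up data)
# means 5 vs. 2, pooled SD 1 -> d = 3.0, a very large effect
d = cohens_d([4, 5, 6], [1, 2, 3])
```

Note that d depends only on the means and the within-group variability, not on the sample size, which is exactly what makes it useful alongside significance tests.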
meta-analysis
Analysis of results of several (often, very many) independent experiments investigating the same research area.
The measure used in a meta-analysis is typically effect size.
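As a toy illustration of combining effect sizes across studies, here is a simplified sample-size-weighted average (real meta-analyses typically weight by inverse variance; the function and data are hypothetical):

```python
def weighted_mean_effect_size(effects):
    """effects: list of (d, n) pairs, one per independent experiment,
    where d is the effect size and n the sample size. Returns the
    sample-size-weighted mean effect size across studies."""
    total_n = sum(n for _, n in effects)
    return sum(d * n for d, n in effects) / total_n

# Three hypothetical studies of the same effect
summary = weighted_mean_effect_size([(0.5, 20), (0.3, 40), (0.7, 40)])
```

Weighting by sample size gives larger (more precise) studies more influence on the summary estimate.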
- Null hypothesis
- Null hypothesis significance testing (NHST)
- Assumption used as the first step in statistical inference whereby the independent variable is said to have had no effect.
- A procedure for statistical inference used to decide whether a variable has produced an effect in a study. NHST begins with the assumption that the variable has no effect (null hypothesis), and probability theory is used to determine the probability that the effect (e.g., a mean difference between conditions) observed in a study would occur simply by error variation (“chance”).
If the likelihood of the observed effect is small (level of significance), assuming the null hypothesis is true, we infer the variable produced a reliable effect (statistically significant).
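The chance-based reasoning of NHST can be illustrated with a permutation test (a sketch with made-up data; textbook examples typically use t-tests, but the logic is the same):

```python
import random
import statistics

def permutation_p_value(group1, group2, n_perm=10_000, seed=0):
    """Estimate how often a mean difference at least as large as the
    observed one would arise by chance alone, assuming the null
    hypothesis (no effect) is true: shuffle the group labels many
    times and count the outcomes at least as extreme."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group1) - statistics.mean(group2))
    pooled = list(group1) + list(group2)
    n1 = len(group1)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n1]) - statistics.mean(pooled[n1:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Clearly separated groups -> small p -> effect judged statistically significant
p = permutation_p_value([10, 11, 12, 13, 14], [20, 21, 22, 23, 24])
```

If p falls below the chosen level of significance (conventionally .05), the null hypothesis is rejected and the effect is called statistically significant.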
statistically significant
When the probability of an obtained difference in an experiment is smaller than would be expected if error variation alone were assumed to be responsible for the difference, the difference is statistically significant.
- validity
- internal validity
- external validity
- The “truthfulness” of a measure; a valid measure is one that measures what it claims to measure.
- Degree to which differences in performance can be attributed unambiguously to an effect of an independent variable, as opposed to an effect of some other (uncontrolled) variable; an internally valid study is free of confounds.
- The extent to which the results of a research study can be generalized to different populations, settings, and conditions.