Stats Flashcards
External validity
Generalizability
Internal validity
Causality
Construct validity
Did we measure the right thing, i.e. does the measure actually capture the intended construct?
Random assignment
Internal validity - randomly assign participants to different groups (treatment vs. control)
Random selection
External validity - select randomly from a population/sampling frame
ANOVA
Analysis of Variance
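A minimal sketch of what a one-way ANOVA computes, using made-up data: the F statistic compares the variance between group means to the variance within groups.

```python
# One-way ANOVA F statistic computed by hand (illustrative, made-up data).
groups = [
    [5.0, 6.0, 7.0],   # e.g. control
    [8.0, 9.0, 10.0],  # e.g. treatment A
    [5.0, 7.0, 9.0],   # e.g. treatment B
]

n_total = sum(len(g) for g in groups)
k = len(groups)
grand_mean = sum(x for g in groups for x in g) / n_total

# Between-group sum of squares: spread of the group means around the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: spread of observations around their own group mean.
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# F = (between-group variance) / (within-group variance)
f_stat = (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) suggests the group means differ more than within-group noise would explain.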
Main effect
The effect of one independent variable on the dependent variable, averaged across the levels of any other variables
Interaction effect
The effect of one variable on the outcome depends on the level of another variable; a combination of variables jointly influences the outcome
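The main-effect and interaction cards can be illustrated with a small 2x2 example (all cell means are made up):

```python
# Hypothetical 2x2 design: factor "drug" (no/yes) x factor "therapy" (no/yes).
# Cell means of some outcome (made-up numbers for illustration).
means = {
    ("no_drug", "no_therapy"): 10,
    ("no_drug", "therapy"): 14,
    ("drug", "no_therapy"): 12,
    ("drug", "therapy"): 22,
}

# Main effect of drug: drug vs. no-drug, averaged over the therapy conditions.
drug_effect = (
    (means[("drug", "no_therapy")] + means[("drug", "therapy")]) / 2
    - (means[("no_drug", "no_therapy")] + means[("no_drug", "therapy")]) / 2
)

# Interaction: does the drug effect depend on whether therapy is given?
drug_effect_without_therapy = means[("drug", "no_therapy")] - means[("no_drug", "no_therapy")]
drug_effect_with_therapy = means[("drug", "therapy")] - means[("no_drug", "therapy")]
interaction = drug_effect_with_therapy - drug_effect_without_therapy
```

Here the drug raises the outcome by 2 points without therapy but by 8 points with therapy, so the nonzero interaction shows the combination matters beyond each main effect alone.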
Reliability of a measure
The extent to which it is free of random error (fluctuating errors, such as amount of sleep before a test)
Validity of a measure
The extent to which it is free of systematic error (e.g. dyslexia consistently depressing test scores)
Mean
Average: add the values together, then divide by the number of values
Mode
The value that occurs most often
Median
Ordinal: the middle number
IQR
Ordinal: Q3 - Q1
Range
Highest value minus lowest value
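The descriptive statistics above can be computed with Python's standard `statistics` module; the data below are made up for illustration.

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 9]

mean = statistics.mean(data)          # sum of values / number of values
mode = statistics.mode(data)          # most frequent value
median = statistics.median(data)      # middle value of the sorted data
data_range = max(data) - min(data)    # highest minus lowest

# IQR = Q3 - Q1; statistics.quantiles with n=4 returns the three quartile
# cut points (default "exclusive" method).
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
```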
Ordinal
Categories with a rank order, e.g. strongly agree / agree / disagree
Nominal
Categories whose values are names or labels with no inherent order, e.g. male or female
Interval
Numeric scale with equal, meaningful intervals between values but no true zero point (e.g. temperature in °C)
Interpretive data collection
Interviews, observations
Interpretive research designs
Case research, action research, ethnography, phenomenology
Experimental design: two-group designs
1. Pretest-posttest control group design: participants are randomly assigned to receive the intervention or not (placebo), and the outcome is measured both before and after the intervention.
2. Posttest-only control group design: participants are randomly assigned to either receive an intervention or not, and the outcome of interest is measured only once, after the intervention takes place, in order to determine its effect.
Hybrid designs
1. Randomized block design: the treatments are randomly allocated to the experimental units within each block.
2. Solomon four-group design: everyone is assigned to one of four groups; the groups have different combinations of pretest and treatment (pretest + treatment, pretest only, treatment only, neither).
3. Switched replication design: in the first phase, both groups are pretested, one is given the program, and both are posttested. In the second phase, the original comparison group is given the program while the original program group serves as the “control”.
Face validity
Intuitively assessing whether your measure is measuring your construct
Convergent validity
Different measures of the same construct are compared (they should correlate highly), to make sure that you're measuring the right thing
Discriminant validity
Checks how unrelated your measure is to measures of different constructs, to show that it is not actually measuring a different variable.
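Convergent and discriminant validity are often checked with correlations: two measures of the same construct should correlate highly, while a measure of an unrelated construct should not. A minimal sketch with made-up scores:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two questionnaires intended to measure the same construct (e.g. anxiety)
scale_a = [2, 4, 4, 5, 7, 8]
scale_b = [1, 3, 5, 5, 6, 9]
# A measure of an unrelated construct (e.g. shoe size)
unrelated = [7, 4, 9, 5, 8, 6]

convergent = pearson_r(scale_a, scale_b)      # should be high (close to 1)
discriminant = pearson_r(scale_a, unrelated)  # should be near 0
```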