Causal Research Flashcards
When can we assume causality, i.e. that Y depends causally on X?
- concomitant variation
- > evidence that X and Y are correlated as predicted by the hypothesis
- time order of variables
- > evidence that shows X occurs before Y
- elimination of alternative explanations
- > evidence that allows the elimination of factors other than X as the cause of Y
Single-item measurement vs. multi-item measurement
- single-item measurement is more efficient, but multi-item measurement yields more reliable and valid measures
Reflective construct
Causal relationship from the construct to the items (e.g. the state of being drunk induces sociability, disorientation, memory loss)
Formative construct
Causal relationship from the items to the construct: the items define it (e.g. beer, wine, and liquor induce the state of being drunk)
Experiment
- controlled manipulation of one or more independent variables
- observation of variation in the dependent variables
Laboratory experiment
- certain conditions as a result of a created situation
- IVs are manipulated
- other variables can be controlled perfectly
- high internal validity (degree of confidence that the causal relationship being tested is trustworthy and not influenced by other variables)
Field experiment
- realistic situation
- IVs are manipulated
- other variables can be controlled as carefully as the situation permits
- high external validity (extent to which results from a study can be applied/generalized to other situations, groups or events)
Experimental design
- pre-experimental designs
- > one-group pre-test/post-test design
- > static group comparison
- true experimental designs (RCTs)
- > before-after with control group (CG)
- > after-only with control group (CG)
- quasi-experimental design
- > time-series experiment
Types of validity
- Content validity
- > Does a variable reflect what you want to measure?
- Construct validity
- > convergent validity: are the items of the same variable strongly related to one another?
- > discriminant validity: are the items of a variable only weakly related to items of different constructs?
- Criterion validity
- > Does a variable relate to others as predicted by theory?
- > test relationship with theoretically connected variables
To ensure the reliability of a measure, determine the convergent validity…
…at the item level
- > item-to-total correlation
- > correlation of every single item with the average of all items
…at the construct level
- > Cronbach’s alpha
- > based on the average correlation among all items and the number of items (see the sketch below)
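Both checks can be computed directly from an item matrix. A minimal Python sketch, assuming a small made-up data set (rows = respondents, columns = the items of one construct):

```python
import numpy as np

# Made-up responses: 5 respondents x 4 items of one construct.
items = np.array([
    [5, 4, 5, 4],
    [2, 2, 1, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [1, 2, 2, 1],
], dtype=float)

# Item level: correlation of every single item with the average of all items.
total = items.mean(axis=1)
item_to_total = [np.corrcoef(items[:, j], total)[0, 1]
                 for j in range(items.shape[1])]

# Construct level: Cronbach's alpha from item variances and total-score variance.
k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
cronbach_alpha = (k / (k - 1)) * (1 - item_var / total_var)

print("item-to-total:", np.round(item_to_total, 2))
print("Cronbach's alpha:", round(cronbach_alpha, 2))
```

Items with a low item-to-total correlation are candidates for removal; alpha values around 0.7 or higher are commonly taken as acceptable.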
Steps in hypothesis testing
- Formulate the hypothesis
- Select an appropriate test and check assumptions
- Choose the significance level
- Calculate the test statistic
- Compare the test statistic to the critical value (or, equivalently, the p-value to the significance level)
- Interpret the results
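A minimal sketch of these steps, assuming a two-sided independent-samples t-test on two made-up groups and a significance level of 0.05; scipy's t-test stands in for whichever test actually fits the data and hypothesis:

```python
from scipy import stats

group_a = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3]
group_b = [3.2, 3.6, 3.1, 3.5, 3.4, 3.0]

alpha = 0.05                                          # chosen significance level
t_stat, p_value = stats.ttest_ind(group_a, group_b)  # test statistic and p-value

# Compare the p-value to the significance level and interpret.
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: fail to reject H0")
```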
Statistical power
- the probability of rejecting the null hypothesis when it is in fact false (i.e. of not making a type II error)
- defined as 1-ß (ß = probability of making a type II error)
- for a fixed sample size, alpha and ß trade off: lowering alpha increases ß and thus lowers power
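A minimal sketch of a power calculation, assuming an independent-samples t-test, a medium effect size (Cohen's d = 0.5) and 64 respondents per group; statsmodels' power module is used purely for illustration:

```python
from statsmodels.stats.power import TTestIndPower

# Probability of detecting d = 0.5 with n = 64 per group at alpha = 0.05.
power = TTestIndPower().power(effect_size=0.5, nobs1=64, alpha=0.05)
print(f"power = {power:.2f}")   # beta = 1 - power
```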
Effect size
Difference between the assumed value under the null hypothesis and the true (unknown) value
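A minimal sketch of one common standardized effect-size measure, Cohen's d for two groups; the group data are made up for illustration:

```python
import numpy as np

group_a = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3])
group_b = np.array([3.2, 3.6, 3.1, 3.5, 3.4, 3.0])

# Pooled standard deviation, then the standardized mean difference.
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                     (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```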
How to control the probability of incorrect findings
Both alpha and ß can be controlled by increasing the sample size: for a given level of alpha, increasing the sample size will decrease ß
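A minimal sketch of this logic turned around: for a fixed alpha and an assumed expected effect size (d = 0.5 here), solve for the sample size per group that keeps ß at 0.20 (power = 0.80):

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the missing quantity (nobs1) given effect size, alpha and power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: about {n_per_group:.0f}")
```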
Type I error (alpha-error)
- rejecting H0 although it is true
- probability: alpha
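A minimal simulation sketch: when H0 is actually true (both groups are drawn from the same distribution), the share of rejections across many repeated tests should be close to alpha; the data and test choice are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_sims, rejections = 0.05, 5000, 0

for _ in range(n_sims):
    a = rng.normal(0, 1, size=30)
    b = rng.normal(0, 1, size=30)   # same mean, so H0 holds
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1            # a rejection here is a type I error

print(f"false-rejection rate = {rejections / n_sims:.3f}")  # about 0.05
```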