Lecture 9 Flashcards
Experimental Design
Why do we need good experiments?
Bad ones waste money, time, resources and effort. They can also have real-world consequences
Two classes of research design:
- Observational/correlational/descriptive
- Manipulative/experimental
O/C/D Research
Looks for patterns and/or associations. Correlational, so it cannot prove causation. E.g. an association between an inherited genotype and diabetes
M/E Research
Manipulates conditions. A stronger design; tests cause and effect and seeks to prove causation. E.g. create a transgenic animal model with a mutation in the suspect gene and compare its glucose metabolism against that of a normal animal
Experimental design essentials
A clear and precise question, a control/comparator group, randomisation, and repetition
Observational studies
Data are collected in ‘natural’ settings without interfering with what you’re observing, which makes these studies easier to run than manipulative ones. The data are very often categorical
Reverse Causation
Instead of group A affecting group B (as assumed), group B has an effect on group A
(Third Factor) Confounding variable
A third factor, group C, has an impact on both group A and group B, creating an apparent association between them
Why do we need controls?
Without a control we can’t tell the difference between a real treatment effect and a random change over time. Controls also account for the placebo effect
Why do we replicate experiments?
Individuals may vary
RCT
Randomised (sometimes double-blinded/placebo) Controlled Trial. The ‘gold standard’ study design for evaluating interventions
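A minimal sketch of the randomisation step in Python; the subject labels and group sizes are illustrative assumptions, not from the lecture:

import random

# Hypothetical list of 20 subjects to be allocated at random to two arms
subjects = [f"subject_{i}" for i in range(1, 21)]
random.shuffle(subjects)                            # randomise the order
treatment, control = subjects[:10], subjects[10:]   # split into equal arms
print("Treatment arm:", treatment)
print("Control arm:", control)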
Measurement Validity
The degree to which any measurement approach or instrument succeeds in describing or quantifying what it is designed to measure
What is Bias?
A systematic error (caused by the investigator or the subjects) that causes an incorrect (over- or under-) estimate of an association
Most common type of bias
Selection bias. Examples: control selection bias, loss-to-follow-up bias, self-selection bias, the “healthy worker” effect, and differential referral or diagnosis of subjects
Double-Blind Study
The treatment vs control status is unknown to the researchers as well as to the subjects. Avoids the possibility of both measurement bias and the placebo effect
False Positive
Type I Error: We reject the null hypothesis when it is true and in reality there is no difference or association
False Negative
Type II Error: We accept the null hypothesis when it is false and in reality there is a difference or association
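A rough Python simulation of Type I errors under assumed settings (two groups of 30 drawn from the same population, α = 0.05); the setup is illustrative, not from the lecture:

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials = 10_000
false_positives = 0
for _ in range(n_trials):
    a = rng.normal(0, 1, 30)   # both samples come from the same population,
    b = rng.normal(0, 1, 30)   # so the null hypothesis is actually true
    _, p = ttest_ind(a, b)
    if p < 0.05:               # a "significant" result here is a false positive
        false_positives += 1
print(false_positives / n_trials)  # roughly 0.05, i.e. about 5% Type I errors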
Errors with Multiple Comparisons
By using a p-value of 0.05 as the criterion for significance (α) we’re accepting a 5% chance of a false positive (of calling a difference significant when it really isn’t) on each test. With many comparisons, the chance of at least one false positive grows well beyond 5%, as the sketch below shows
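The sketch referred to above: how the chance of at least one false positive grows with the number of independent comparisons, assuming α = 0.05 per test:

alpha = 0.05
for n_tests in (1, 5, 10, 20, 50):
    p_at_least_one = 1 - (1 - alpha) ** n_tests   # family-wise error rate
    print(n_tests, round(p_at_least_one, 3))
# With 20 independent tests there is already a ~64% chance of at least one
# spurious "significant" result.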
Power and sample size
Statistical power is the probability that a study, based on a sample, will detect a statistically significant difference or association that actually exists between the underlying populations.
Too few subjects = total waste
Too many subjects = partial waste
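A minimal sketch of a sample-size calculation for a two-sample t-test, assuming a medium effect size (Cohen’s d = 0.5), α = 0.05 and 80% power; these particular numbers are illustrative assumptions, not from the lecture:

from statsmodels.stats.power import TTestIndPower

# Solve for the number of subjects per group needed to reach the target power
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))   # roughly 64 subjects per group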
If sample size is too small…
Confidence intervals are too broad, so we are unlikely to detect, with statistical significance, a true difference between means
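A small sketch of how the width of a 95% confidence interval for a mean narrows as the sample size grows, assuming a population standard deviation of 1:

import math
from scipy.stats import norm

z = norm.ppf(0.975)                      # ~1.96 for a 95% confidence interval
for n in (5, 20, 100, 500):
    half_width = z * 1 / math.sqrt(n)
    print(n, round(2 * half_width, 3))   # full CI width shrinks as n increases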