Stats and Research Methods Flashcards
What is the scientific method according to Karl Popper?
Science is being wrong on purpose
—> problem —> attempted solution —> elimination —>
What is the scientific method according to David Spiegelhalter?
Science is being methodical
—> problem —> plan —> data —> analysis —> conclusion —>
How are odds presented?
as a ratio (often written as a fraction) of the event happening to it not happening
e.g. the odds of getting tails are 1/1 (one to one)
How is probability presented?
in a percentage
e.g. probability of getting tails is 50%
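The odds/probability relationship on these two cards can be sketched in a few lines of Python (the function names are just for illustration):

```python
# Sketch: converting between odds and probability.
def odds_to_probability(numerator, denominator):
    """Odds of a to b for an event give probability a / (a + b)."""
    return numerator / (numerator + denominator)

def probability_to_odds(p):
    """Probability p corresponds to odds of p to (1 - p)."""
    return p / (1 - p)

# Tails on a fair coin: odds of 1 to 1, probability 50%.
print(odds_to_probability(1, 1))   # 0.5
print(probability_to_odds(0.5))    # 1.0
```

Note that the two scales only agree at the extremes: odds of 1/1 correspond to a probability of 50%, not 1/2 odds.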
What are p-values?
P-values are the probability of getting a result at least as extreme as ours if there’s no effect in the whole population
If the p-value is less than .05 then it is ‘significant’
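A p-value can be estimated directly by simulation, which makes the definition above concrete. This is a minimal sketch, assuming a made-up experiment of 60 tails in 100 flips tested against a fair coin:

```python
import random

# Sketch: estimating a p-value by simulation. Suppose we observed 60 tails
# in 100 flips and want P(result at least this extreme | fair coin).
random.seed(1)  # fixed seed so the run is reproducible

observed = 60
n_flips = 100
n_sims = 10_000

extreme = 0
for _ in range(n_sims):
    tails = sum(random.random() < 0.5 for _ in range(n_flips))
    # two-sided: count simulated results as far from 50 as the observed 60
    if abs(tails - 50) >= abs(observed - 50):
        extreme += 1

p_value = extreme / n_sims
print(p_value)  # around 0.06 for this setup
```

The simulated null distribution plays the role of "no effect in the whole population": the p-value is simply the fraction of null worlds that look at least as extreme as our data.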
What are the problems with P-values?
- significance tells us nothing about importance
- p<.05 is arbitrary and encourages all or nothing thinking
What do inferential statistics tell you?
they let you generalise from a sample to the population; a p-value, for example, tells you how likely your data would be if there were no effect in reality (not the probability that there is no effect)
What is an experiment?
- an experiment is defined by its use of randomisation
- this way we can be reasonably sure any differences between groups are because of our manipulation
- experiments are the gold standard in science because they let you infer causation
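The randomisation step that defines an experiment can be sketched directly (participant IDs are made up for illustration):

```python
import random

# Sketch: randomly allocating participants to two conditions, the step
# that makes a design an experiment.
random.seed(0)  # fixed seed so the allocation is reproducible

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
random.shuffle(participants)  # randomisation happens here

half = len(participants) // 2
treatment = participants[:half]
control = participants[half:]
print("treatment:", treatment)
print("control:  ", control)
```

Because allocation is random, any pre-existing differences between people are spread across both groups by chance, which is what licenses the causal inference on this card.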
Why is it hard to infer causation without an experiment?
The third-variable problem
- in non-experimental designs you look at events or groups that you did not create and try to infer their role in producing an outcome
- an unmeasured third variable could be driving both the supposed cause and the outcome
What is the difference between disinformation and misinformation?
disinformation = deliberately spreading falsehoods (lying)
misinformation = being wrong and spreading it without intending to deceive
What is deductive reasoning?
- general to specific
- if the initial premises are true, the conclusion must be true
- logical arguments of this sort are intrinsically easy to disprove
- this trait is called “falsifiability”
What is inductive reasoning?
- specific to general
- the initial premises can all be absolutely true and yet the conclusion false with this form of argument
- widely (and necessarily) used in science
- inductive arguments cannot be falsified so they are intrinsically less robust
What are the pros and cons of deduction?
- major breakthroughs come from falsifying previously accepted theories
- rigorous
- but … rejects descriptive research that doesn’t try to falsify theories
What are the pros and cons of induction?
- breakthroughs require prior exploratory, descriptive work in which we try to generalise
- inferential statistics are inductive by nature
- but… without clear failure criteria, science dissolves into unfalsifiable speculation
What does meta-analysis calculate?
meta-analysis calculates a composite effect by assigning more weight to more powerful studies
- studies with larger samples are treated as more informative, so they are given a bigger weighting
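A minimal sketch of sample-size weighting as described on this card (the effect sizes and ns are made up; real meta-analyses usually weight by inverse variance, which behaves similarly since variance shrinks with sample size):

```python
# Sketch: a composite effect weighted by sample size.
studies = [
    {"effect": 0.40, "n": 30},    # small study, big effect
    {"effect": 0.25, "n": 120},
    {"effect": 0.10, "n": 500},   # large study, small effect
]

total_n = sum(s["n"] for s in studies)
composite = sum(s["effect"] * s["n"] for s in studies) / total_n
print(round(composite, 3))  # 0.142
```

Notice the composite sits much closer to the large study's estimate than a plain average (0.25) would, which is exactly the weighting the card describes.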
What is the ‘File Drawer problem’?
When “unwanted” results are less likely to be published, resulting in them ending up in someone’s “file drawer” rather than in a searchable form online
This leads to the literature being biased towards certain types of results
Why might results be “unwanted”?
- non-significant p-values
- effects which contradict the “received wisdom”
What are 4 things that can lead to biased evidence, making ‘evidence based dentistry’ not as simple as it seems?
- study publication bias
- outcome reporting bias
- spin
- citation bias
What is study publication bias?
The file drawer problem
Fewer negative studies are published than positive ones
What is outcome reporting bias?
When published negative studies focus more on the positive outcomes than on the null results, trying to make the study sound more positive
What is spin?
When outcomes/results of studies are spun to make them sound better, not giving a fully accurate appraisal of what happened in detail
What is citation bias?
Out of the published papers, more of the positive ones are cited in other literature compared to negative ones, making it seem like there is more evidence for the positive outcome
What does a funnel plot show?
- if our results are unbiased, we expect equal numbers of studies on either side of the composite effect size
- we expect the large studies to be less biased so the graph should be broadly symmetrical at the top
Are small or big studies more likely to be biased? Why?
Small studies are more likely to be biased than big studies: a small study can disappear without a trace, whereas a big study with lots of funding is more likely to be published whatever its result
What phenomenon looks like publication bias, but isn’t?
the “small study effect”
What is the “small study effect”?
When effects appear to shrink over time, not because of bias, but because the studies are getting bigger and bigger and including participants who won’t show as much of an effect
e.g. in a small drug study the participants are those that are very ill, but as the study expands it starts to include people that may just be at risk of the disease
What does positive predictive value show?
the proportion of people who test positive who actually have the disease
What does negative predictive value show?
the proportion of people who test negative who are actually disease-free
How do you calculate false positive rate?
false positives / (false positives + true positives), i.e. 1 - PPV
How do you calculate false negative rate?
false negatives / (false negatives + true negatives), i.e. 1 - NPV
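The four quantities on these cards can be computed from one confusion matrix. This sketch uses the card's definitions of the false positive and false negative rates (the complements of PPV and NPV); the counts are made up for illustration:

```python
# Sketch: predictive values and error rates from a hypothetical
# screening-test confusion matrix.
tp, fp, tn, fn = 90, 10, 80, 20  # made-up counts

ppv = tp / (tp + fp)                  # positive predictive value
npv = tn / (tn + fn)                  # negative predictive value
false_positive_rate = fp / (fp + tp)  # as defined on this card: 1 - PPV
false_negative_rate = fn / (fn + tn)  # as defined on this card: 1 - NPV

print(ppv, npv, false_positive_rate, false_negative_rate)
# 0.9 0.8 0.1 0.2
```

Note that other sources define the false positive rate as FP / (FP + TN) instead, so it is worth checking which convention a paper is using.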
What are the 2 types of errors encountered in science?
Type 1 error - false positive
Type 2 error - false negative
When does a type 1 error occur?
False positive
- when we think there is an effect when in reality there isn’t
(p-values (alpha level) are connected to the probability of making a type 1 error)
When does a type 2 error occur?
False negative
- When we think there isn’t an effect when there really is
(connected to beta level, experiments are usually designed so that beta < 20%)
What is statistical power?
the probability that you will detect an effect when that effect actually exists
-> power = 1 - beta
What does statistical power depend on?
- the effect size sought
- the p-value used as a criterion (typically .05)
- the type of statistical test that will be employed
- the sample size
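Power can be estimated by simulation, which shows how the ingredients above interact. A minimal sketch, assuming a made-up coin experiment (true tails probability 0.6, n = 100, two-sided test at alpha = .05 via the normal approximation):

```python
import random

# Sketch: estimating power (1 - beta) by simulation. We assume the effect
# really exists (true tails probability 0.6) and count how often an
# experiment of 100 flips rejects the null of a fair coin.
random.seed(2)  # fixed seed so the run is reproducible

true_p, n = 0.6, 100
alpha_z = 1.96  # two-sided criterion at p < .05
n_sims = 5_000

rejections = 0
for _ in range(n_sims):
    tails = sum(random.random() < true_p for _ in range(n))
    z = (tails - 0.5 * n) / (0.25 * n) ** 0.5  # z under the null p = 0.5
    if abs(z) > alpha_z:
        rejections += 1

power = rejections / n_sims  # estimate of 1 - beta
print(round(power, 2))
```

Re-running the sketch with a larger n or a bigger true effect pushes the estimate up, which is exactly why power calculations are used to choose sample sizes.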
What is statistical power needed to calculate?
a power calculation is needed to work out the required sample size for a study