Stats and Research Methods Flashcards

1
Q

What is the scientific method according to Karl Popper?

A

Science is being wrong on purpose

→ problem → attempted solution → elimination →

2
Q

What is the scientific method according to David Spiegelhalter?

A

Science is being methodical

→ problem → plan → data → analysis → conclusion →

3
Q

How are odds presented?

A

as fractions
e.g. the chance of getting tails is 1/2

4
Q

How is probability presented?

A

as a percentage
e.g. the probability of getting tails is 50%

5
Q

What are p-values?

A

P-values are the probability of getting our result (or one more extreme) if there’s no effect in the whole population

If the p-value is less than .05 the result is conventionally called ‘significant’
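One way to see “the probability of getting our result if there’s no effect” concretely is a permutation test: if there is no effect, group labels are arbitrary, so we can ask how often shuffled labels produce a difference as big as the one observed. A minimal sketch, with made-up scores invented for illustration:

```python
import random
import statistics

# Made-up scores for two groups of 6 (e.g. treatment vs control)
group_a = [5.1, 4.8, 6.0, 5.5, 5.9, 6.2]
group_b = [4.2, 4.9, 4.0, 5.0, 4.4, 4.6]
observed = statistics.mean(group_a) - statistics.mean(group_b)

random.seed(0)  # fixed seed only so the example is reproducible
pooled = group_a + group_b
n_extreme = 0
n_perms = 10_000
for _ in range(n_perms):
    random.shuffle(pooled)          # "no effect": group labels are arbitrary
    diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
    if abs(diff) >= abs(observed):  # a result at least as extreme as ours
        n_extreme += 1

p_value = n_extreme / n_perms  # chance of our result if there is no effect
print(p_value)
```

With these invented numbers the shuffled differences almost never reach the observed one, so the p-value comes out well below .05.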

6
Q

What are the problems with P-values?

A
  • significance tells us nothing about importance
  • p<.05 is arbitrary and encourages all or nothing thinking
7
Q

What do inferential statistics tell you?

A

they tell you how likely your results would be if there were no effect in reality, letting you generalise from a sample to the wider population

8
Q

What is an experiment?

A
  • an experiment is defined by its use of randomisation
  • this way we can be reasonably sure any differences between groups are because of our manipulation
  • experiments are the gold standard in science because they let you infer causation
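Randomisation itself is straightforward to implement. A minimal sketch, with hypothetical participant IDs:

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

random.seed(42)                # fixed seed only so the example is reproducible
random.shuffle(participants)   # randomise the order
treatment = participants[:10]  # first half -> treatment group
control = participants[10:]    # second half -> control group

# Because assignment is random, pre-existing differences spread evenly
# across the groups on average, so any group difference after the
# manipulation can reasonably be attributed to the manipulation.
print(len(treatment), len(control))
```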
9
Q

Why is it hard to infer causation without an experiment?

A

The 3rd variable problem

  • in non-experimental designs you look at events or groups that you did not create and try to infer their role in producing an outcome
10
Q

What is the difference between disinformation and misinformation?

A

disinformation = deliberately spreading false information (lying)
misinformation = spreading false information without realising it’s wrong

11
Q

What is deductive reasoning?

A
  • general to specific
  • if the initial premises are true, the conclusion must be true
  • logical arguments of this sort are intrinsically easy to disprove
  • this trait is called “falsifiability”
12
Q

What is inductive reasoning?

A
  • specific to general
  • it is possible for initial premises to be absolutely true, but the conclusion false with this form of argument
  • widely (and necessarily) used in science
  • inductive arguments cannot be falsified so they are intrinsically less robust
13
Q

What are the pros and cons of deduction?

A
  • major breakthroughs come from falsifying previously accepted theories
  • rigorous
  • but … rejects descriptive research that doesn’t try to falsify theories
14
Q

What are the pros and cons of induction?

A
  • precise description of breakthroughs requires exploratory work where we try to generalise
  • inferential statistics are inductive by nature
  • but… without clear failure criteria science dissolves
15
Q

What does meta-analysis calculate?

A

meta-analysis calculates a composite effect by assigning more weight to more powerful studies
  • studies with larger sample sizes are treated as more informative, so they are given a bigger weighting
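A minimal sketch of a sample-size-weighted composite, using made-up effect sizes (real meta-analyses usually weight by inverse variance, for which sample size is a rough stand-in):

```python
# Made-up study results: (effect size, sample size)
studies = [
    (0.60, 20),   # small study, large effect
    (0.35, 80),
    (0.20, 400),  # large study, small effect
]

total_n = sum(n for _, n in studies)
# Bigger studies count for more: weight each effect by its sample size
composite = sum(effect * n for effect, n in studies) / total_n
print(round(composite, 3))  # 0.24: pulled towards the large study's estimate
```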

16
Q

What is the ‘File Drawer problem’?

A

When “unwanted” results are less likely to be published, resulting in them ending up in someone’s “file drawer” rather than in a searchable form online

This leads to the literature being biased towards certain types of results

17
Q

Why might results be “unwanted”?

A
  • non-significant p-values
  • effects which contradict the “received wisdom”
18
Q

What are 4 things that can lead to biased evidence, making ‘evidence based dentistry’ not as simple as it seems?

A
  • study publication bias
  • outcome reporting bias
  • spin
  • citation bias
19
Q

What is study publication bias?

A

The file drawer problem

Fewer negative studies are published than positive ones

20
Q

What is outcome reporting bias?

A

When the negative studies that do get published focus more on the positive findings than on the null results, trying to make the study sound more positive

21
Q

What is spin?

A

When outcomes/results of studies are spun to make them sound better, not giving a fully accurate appraisal of what happened in detail

22
Q

What is citation bias?

A

Out of the published papers, the positive ones are cited in other literature more often than the negative ones, making it seem like there is more evidence for the positive outcome

23
Q

What does a funnel plot show?

A
  • if our results are unbiased, we expect equal numbers of studies on either side of the composite effect size
  • we expect the large studies to be less biased so the graph should be broadly symmetrical at the top
24
Q

Are small or big studies more likely to be biased? Why?

A

Small studies are more likely to be biased than big studies, because a small study can disappear without a trace, whereas a big study with lots of funding is more likely to be published whatever it finds

25
Q

What phenomenon looks like publication bias, but isn’t?

A

the “small study effect”

26
Q

What is the “small study effect”?

A

When results appear to become less significant over time, not because of bias, but because the studies are getting bigger and are including participants who show less of an effect

e.g. in a small drug study the participants are those who are very ill, but as trials expand they start to include people who may just be at risk of the disease

27
Q

What does positive predictive value show?

A

the proportion of people who test positive who actually have the disease

true positives / (true positives + false positives)

28
Q

What does negative predictive value show?

A

the proportion of people who test negative who are actually disease-free

true negatives / (true negatives + false negatives)

29
Q

How do you calculate false positive rate?

A

false positives / (false positives + true positives)

(the proportion of positive test results that are wrong, i.e. 1 − PPV)

30
Q

How do you calculate false negative rate?

A

false negatives / (false negatives + true negatives)

(the proportion of negative test results that are wrong, i.e. 1 − NPV)
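Using the definitions on the cards above (with the false positive/negative rates taken out of the positive/negative test results, so they equal 1 − PPV and 1 − NPV respectively), a sketch with made-up screening counts:

```python
# Made-up screening results for 1000 people
tp = 90   # true positives: test positive, have the disease
fp = 30   # false positives: test positive, disease-free
tn = 860  # true negatives: test negative, disease-free
fn = 20   # false negatives: test negative, have the disease

ppv = tp / (tp + fp)  # of those testing positive, how many truly are
npv = tn / (tn + fn)  # of those testing negative, how many truly are
false_positive_rate = fp / (fp + tp)  # as defined on the card: 1 - PPV
false_negative_rate = fn / (fn + tn)  # as defined on the card: 1 - NPV

print(round(ppv, 3), round(false_positive_rate, 3))
```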

31
Q

What are the 2 types of errors encountered in science?

A

Type 1 error - false positive
Type 2 error - false negative

32
Q

When does a type 1 error occur?

A

False positive
- when we think there is an effect when in reality there isn’t

(the alpha level, typically .05, is the probability of making a type 1 error when there is really no effect)

33
Q

When does a type 2 error occur?

A

False negative
- When we think there isn’t an effect when there really is

(connected to beta level, experiments are usually designed so that beta < 20%)

34
Q

What is statistical power?

A

the probability that you will find an effect when that effect actually exists
→ power = 1 − beta
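Alpha, beta, and power can all be estimated by simulation. A sketch, assuming a one-sample z-test with a known SD of 1 purely for illustration: under no effect, rejections are type 1 errors; under a real effect, failures to reject are type 2 errors:

```python
import random
import statistics

random.seed(1)  # fixed seed only so the example is reproducible

def rejects_null(true_mean, n=25, alpha_z=1.96):
    """One simulated experiment: z-test of H0 'mean = 0', known SD = 1."""
    sample = [random.gauss(true_mean, 1) for _ in range(n)]
    z = statistics.mean(sample) / (1 / n ** 0.5)  # standard error = 1/sqrt(n)
    return abs(z) > alpha_z  # two-tailed test at alpha = .05

runs = 5000
# Under H0 (no real effect): every rejection is a type 1 error,
# so the rejection rate should sit near alpha = .05
type1_rate = sum(rejects_null(0.0) for _ in range(runs)) / runs
# Under a real effect (mean 0.5): rejecting is the correct outcome,
# so the rejection rate estimates power, and 1 - power is beta
power = sum(rejects_null(0.5) for _ in range(runs)) / runs
type2_rate = 1 - power

print(round(type1_rate, 3), round(power, 3))
```

The invented effect of 0.5 SD with n = 25 gives power of roughly .70, i.e. beta of roughly .30, worse than the usual beta < 20% design target.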

35
Q

What does statistical power depend on?

A
  • the effect size sought
  • the p-value used as a criterion (typically .05)
  • the type of statistical test that will be employed
  • the sample size
36
Q

What is statistical power needed to calculate?

A

needed to calculate required sample sizes
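As a sketch of such a calculation, a common normal-approximation formula for a one-sample test gives the required n to detect an effect size d (in SD units): n ≈ ((z_alpha + z_beta) / d)², using z ≈ 1.96 for two-tailed alpha = .05 and z ≈ 0.84 for 80% power (beta = .20):

```python
import math

def required_n(effect_size, z_alpha=1.96, z_beta=0.84):
    """Normal-approximation sample size for a one-sample test:
    defaults assume two-tailed alpha = .05 and 80% power (beta = .20)."""
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

print(required_n(0.5))  # medium-ish effect: 32 participants
print(required_n(0.2))  # small effect: 196 participants
```

Halving the effect size you want to detect roughly quadruples the required sample, which is why power calculations are done before data collection.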