Chance - II Flashcards

1
Q

what are p values?

A

probability of getting study estimate (or one further from the null), when there is really no association, just because of sampling error (chance)
- if the probability is really low, then unlikely that estimate is due to sampling error (chance)
- uses logic of hypothesis testing
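The logic above can be sketched with a small simulation using made-up numbers (two groups of 100, a true risk of 0.2 in both, observed risk difference 0.10 — all hypothetical): generate many "null worlds" where there is no association, and count how often sampling error alone produces an estimate at least as far from the null as the one observed.

```python
import random

random.seed(1)

# Hypothetical study: two groups of n = 100, true risk 0.2 in BOTH groups
# (so the null hypothesis is true), and an observed risk difference of 0.10
n, true_risk, observed_diff = 100, 0.2, 0.10

def simulated_diff():
    # risk difference between two samples drawn from the SAME population
    risk_a = sum(random.random() < true_risk for _ in range(n)) / n
    risk_b = sum(random.random() < true_risk for _ in range(n)) / n
    return risk_a - risk_b

sims = [simulated_diff() for _ in range(10_000)]

# p-value: proportion of null-world estimates at least as far from the
# null (0) as the study estimate - i.e. how often chance alone does this
p_value = sum(abs(d) >= observed_diff for d in sims) / len(sims)
print(round(p_value, 3))
```

A small p-value would mean chance rarely produces an estimate this extreme when the null is true.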

2
Q

what are the two hypotheses that play a role in p values, and what do they tell us about the p value?

A

the null hypothesis (H0): there really is NO association in the population
- parameter equals null value

the alternative hypothesis (Ha): there really IS an association in the population
- parameter does not equal null value

p value tells us about the probability of finding an association when there truly isn’t one (ie. the null hypothesis is true)

3
Q

how do we interpret p values?

A

we set a threshold for the probability of the estimate occurring because of sampling error alone (the type-I error rate) - this reflects how comfortable we are with being wrong
- usually set at 5% (0.05)

Statistical significance:
p<0.05:
- reject H0
- accept Ha
- association is: “statistically significant”

p>0.05:
- fail to reject H0
- reject Ha
- association is “not statistically significant”
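The decision rule above amounts to a single comparison; the threshold and wording here are just the conventions from the card:

```python
ALPHA = 0.05  # conventional type-I error threshold

def interpret(p):
    # standard (if blunt) interpretation of a p-value at the 5% threshold
    if p < ALPHA:
        return "reject H0, accept Ha: statistically significant"
    return "fail to reject H0: not statistically significant"

print(interpret(0.01))
print(interpret(0.20))
```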

4
Q

example of reporting a study of association between exposure and outcome

A
  1. ‘exposure’ were ‘measure of association’ times as likely to develop ‘outcome’ compared with ‘comparison’
  2. the probability of a ‘measure of association’ of ‘value’ or further from the null, when the null hypothesis is true, is 0.01
  3. since the p-value is less than 0.05 the association is statistically significant. we reject the null hypothesis and accept the alternative hypothesis. chance is an unlikely explanation of the study finding.
    OR
  4. since the p-value is more than 0.05 the association is not statistically significant. we fail to reject the null hypothesis and reject the alternative hypothesis. the study finding is consistent with chance as an explanation
5
Q

what is a type-II error?

A

incorrectly failing to reject H0 when we should have rejected it (saying there is no association when there actually is one)
- p should have been <0.05 but got >0.05

typically due to having too few people in the study
- bigger sample size = more likely to get small p
- smaller sample size = less likely to get small p

statisticians can calculate power to find out how many participants are needed to minimise chance of a type-II error
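Power can be estimated by simulation: repeat the study many times with a real association present and count how often p < 0.05. The risks (0.20 vs 0.35) and sample sizes here are hypothetical, and the p-value uses a crude normal approximation:

```python
import math
import random

random.seed(2)

RISK_UNEXPOSED, RISK_EXPOSED = 0.20, 0.35  # hypothetical true risks
ALPHA = 0.05

def study_p(n):
    # one simulated study: crude two-sided normal-approximation p-value
    # for the difference between two observed risks (n per group)
    a = sum(random.random() < RISK_EXPOSED for _ in range(n)) / n
    b = sum(random.random() < RISK_UNEXPOSED for _ in range(n)) / n
    pooled = (a + b) / 2
    se = math.sqrt(2 * pooled * (1 - pooled) / n) or 1e-9
    z = abs(a - b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def power(n, sims=2000):
    # proportion of simulated studies that reach p < ALPHA
    return sum(study_p(n) < ALPHA for _ in range(sims)) / sims

# bigger n -> higher power -> fewer type-II errors
print(power(30), power(100))
```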

6
Q

describe the p-value’s relationship to confidence intervals

A

you can see whether a p-value is greater or less than 0.05 with a 95% confidence interval

95% CI included null value?
Yes:
- p>0.05
- not statistically significant
No:
- p<0.05
- statistically significant
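The rule on this card is one comparison. A sketch with a made-up result (risk ratio 1.6, 95% CI 1.1 to 2.3; the null value for a ratio measure is 1):

```python
# Hypothetical study result: risk ratio 1.6, 95% CI 1.1 to 2.3
null_value = 1.0          # null for a ratio measure (use 0 for a difference)
ci_lower, ci_upper = 1.1, 2.3

includes_null = ci_lower <= null_value <= ci_upper
statistically_significant = not includes_null  # equivalent to p < 0.05

print(includes_null, statistically_significant)  # False True
```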

7
Q

what are the three reasons why p-values are problematic (falling out of favour)?

A
  • arbitrary threshold
  • only about H0
  • nothing about importance
8
Q

describe problem of p-values being an arbitrary threshold

A
  • statistically significant threshold is arbitrary and artificial
  • is p=0.06 that different to p=0.04?
  • always useful to report p-values rather than just ‘statistically significant’ or ‘not statistically significant’
  • at a 5% threshold you will still find a statistically significant association when there really isn’t one at least one time in twenty (type-I error) - how do you know your study isn’t the one that’s wrong?
9
Q

describe the problem of p-values only being about H0

A
  • just gives evidence about consistency with the null hypothesis
  • doesn’t say anything about precision (precision is best presented with CIs)
10
Q

describe the problem of p-values saying nothing about importance

A
  • statistical significance is NOT clinical significance (include enough people in the study and you will find a statistically significant result, but that doesn’t tell you whether the effect is clinically important)
  • say nothing about whether results are valid, useful or correct (silent on bias and confounding)
  • absence of a statistically significant association is not evidence of absence of a real association (failing to find an association does not mean there isn’t one - the study may simply have missed it)