Probability/Statistical Significance Flashcards
What are the two ways a study result can be in error?
1. Caused by chance = random error
2. Not caused by chance = bias, or systematic error
What deals with random error in studies?
Statistical inference
If a study has a random error, is it likely to happen again if/when the study is repeated?
NO
An error that is inherent to the study method being used and results in a predictable and repeatable error for each observation is labeled a _____ error. What is it due to?
Systematic error due to bias
T/F: If you repeat a study that had a systematic error, it is likely to happen again
TRUE
These errors are not caused by chance, and there is no formal statistical method to deal with them.
What tests will estimate the likelihood that a study result was caused by chance?
Tests of statistical inference
**A study result is called "statistically significant" if it is unlikely to have been caused by chance
If a study is statistically significant, is it clinically significant?
Not necessarily
Those terms have two different meanings
*Even measures of association too small to matter clinically can be statistically significant
What is a chance occurrence?
Something that happens unpredictably without discernible human intention or with no observable cause: caused by chance or random variation
What is random variation?
There is error in every measurement. If we measure something over and over again, we will get slightly different measurements each time AND a few measurements may be extreme
What is statistical inference?
Tells us: if we measure something only once, how likely it is that our result was caused by chance (random variation)
What two methods are used for estimating how much random variation there is in our study and whether our result was likely to have been caused by chance?
1. Confidence intervals
2. P-values
_______ estimates how much random variation there is in our measurement
Confidence intervals
- the range of values within which the true value of our measurement is likely to lie
_____ are used to estimate whether the measure was likely to have been caused by chance or not
P-values
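Not from the cards themselves, but a minimal Python sketch may make the p-value idea concrete: given a standard-normal test statistic z, the two-sided p-value is the probability of seeing a result at least that extreme by chance alone. The function name and the z = 1.96 example are illustrative assumptions, not part of the source.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z.

    Uses the normal CDF, Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
    """
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# z = 1.96 corresponds to p ~= 0.05, the conventional significance cutoff
print(round(two_sided_p(1.96), 3))
```

A result with p < 0.05 would be called statistically significant under the usual convention; the cutoff itself is arbitrary.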
Will small sample sizes have a large or small 95% confidence interval (CI)?
What about large sample sizes?
The larger the sample size, the smaller the confidence interval will be = more precise
- small samples have large CIs
- Large samples have small CIs
How do you interpret this statement?
“prevalence of disease was 8% (95% CI: 4%-12%)”
The estimate of the prevalence from the study was 8%, but we are 95% confident that the true prevalence lies somewhere between 4% and 12%
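The two cards above can be sketched in Python. This is a hypothetical worked example (the counts 14/175 are assumptions chosen to reproduce the 8% prevalence with a roughly 4%-12% CI), using the standard normal-approximation (Wald) formula for a proportion's 95% CI:

```python
import math

def prop_ci(x, n, z=1.96):
    """95% CI for a proportion via the normal (Wald) approximation."""
    p = x / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Hypothetical study: 14 cases among 175 subjects -> prevalence 8%
p, lo, hi = prop_ci(14, 175)
print(f"prevalence {p:.0%} (95% CI: {lo:.0%}-{hi:.0%})")
# → prevalence 8% (95% CI: 4%-12%)

# Quadrupling the sample size (same 8% prevalence) halves the CI width:
# larger sample -> smaller CI -> more precise estimate
p2, lo2, hi2 = prop_ci(56, 700)
```

Note how the same point estimate becomes more precise purely from the larger n, matching the sample-size rule on the earlier card.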
T/F: If the 95% CI for the odds ratio (OR) does NOT include one, the OR is statistically significant
TRUE
Ex: The odds ratio was 3 (95% CI: 0.5 - 6)
**Since this CI includes the value ONE, the OR is NOT statistically significant
How do you interpret 95% confidence intervals (95% CI) for odds ratios (OR)?
- OR greater than one, 95% CI does NOT include one : Positive association; statistically significant
- OR greater than one, 95% CI includes one : NO association, NOT statistically significant
- OR less than one, 95% CI does NOT include one : Negative association, statistically significant
- OR less than one, 95% CI includes one : No association, NOT statistically significant
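The four interpretation rules above reduce to one check: does the 95% CI include the null value of one? A minimal Python sketch (function name and wording are my own; the same logic applies to relative risks):

```python
def interpret_ratio(estimate, ci_low, ci_high):
    """Classify an OR (or RR) by whether its 95% CI includes 1, the null value."""
    if ci_low <= 1 <= ci_high:
        return "no association (not statistically significant)"
    if estimate > 1:
        return "positive association (statistically significant)"
    return "negative association (statistically significant)"

print(interpret_ratio(3, 0.5, 6))      # the OR = 3 example: CI spans 1
print(interpret_ratio(3, 1.5, 6))      # CI entirely above 1
print(interpret_ratio(0.4, 0.2, 0.8))  # CI entirely below 1
```

The first call reproduces the earlier OR = 3 (95% CI: 0.5 - 6) example: despite the estimate of 3, the result is not statistically significant because the interval includes one.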
If the 95% CI for the relative risk (RR) does NOT include one, the RR (is / is not) statistically significant
IS
*Remember, when the RR = one, there is no association between the two groups
How do you interpret a RR greater than one, combined with a 95% CI that does NOT include one?
Positive association
Statistically significant
How do you interpret a RR less than one, combined with a 95% CI that includes one?
No association
Not statistically significant