Decision-making And Statistical Inference Flashcards
Basic steps for testing a research hypothesis with statistical analysis
1) determine the null and alternative hypotheses
2) collect data and summarize evidence with a summary statistic and CI
3) determine the p-value
4) make a decision on which hypothesis is more likely
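A minimal sketch of these four steps using a two-sample t-test from scipy; the drug/placebo data, group sizes, and a = 0.05 are illustrative assumptions, not values from a real study:

```python
import numpy as np
from scipy import stats

# 1) Hypotheses: Ho: mean(drug) == mean(placebo); Ha: the means differ (two-sided)
rng = np.random.default_rng(0)
placebo = rng.normal(loc=10.0, scale=2.0, size=40)   # simulated outcomes
drug = rng.normal(loc=11.0, scale=2.0, size=40)

# 2) Summary statistic (mean difference) and a 95% CI (normal approximation)
diff = drug.mean() - placebo.mean()
se = np.sqrt(drug.var(ddof=1) / drug.size + placebo.var(ddof=1) / placebo.size)
ci = (diff - 1.96 * se, diff + 1.96 * se)

# 3) p-value from the t-test
t_stat, p_value = stats.ttest_ind(drug, placebo)

# 4) Decision against the significance level a
alpha = 0.05
decision = "reject Ho" if p_value < alpha else "do not reject Ho"
print(f"diff={diff:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), p={p_value:.3f} -> {decision}")
```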
Null vs alternative hypothesis
Null “Ho” = no change/no difference
- this hypothesis is presumed true until proven otherwise
- the goal of the study is to prove this wrong
Alternative “Ha” = change/difference present
- this hypothesis is not presumed until proven otherwise
- the goal of the study is to prove this right
False positive (a) vs false negative (b)
False positive = rejection of a true null hypothesis
- this is deemed positive since rejecting the null is a “positive” outcome of a study
False negative = not rejecting a false null hypothesis
a (alpha) = the probability of a false positive
b (beta) = the probability of a false negative
One-sided vs two-sided alternative hypothesis
One-sided = the alternative specifies a direction (e.g., the drug is better than placebo)
Two-sided = the alternative allows either direction (e.g., the drug is either better or worse than the comparator)
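A small sketch of how the choice of alternative changes the test, assuming scipy's ttest_ind with its alternative= argument; the drug/placebo arrays are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
placebo = rng.normal(10.0, 2.0, 50)
drug = rng.normal(10.8, 2.0, 50)

# Two-sided Ha: the means differ in either direction
_, p_two = stats.ttest_ind(drug, placebo, alternative="two-sided")

# One-sided Ha: the drug mean is greater than the placebo mean
_, p_one = stats.ttest_ind(drug, placebo, alternative="greater")

print(f"two-sided p={p_two:.3f}, one-sided p={p_one:.3f}")
```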
What is a summary statistic
A statistic that summarizes the collected data and is used to test the hypotheses
Ex: mean, median, mode, standard deviation
These are also referred to as point estimates
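Computing these point estimates for a small made-up sample, as a sketch using numpy and the standard library:

```python
import numpy as np
from collections import Counter

data = np.array([4, 7, 7, 8, 9, 10, 12])
print("mean:", data.mean())
print("median:", np.median(data))
print("mode:", Counter(data.tolist()).most_common(1)[0][0])
print("sample standard deviation:", data.std(ddof=1))
```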
Are confidence intervals or summary statistics better?
Confidence intervals are better
- they provide more information than point estimates
- provide a probabilistic range for the summary statistic rather than a single number
- confidence intervals shrink as the sample size grows
example: a 95% CI can be worded as “if the study were repeated 100 times, about 95 of the resulting intervals would contain the true population statistic”
NOTE: a CI does NOT mean there is a definitive 95% chance the population statistic is within this particular interval, or that 95% of the data values fall within the CI
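A simulation sketch of the repeated-study interpretation: roughly 95 of 100 intervals built this way should contain the true population mean (the true mean, spread, and sample size below are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_mean, n, covered = 50.0, 30, 0
for _ in range(100):
    sample = rng.normal(true_mean, 5.0, n)
    se = sample.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)       # critical value for a 95% CI
    lo, hi = sample.mean() - t_crit * se, sample.mean() + t_crit * se
    covered += lo <= true_mean <= hi
print(f"{covered} of 100 intervals contained the true mean")
```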
P-value
The probability of observing a test statistic at least as extreme as the one seen, assuming the null hypothesis is true
- it is NOT the probability that the null hypothesis is true
If the P value is < a
- we reject the null hypothesis and say the result is statistically significant
If the P value is > a
- we don't reject the null hypothesis and say the result is not statistically significant
a is usually 0.05
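A sketch of what “at least as extreme” means: the two-sided p-value is the tail probability of the test statistic under Ho (the t statistic and degrees of freedom below are hypothetical):

```python
from scipy import stats

t_observed, df = 2.1, 38                              # hypothetical t statistic and df
p_two_sided = 2 * stats.t.sf(abs(t_observed), df)     # both tails beyond |t|
alpha = 0.05
print(f"p={p_two_sided:.3f} ->", "reject Ho" if p_two_sided < alpha else "do not reject Ho")
```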
Issues with P-values
Only valid for a single comparison (since it is compared to “a”)
- with many comparisons, the chance of at least one false positive grows well beyond “a”
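A sketch of why multiple comparisons are a problem and one common fix, a Bonferroni correction via statsmodels' multipletests; the 20 p-values are simulated from true nulls:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
pvals = rng.uniform(size=20)                 # 20 p-values when every null is true
family_wise = 1 - (1 - 0.05) ** 20           # chance of >= 1 false positive at a = 0.05
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(f"chance of at least one false positive across 20 tests: {family_wise:.2f}")
print("rejections after Bonferroni correction:", reject.sum())
```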
Equivalence testing
Used to determine whether two groups are similar within a boundary (“d”)
- used when the goal is to show no meaningful difference (a standard test can never confirm Ho)
- the null and alternative hypotheses are switched: Ho = the difference is outside ±d, Ha = the difference is within ±d
Essentially showing that what would normally be the null hypothesis is true.
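A sketch of equivalence testing with statsmodels' two one-sided tests (TOST); the two groups and the boundary d = 1.0 are illustrative assumptions:

```python
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(4)
group_a = rng.normal(10.0, 2.0, 60)
group_b = rng.normal(10.2, 2.0, 60)

d = 1.0                                           # equivalence boundary
p_overall, lower_test, upper_test = ttost_ind(group_a, group_b, -d, d)
print(f"TOST p={p_overall:.3f}")                  # small p -> difference lies within +/- d
```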
Statistical vs clinical significance
Statistical significance says nothing about whether a result is large enough to matter in practice (clinical relevance)
- this is why p values are starting to fall out of favor
P values vs confidence interval
P value = a conditional probability
- states how likely a result at least as extreme as the observed one would be, assuming the null hypothesis is true
P < a = the observed result would be unlikely if the null hypothesis were true (reject Ho)
- very dependent on sample size
Confidence interval = provides a magnitude of uncertainty around the summary statistic
- has a 1:1 relationship w/ the p-value (a 95% CI corresponds to a test at a = 0.05)
- IF IT CONTAINS THE HYPOTHESIZED VALUE FROM Ho, THE RESULT IS NOT SIGNIFICANT
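A sketch of the 1:1 relationship for a pooled two-sample t-test: the 95% CI for the mean difference excludes 0 exactly when the two-sided p-value is below 0.05 (the data are simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
a_grp = rng.normal(10.0, 2.0, 40)
b_grp = rng.normal(11.2, 2.0, 40)

t_stat, p_value = stats.ttest_ind(a_grp, b_grp)   # pooled-variance t-test
n1, n2 = a_grp.size, b_grp.size
sp2 = ((n1 - 1) * a_grp.var(ddof=1) + (n2 - 1) * b_grp.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
diff = a_grp.mean() - b_grp.mean()
ci = (diff - t_crit * se, diff + t_crit * se)

contains_ho_value = ci[0] <= 0 <= ci[1]           # Ho value here is a difference of 0
print(f"p={p_value:.3f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), CI contains 0: {contains_ho_value}")
```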
Non-inferiority testing
Tests whether a new drug is not unacceptably worse than an existing drug
- used heavily in pharmacology
Null hypothesis “Ho” = the new drug is worse than the comparator by at least “M”
Alternative hypothesis “Ha” = the new drug is worse than the comparator by less than “M” (non-inferior)
M = non-inferiority margin
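A sketch of a non-inferiority test (assuming a higher outcome is better) using statsmodels' one-sided t-test shifted by the margin; the data and M = 1.0 are illustrative:

```python
import numpy as np
from statsmodels.stats.weightstats import ttest_ind

rng = np.random.default_rng(6)
standard = rng.normal(10.0, 2.0, 80)
new_drug = rng.normal(9.8, 2.0, 80)

M = 1.0  # non-inferiority margin
# Ho: mean(new_drug) - mean(standard) <= -M   vs   Ha: difference > -M (non-inferior)
t_stat, p_value, df = ttest_ind(new_drug, standard, alternative="larger", value=-M)
print(f"p={p_value:.3f} ->", "non-inferior (reject Ho)" if p_value < 0.05 else "cannot conclude non-inferiority")
```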
Two types of errors
Type 1: False positive (a) = reject a true null
Type 2: False negative (b) = don't reject a false null
Significance level (a)
Decision threshold upon which to state a result is significant
It is the probability of a type 1 error
Confidence level
Probability of correctly not rejecting the null when it is true (1 - a)
- it is NOT the probability that the study's conclusion is true or free of bias
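A simulation sketch tying a and the confidence level together: when the null is true, tests at a = 0.05 reject about 5% of the time, so about 95% (1 - a) of the time we correctly do not reject (the group settings below are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, rejections, trials = 0.05, 0, 2000
for _ in range(trials):
    a_grp = rng.normal(10.0, 2.0, 30)
    b_grp = rng.normal(10.0, 2.0, 30)     # same mean, so Ho is true
    _, p = stats.ttest_ind(a_grp, b_grp)
    rejections += p < alpha
print(f"false positive rate: {rejections / trials:.3f} (expected ~{alpha})")
```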