Week 7: Inferential Stats Part 1 and 2 Flashcards
What is a hypothesis?
Theories underlying our interventions
What are the two types of hypotheses?
Null hypothesis: assumes that no difference exists between the samples.
Alternative hypothesis: if the null hypothesis is rejected, then a difference does exist.
What are the 5 stages in hypothesis testing?
1) state the hypothesis being tested eg there will be no difference in…
2) choose the appropriate statistics to test the null hypothesis
3) define the degree of risk you are willing to accept of rejecting the null hypothesis when it is actually true, ie concluding a difference exists when none does.
- in other words, how confident are you that any observed difference is a true difference (ie due to the manipulation) and not natural random variation?
- a level of 0.05 is typically used
- also called the alpha level “a”
4) calculate the statistics from the sample observations.
- calculate the P value: the probability that the observed, or more extreme results, would be found if no difference actually existed.
- it is a measure of probability, so it ranges between 0 and 1
- refer to pg 8 of notes
5) decide whether to accept or reject null hypothesis based on statistics and degree of risk established
- compare the P value to the a level
- P tells us how likely the actual results would be due to a chance occurrence
- a tells us the highest probability of a chance difference we are willing to accept before calling the result real
- if P < a: reject the null hypothesis; if P > a: accept the null hypothesis
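The five stages can be seen end to end in a short Python sketch, assuming made-up group scores and an independent-samples t-test via scipy; none of the numbers come from the notes.

```python
# A minimal sketch of the 5-stage procedure using an independent-samples t-test.
# The group data here are invented purely for illustration.
from scipy import stats

group_a = [23, 25, 21, 30, 28, 26, 24, 27]   # e.g. treatment group scores
group_b = [20, 22, 19, 24, 21, 23, 18, 22]   # e.g. control group scores

alpha = 0.05                                  # stage 3: pre-set level of risk

# Stage 4: calculate the statistic and its P value
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Stage 5: compare P to alpha and decide
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```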
Meaning of each conclusion.
What if you reject the null hypothesis, but there is actually NO difference between the samples? Type 1 error.
What if you accept the null hypothesis, but there actually IS a difference in the population? Type 2 error.
Alpha=a=
Beta = B
Alpha=a= the probability of committing a type 1 error (usually 0.05)
- reject the null hypothesis when it is true
- we say there is a difference when none exists
Beta = B = probability of committing a type 2 error
- failing to reject null hypothesis when it is false
- we say there is no difference when a difference exists
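A rough simulation can make alpha and beta concrete. The sketch below (an assumed setup with normally distributed scores and scipy's t-test, not taken from the notes) repeatedly compares two samples and counts how often each type of error occurs.

```python
# Simulating the meaning of alpha and beta (made-up population values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, n_trials = 0.05, 30, 5000

# Type 1 error rate: both samples come from the SAME population (null is true).
false_positives = 0
for _ in range(n_trials):
    a = rng.normal(50, 10, n)
    b = rng.normal(50, 10, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1
print("Type 1 error rate ~", false_positives / n_trials)   # close to alpha (0.05)

# Type 2 error rate: the populations really DO differ (null is false).
false_negatives = 0
for _ in range(n_trials):
    a = rng.normal(50, 10, n)
    b = rng.normal(55, 10, n)          # true difference of 5
    if stats.ttest_ind(a, b).pvalue >= alpha:
        false_negatives += 1
print("Type 2 error rate (beta) ~", false_negatives / n_trials)
```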
Cautions about the P value
P values are probabilities
-do not describe the magnitude of the difference
-clinical significance vs. statistical significance
The choice of 0.05 is arbitrary. Why 5%? A result is called statistically significant simply because P falls below that cut-off.
The lower the P value, the lower the chance that the result is a chance finding, but it still tells you nothing about the magnitude of the difference.
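To see why a P value says nothing about magnitude, the sketch below (made-up summary statistics, using scipy's ttest_ind_from_stats) keeps the mean difference and SD fixed and only changes the sample size.

```python
# The same 1-point mean difference with the same SD is non-significant with
# 20 per group but highly significant with 5000 per group: P reflects sample
# size as much as it reflects the size of the effect.
from scipy import stats

for n in (20, 5000):
    result = stats.ttest_ind_from_stats(mean1=51, std1=10, nobs1=n,
                                        mean2=50, std2=10, nobs2=n)
    print(f"n per group = {n:5d}  mean difference = 1.0  p = {result.pvalue:.4f}")
```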
What are the 4 pillars of power? Ie what affects power: there are 4 things that affect the power of a study, ie its ability to detect a difference if a true difference does exist (see the sketch after this list).
- Alpha level: the higher the alpha level, the higher the power. A higher alpha means we accept a greater risk of a type 1 error, and the more of that risk we accept, the more likely we are to detect a difference.
- Effect size: the difference in the effect of treatment between groups. The more powerful the treatment, the larger the effect size, so the easier it is to observe a difference between the 2 groups.
- Variance: the more variable the data, the more difficult it is to identify a difference, so the less power you have. Increasing the sample size reduces sampling variability (the standard error).
- Sample size: the larger the sample, the greater the statistical power. Smaller samples are less likely to represent the population.
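As a rough illustration of the four pillars, the sketch below assumes statsmodels is available and uses its power calculator for an independent-samples t-test; the effect sizes, sample sizes, and alpha values are arbitrary choices, not from the notes.

```python
# How the four "pillars" move power up or down for a two-group t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Baseline: medium effect size (Cohen's d = 0.5), 30 per group, alpha = 0.05
base          = analysis.power(effect_size=0.5, nobs1=30,  alpha=0.05)
higher_alpha  = analysis.power(effect_size=0.5, nobs1=30,  alpha=0.10)  # more power
bigger_effect = analysis.power(effect_size=0.8, nobs1=30,  alpha=0.05)  # more power
bigger_sample = analysis.power(effect_size=0.5, nobs1=100, alpha=0.05)  # more power

print(base, higher_alpha, bigger_effect, bigger_sample)

# Variance enters through the effect size: d = (mean difference) / SD,
# so more variable data gives a smaller d and therefore less power.
```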
Power is increased with: ?
Increase in alpha: since alpha is usually fixed at 0.05, it rarely impacts power
Increase in effect size: bigger difference in sample means
Decrease in variance: variability in the frequency distribution.
-as data becomes less variable, power is increased
Increase in sample size
-the larger the sample, the greater the statistical power
What is statistical significance?
What is clinical importance?
Statistical significance: when the result is unlikely to have occurred by chance ie is a true effect
Clinical importance: Is the result meaningful to the patient?
-The minimum clinically important difference (MCID) is the minimum level of change of an outcome measure that is considered to be clinically relevant (important to patient)
Eg what is the change in lumbar flexion pre and post?
Change = 27.5 degrees, p = 0.02
P < a, so the result is statistically significant; whether a 27.5 degree change matters to the patient is judged against the MCID.
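A minimal sketch of checking both questions for the lumbar flexion example above; the MCID of 10 degrees is a hypothetical value chosen only for illustration, not from the notes.

```python
# Statistical significance vs clinical importance for the lumbar flexion example.
alpha = 0.05
mcid = 10.0            # hypothetical minimum clinically important difference (degrees)

change = 27.5          # observed pre-post change in lumbar flexion (degrees)
p_value = 0.02

statistically_significant = p_value < alpha
clinically_important = change >= mcid

print("Statistically significant:", statistically_significant)   # True: 0.02 < 0.05
print("Clinically important:", clinically_important)             # True: 27.5 >= 10
```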
P-value: interpretation
How do we know when to accept or reject the null hypothesis?
Pre-determine alpha (usually 0.05)
- if P is less than a: reject null hypothesis
- if P is greater than alpha: accept the null hypothesis
Confidence intervals:
What is the point estimate (eg the mean difference)?
A 95% confidence interval defines the range that includes the 'true' value 95% of the time
-most commonly used to define group differences
Eg if
Group A pain score was: 8/10
Group B pain score was: 2/10
Difference score: 6
95% confidence interval = (4, 8): if we repeated this study over and over, we can be 95% certain that the true difference lies between 4 and 8.
The point estimate (eg the sample mean or mean difference) is a single value that represents the best estimate of the population value
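A minimal sketch, assuming invented pain-score data and a pooled-variance t interval, of how a point estimate and its 95% confidence interval are computed; because the data are made up, the interval will not exactly match the (4, 8) in the card above.

```python
# Point estimate and 95% CI for the difference between two group means.
import numpy as np
from scipy import stats

group_a = np.array([8, 7, 9, 8, 8, 7, 9, 8])   # pain scores, group A
group_b = np.array([2, 3, 1, 2, 3, 2, 1, 2])   # pain scores, group B

diff = group_a.mean() - group_b.mean()          # point estimate of the difference

# Standard error of the difference (pooled, equal-variance assumption)
n_a, n_b = len(group_a), len(group_b)
sp2 = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
se = np.sqrt(sp2 * (1 / n_a + 1 / n_b))

t_crit = stats.t.ppf(0.975, df=n_a + n_b - 2)   # two-sided 95%
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"Point estimate = {diff:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```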
Confidence intervals:
The degree of precision is determined by the width of the confidence interval. Width is influenced by?
- sample variance
- sample size
- alpha level
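A rough sketch of how these three factors move the width of a t-based confidence interval for a mean; the SDs, sample sizes, and alpha values below are arbitrary illustrations.

```python
# Width of a CI for a mean: 2 * t_critical * (SD / sqrt(n)),
# so it grows with variance and with a smaller alpha, and shrinks with sample size.
import math
from scipy import stats

def ci_width(sd, n, alpha):
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return 2 * t_crit * sd / math.sqrt(n)

print(ci_width(sd=10, n=30, alpha=0.05))    # baseline
print(ci_width(sd=20, n=30, alpha=0.05))    # more variance  -> wider
print(ci_width(sd=10, n=120, alpha=0.05))   # bigger sample  -> narrower
print(ci_width(sd=10, n=30, alpha=0.01))    # smaller alpha  -> wider (99% CI)
```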
What is the difference between a negative trial and a positive trial?
Negative trial:
-when there is no significant treatment effect
-P > 0.05 and the 95% confidence interval includes 0
-fail to reject the null hypothesis
Positive trial:
-when there is a significant treatment effect
-P < 0.05 and the 95% confidence interval doesn't include 0
-reject the null hypothesis
What is the difference between definitive and not definitive? Give examples
For a positive trial, examine the lower bound of the confidence interval. If the lower bound is still clinically meaningful, the positive result may be considered "definitive"
-if the lower bound isn't clinically meaningful, the positive result should not be considered "definitive": possible type 1 error
For a negative trial, examine the upper bound of the CI
-if the upper bound would still be clinically meaningful, the negative result shouldn't be considered "definitive": possible type 2 error
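These decision rules can be collected into a small hypothetical helper; the function name, its inputs, and the example numbers are all assumptions for illustration, not from the notes.

```python
# Hypothetical helper: classify a trial result from its P value, 95% CI, and the MCID.
def interpret_trial(p_value, ci_lower, ci_upper, mcid, alpha=0.05):
    # Positive vs negative: significant P and a CI that excludes 0
    positive = p_value < alpha and not (ci_lower <= 0 <= ci_upper)
    if positive:
        definitive = ci_lower >= mcid          # lower bound still clinically meaningful
        label = "positive"
    else:
        definitive = ci_upper < mcid           # upper bound rules out a meaningful effect
        label = "negative"
    return label, "definitive" if definitive else "not definitive"

print(interpret_trial(p_value=0.02, ci_lower=12, ci_upper=30, mcid=10))  # positive, definitive
print(interpret_trial(p_value=0.02, ci_lower=2,  ci_upper=30, mcid=10))  # positive, not definitive
print(interpret_trial(p_value=0.40, ci_lower=-3, ci_upper=15, mcid=10))  # negative, not definitive
print(interpret_trial(p_value=0.40, ci_lower=-3, ci_upper=5,  mcid=10))  # negative, definitive
```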