Hypothesis Testing Flashcards
In hypothesis testing, what do we always assume is true?
The null hypothesis
What is the null hypothesis?
This states there is no difference between the variables of interest
What value is used to calculate the likelihood or probability that the difference observed happened by chance?
The p value
What does a p-value of 0.02 signify?
That the probability of observing such a difference by chance (if the null hypothesis were true) is only 2 in 100
When is the null hypothesis rejected?
When the p-value is below the significance threshold
What does it mean if the p-value is large/above significance threshold?
You fail to reject your null hypothesis: there is insufficient evidence for a difference, so any observed difference is likely due to chance
What is the commonly used cut-off for p-value? Why is this not a universal figure?
0.05
For studies involving very large numbers of simultaneous tests, such as GWAS, a much lower p-value threshold is required (commonly 5 × 10⁻⁸) to control the false-positive rate
What is a type I error also known as and when does this happen?
This is a false positive and occurs when you reject the null hypothesis even though it is actually true
What is the frequency of having a type I error/false positive?
This is the same as your significance cut-off (α); e.g. with a cut-off of 0.05, 5% of true null hypotheses will be falsely rejected
What is a type II error also known as and when does it occur?
This is a false negative and occurs when you fail to reject the null hypothesis even though it is actually false
What is type II error or false negative dependent on?
Sample size - a larger sample gives more power to detect a true difference, reducing the chance of a false negative
The choice of statistical test used to determine your p-value depends on what three key factors?
- Study design (paired or independent)
- Outcome variable (continuous or categorical)
- Distribution (normal or non-normal)
How is a t-statistic calculated?
For independent data, it is calculated by taking the observed mean difference and dividing this by the standard error of difference between the means
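This calculation can be sketched in Python on invented data; scipy's `ttest_ind` assumes equal variances by default, which matches the pooled standard error used here:

```python
import numpy as np
from scipy import stats

# Illustrative data for two independent groups (values invented)
group1 = np.array([5.1, 4.8, 6.2, 5.5, 5.9])
group2 = np.array([4.2, 4.5, 3.9, 4.8, 4.1])

# Observed difference between the means
mean_diff = group1.mean() - group2.mean()

# Standard error of the difference, using the pooled variance
n1, n2 = len(group1), len(group2)
sp2 = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(sp2 * (1 / n1 + 1 / n2))

# t = observed mean difference / SE of the difference
t_manual = mean_diff / se_diff

# Cross-check against scipy's independent-samples t-test
t_scipy, p = stats.ttest_ind(group1, group2)
```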
What three assumptions does a t-test make?
- Data is continuous
- Data is normally distributed
- Variance in the two groups is equal (Levene's test)
What does levene’s test do and why is it important?
Levene's test assesses whether the variance in two groups is equal, which determines how the t-test results are interpreted:
- If Levene's test gives p > 0.05, we fail to reject its null hypothesis of equal variances and interpret the results relating to ‘equal variances assumed’
- If p ≤ 0.05, we interpret the ‘equal variances not assumed’ results instead
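A minimal sketch of this workflow with scipy, on simulated data:

```python
import numpy as np
from scipy import stats

# Simulated data: two groups with similar spread (illustrative only)
rng = np.random.default_rng(0)
a = rng.normal(10, 1, 50)
b = rng.normal(12, 1, 50)

# Levene's test: null hypothesis = equal variances
stat, p = stats.levene(a, b)

# If p > 0.05, use the 'equal variances assumed' t-test
# (scipy's equal_var=True); otherwise Welch's version (equal_var=False)
equal_var = p > 0.05
t, p_t = stats.ttest_ind(a, b, equal_var=equal_var)
```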
What are your options if the assumptions for a parametric test are untrue?
- Transform the data
- Check the normality again. If ok - use a parametric test
- If not ok, use a non-parametric test
What transformations can you attempt if your data is:
- moderately positively skewed
- strongly positively skewed
- weakly positively skewed
- log transform (log x)
- reciprocal (1/x)
- square root (√x)
What transformation method would you use if your data was:
- moderately negatively skewed
- strongly negatively skewed
- unequal variation
- square (x²)
- cube (x³)
- log/reciprocal/square root
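The transformations from the two cards above can be sketched with NumPy. The data here are simulated positive values (log and reciprocal transforms require strictly positive data):

```python
import numpy as np
from scipy import stats

# Simulated positively skewed data (lognormal, so log(x) is normal)
x = np.random.default_rng(1).lognormal(0, 1, 500)

log_x = np.log(x)      # for moderate positive skew
recip_x = 1 / x        # for strong positive skew
sqrt_x = np.sqrt(x)    # for weak positive skew
sq_x = x ** 2          # square/cube would be used on negatively skewed data

# The log transform should bring the skewness close to zero here
print(stats.skew(x), stats.skew(log_x))
```

After transforming, re-check normality (e.g. with a histogram or normality test) before choosing between a parametric and non-parametric test.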
What are the advantages of a non-parametric test? What are the disadvantages?
Advantages:
- makes no assumption about the underlying distribution of the data
Disadvantages:
- less powerful than the parametric equivalent
- difficult to obtain confidence intervals
What is the non-parametric equivalent of a t-test?
Wilcoxon rank sum test or Mann-Whitney U test
Describe how a Wilcoxon rank sum test works
- two independent groups, group 1 and group 2, where group 1 is the smaller group
- rank all observations in ascending order
- sum the ranks for group 1 = test statistic T
- look up T in a Wilcoxon rank sum table of critical values to get the p-value
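The ranking steps above can be sketched with scipy's `rankdata`; the critical-value table lookup is replaced by `mannwhitneyu`, whose U statistic relates to T by U = T − n1(n1+1)/2. The data values are invented:

```python
import numpy as np
from scipy import stats

group1 = np.array([12, 15, 11, 18])              # the smaller group
group2 = np.array([20, 22, 19, 25, 17, 21])

# Rank all observations together, in ascending order
ranks = stats.rankdata(np.concatenate([group1, group2]))

# Sum of ranks for group 1 = test statistic T
T = ranks[:len(group1)].sum()

# Equivalent Mann-Whitney U statistic: U = T - n1(n1+1)/2
n1 = len(group1)
U_manual = T - n1 * (n1 + 1) / 2

# scipy's test gives the same U, plus a p-value
U, p = stats.mannwhitneyu(group1, group2, alternative='two-sided')
```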
What non-parametric test is used for skewed data with more than two independent exposure groups? What is its parametric equivalent?
Kruskal-Wallis test
Parametric equivalent = ANOVA
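A minimal sketch comparing the two on three invented groups:

```python
from scipy import stats

# Three independent groups (values invented for illustration)
g1 = [2.1, 3.4, 2.8, 3.0]
g2 = [4.5, 5.1, 4.8, 5.3]
g3 = [7.2, 6.9, 7.5, 7.0]

# Non-parametric: Kruskal-Wallis H test (rank-based)
h, p_kw = stats.kruskal(g1, g2, g3)

# Parametric equivalent: one-way ANOVA
f, p_anova = stats.f_oneway(g1, g2, g3)
```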
What test is used to compare two binary categorical variables and obtain a p-value?
Chi squared test
What does the p-value of a chi-squared test tell us?
How likely the differences between our variables would have occurred by chance if there was truly no association
How do you calculate the chi-square test statistic?
This involves working out how close the observed values in your table are to the values expected if there was no true association
You first have to work out the expected numbers for each cell of your table. General formula: (row total x column total)/overall total
The next step is to then calculate the chi-square statistic for each cell then total these together. General formula: (O-E)squared/E
How do you interpret your chi-square statistic?
The larger the chi-square value, the less consistent the data are with the null hypothesis
Usually use a stats package to obtain a p-value, but you can use statistical tables based on degrees of freedom
How do you calculate your degrees of freedom?
Degrees of freedom = (Rows-1) x (Columns-1)
E.g. For a 2x2 table it would be (2-1) x (2-1) = 1 d.f
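The whole calculation can be sketched on an invented 2x2 table and checked against scipy. `correction=False` matches the plain (O−E)²/E formula, since scipy applies the Yates correction to 2x2 tables by default:

```python
import numpy as np
from scipy import stats

# Invented 2x2 table, e.g. rows = exposed/unexposed, columns = disease/no disease
observed = np.array([[30, 70],
                     [10, 90]])

# Expected count per cell: (row total x column total) / overall total
row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
expected = row_tot * col_tot / observed.sum()

# Chi-square statistic: sum of (O - E)^2 / E over all cells
chi2_manual = ((observed - expected) ** 2 / expected).sum()

# Degrees of freedom: (Rows - 1) x (Columns - 1)
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)

# Cross-check with scipy
chi2, p, dof_scipy, exp_scipy = stats.chi2_contingency(observed, correction=False)
```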
Why do we need both the odds ratio AND the p-value?
The OR tells us the magnitude of an association whilst the p-value tells us the significance of this
What are the assumptions of chi-squared?
- Each subject contributes data to only one cell (i.e. you can’t be a smoker AND a non-smoker)
- The expected count in each cell should be at least 5 (SPSS will give you a warning)
If your expected counts are not all at least 5 in your table, should you use a chi-squared test? If not, what should you do instead?
- Yes - but you must then use the Yates continuity correction
- No - you should use Fisher’s exact test instead
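A sketch of Fisher's exact test on an invented 2x2 table with small expected counts:

```python
from scipy import stats

# Invented 2x2 table; several expected counts fall below 5,
# so Fisher's exact test is preferred over chi-squared
table = [[3, 7],
         [9, 1]]

# Returns the odds ratio and an exact p-value
odds_ratio, p = stats.fisher_exact(table)
```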
When using chi-squared test for tables bigger than 2x2, what could you do if you don’t meet the assumptions?
Combine the rows or columns with small numbers, if biologically plausible
What is chi-squared test for trend?
This is a special test for when the exposure variable is ordered (not nominal) and the outcome is binary
For example, looking at disease (yes/no) across ordered age groups
How do you visualise correlation?
Scatter diagram
On a scatter diagram, on which axis is the outcome plotted?
Vertical/y-axis
What does correlation measure? What do the correlation coefficients (r) mean?
Measures the closeness or degree of association between two continuous variables
+1 = perfect positive association
-1 = perfect negative association
0 = no association
What are the two main types of correlation coefficient and when would you use each of them?
Pearson’s correlation coefficient and Spearman’s correlation coefficient
Pearson’s is used when the variables are normally distributed. If they aren’t, you can transform them and then use Pearson’s, or you can use Spearman’s instead
What does correlation NOT take into account?
The gradient/steepness of slope
What does the r-squared value do? Please give an example
This is the proportion of the variance of the outcome variable which is explained by the exposure variable
E.g. An r-squared of 0.64 for correlation between BP and stress means 64% of the variation in BP is explained by stress
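Both coefficients and r-squared can be sketched with scipy on simulated data (the variable names and values here are invented for illustration):

```python
import numpy as np
from scipy import stats

# Simulate two related continuous variables: BP rising with stress
rng = np.random.default_rng(2)
stress = rng.normal(5, 1, 100)
bp = 100 + 4 * stress + rng.normal(0, 2, 100)

# Pearson's r: for normally distributed variables
r, p_pearson = stats.pearsonr(stress, bp)

# Spearman's rho: rank-based, no normality assumption
rho, p_spearman = stats.spearmanr(stress, bp)

# r-squared: proportion of BP variance explained by stress
r_squared = r ** 2
```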
Does correlation equal causation?
No. Possibilities are:
- X influences or causes Y
- Y influences X
- Both X and Y are influenced by one or more other variables (confounders)
Why should you interpret correlation results from large cohorts with caution?
Because you can get a significant result for a very weak correlation
What is the difference between correlation and regression?
Correlation is used to assess if two variables are related and how closely
Regression is used to describe/model the relationship or make predictions
What does linear regression do?
States how much y (outcome) increases/decreases as X (exposure) increases
Estimates a best-fit straight line through the data
What is the equation used in linear regression?
Y = a + bX
a = the intercept, i.e. the value of Y when X = 0
b = the slope of the line, which tells us on average how much Y increases/decreases for each unit increase in X. It is an estimate of the magnitude of effect
What do the values of regression coefficient ‘b’ mean?
Positive b = outcome increases as exposure increases
Negative b = outcome decreases as exposure increases
b = 0: outcome and exposure are not related
Consider two variables, weight and systolic BP. Your values of a and b are 98.5 and 0.43 (95% CI 0.34-0.51), respectively, with a very small p-value. How do you interpret this?
b estimates that for every 1 kg increase in weight, BP increases on average by 0.43 mmHg
You are 95% confident this increase is between 0.34 and 0.51 mmHg
This is highly statistically significant
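Output like this can be reproduced with scipy's `linregress` on simulated data. The intercept (98.5) and slope (0.43) used to generate the data are taken from the example above; everything else (sample size, ranges, noise) is invented:

```python
import numpy as np
from scipy import stats

# Simulate weight (kg) and systolic BP (mmHg) with a linear relationship
rng = np.random.default_rng(3)
weight = rng.uniform(50, 110, 200)
bp = 98.5 + 0.43 * weight + rng.normal(0, 5, 200)

# Fit the best-fit straight line Y = a + bX
res = stats.linregress(weight, bp)

# res.intercept estimates a, res.slope estimates b,
# and res.pvalue tests the null hypothesis b = 0
```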
When is multiple linear regression used?
When you want to include two or more exposure variables
For example, to look at age and weight in reference to BP and obtain an age-adjusted regression coefficient