Statistics Flashcards
Pearson coefficient
Measures the strength and direction of the linear relationship between two continuous variables - ie linear correlation
r = 0: no linear relationship
0 < r <= 1: positive linear relationship; -1 <= r < 0: negative linear relationship
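As a quick sketch of what r measures, it can be computed by hand as the covariance divided by the product of the standard deviations (plain Python; the data values are made up for illustration):

```python
def pearson_r(x, y):
    # r = covariance(x, y) / (sd(x) * sd(y))
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly linear -> 1.0
```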
Kappa coefficient
Cohen’s kappa coefficient is a statistic that is used to measure inter-rater reliability for qualitative items. It is generally thought to be a more robust measure than simple percent agreement calculation, as κ takes into account the possibility of the agreement occurring by chance.
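A minimal sketch of the kappa calculation from a 2x2 agreement table, using made-up rating counts: kappa = (observed agreement - chance agreement) / (1 - chance agreement).

```python
def cohens_kappa(table):
    # table[i][j] = items rater 1 put in category i and rater 2 put in category j
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(len(table))) / n  # observed agreement
    row = [sum(r) / n for r in table]                      # rater 1 marginals
    col = [sum(c) / n for c in zip(*table)]                # rater 2 marginals
    p_e = sum(r * c for r, c in zip(row, col))             # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# 50 items, raters agree on 20 + 15 of them
print(cohens_kappa([[20, 5], [10, 15]]))  # 0.4
```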
Linear regression test
Looks at a cause-and-effect relationship
Estimates the effect of one CONTINUOUS variable on another. Tries to determine a specific mathematical equation to describe the relationship (line of best fit)
Simple: one continuous IV and one continuous DV eg effect of income on longevity
Multiple: 2 or more continuous IVs and one continuous DV eg effect of income and mins of exercise per day on longevity
Logistic regression: continuous IV and binary DV eg what is the effect of drug dosage on survival (yes/no)
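The "line of best fit" in simple linear regression can be sketched with the least-squares formulas (illustrative data; the slope is the covariance of x and y divided by the variance of x):

```python
def fit_line(x, y):
    # least-squares line of best fit: y = slope * x + intercept
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0 (the data lie exactly on y = 2x + 1)
```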
What do ANOVA and T tests have in common
Parametric
Compare differences between group means
Test the effect of a categorical variable on a quantitative DV
ANOVA - one or more categorical IVs, one DV (one-way ANOVA: a single IV with 3+ groups)
MANOVA - one or more IVs and 2+ DVs. What is the effect of flower species on petal length, petal width, and stem length?
Repeated measures ANOVA compares the same group at various time points
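The one-way ANOVA F statistic is the between-group variance divided by the within-group variance; a hand-rolled sketch with made-up group values:

```python
def one_way_anova_f(groups):
    # F = (between-group SS / df_between) / (within-group SS / df_within)
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# three groups with clearly separated means -> large F
print(one_way_anova_f([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 27.0
```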
Correlation tests
Check whether variables are related without hypothesizing a cause-and-effect relationship. If you know one, can you predict the other?
eg Pearson's r
2 continuous variables eg how are latitude and temperature related
Spearman's r - 2 ranked/ordinal variables
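Spearman's r is a correlation on ranks rather than raw values; for data without ties it reduces to the rank-difference formula below (illustrative values):

```python
def spearman_rho(x, y):
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are no ties
    def rank(vals):
        s = sorted(vals)
        return [s.index(v) + 1 for v in vals]
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    n = len(x)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# monotonic but not linear relationship still gives rho = 1
print(spearman_rho([1, 2, 3, 4], [1, 4, 9, 16]))  # 1.0
```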
Chi squared test
Chi square test of independence: Test if 2 categorical variables are related to each other
Is the species of flower related to petal size
Are there more sporting injuries in basketball compared to netball (compare proportions of people who are injured)
Chi square goodness of fit test: test whether observed frequencies are significantly different from what was expected (equal frequencies/proportions). Null hypothesis would be that there is no difference in proportions in each category
Fisher's exact test: like chi squared, but used when the expected count is <5 in one or more cells of the data set
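A goodness-of-fit sketch with made-up die-roll counts; the resulting statistic would then be compared against a chi-squared critical value with k - 1 degrees of freedom:

```python
def chi_square_gof(observed, expected):
    # sum of (O - E)^2 / E over all categories
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 60 die rolls: are the six faces equally likely? (expected 10 each)
observed = [8, 12, 9, 11, 10, 10]
print(chi_square_gof(observed, [10] * 6))  # 1.0 -> far below the df=5 critical value
```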
Kruskal Wallis test
non-parametric version of ANOVA
3+ categories + one quantitative outcome variable
Wilcoxon signed-rank test
non-parametric version of the paired t test
Mann-Whitney U test
non-parametric version of the independent t test
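The Mann-Whitney U statistic can be sketched by counting how often values in one group beat values in the other (ties count as half; illustrative data):

```python
def mann_whitney_u(a, b):
    # U for group a = number of (a_i, b_j) pairs where a_i beats b_j
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1
            elif x == y:
                u += 0.5
    return u

u1 = mann_whitney_u([3, 5, 8], [1, 2, 4])
u2 = mann_whitney_u([1, 2, 4], [3, 5, 8])
print(u1, u2)  # 8.0 1.0 -- the two U values always sum to n1 * n2
```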
Bonferroni correction
Post hoc test. The Bonferroni correction is a multiple-comparison correction used when several dependent or independent statistical tests are being performed simultaneously
If there are more than 2 groups in a variable and the null hypothesis is rejected with the first statistical test, need to do a Bonferroni correction to figure out which 2 groups are significantly different from each other. A Bonferroni correction is when you divide your original significance level (usually .05) by the number of tests you're performing
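The correction itself is just a division; the p-values below are made up for illustration:

```python
# Bonferroni-corrected significance level: alpha divided by the number of tests
alpha = 0.05
n_tests = 3              # e.g. the 3 pairwise comparisons between 3 groups
corrected = alpha / n_tests

# each pairwise p-value must now fall below `corrected` to count as significant
p_values = [0.04, 0.01, 0.20]
print(corrected)                            # ~0.0167
print([p < corrected for p in p_values])    # [False, True, False]
```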
Absolute risk
the number of events in a group, divided by the number of people in that group
ARR (absolute risk reduction, aka attributable risk, risk difference)
Absolute risk in control group - absolute risk in treatment group
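A worked example with made-up trial numbers:

```python
# control: 20 events in 100 people; treatment: 10 events in 100 people
arc = 20 / 100   # absolute risk, control group
art = 10 / 100   # absolute risk, treatment group
arr = arc - art  # absolute risk reduction (risk difference)
print(arc, art, arr)  # 0.2 0.1 0.1
```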
Relative risk
absolute risk in treatment group / absolute risk in control group
Relative risk reduction
Risk difference / absolute risk in control
(ARC – ART) / ARC
Equivalently: 1 - relative risk
Odds ratio
odds of outcome WITH exposure / odds of outcome WITHOUT exposure
(odds = probability of outcome occurring / probability of outcome not occurring)
= cross product = AD/BC
= (A/C) / (B/D), i.e. odds that a case was exposed / odds that a control was exposed
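Relative risk and the odds ratio computed from the same made-up 2x2 table (A/B = exposed with/without the outcome, C/D = unexposed with/without):

```python
A, B = 20, 80    # exposed: 20 with outcome, 80 without
C, D = 10, 90    # unexposed: 10 with outcome, 90 without

risk_exposed = A / (A + B)                  # 0.2
risk_unexposed = C / (C + D)                # 0.1
rr = risk_exposed / risk_unexposed          # relative risk
odds_ratio = (A * D) / (B * C)              # cross product AD/BC
print(rr, odds_ratio)  # 2.0 2.25 -- OR overstates RR when the outcome is common
```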