Wk 13: Categorical Analysis Flashcards
What is the chi-square test?
- The chi-square statistic is used to compare the observed counts with the counts we would expect if the null hypothesis were true.
- Null hypothesis: There is no relationship between the categorical variable (results) and the groups.
What is the chi-square statistic?
Used to compare the observed counts with the counts we would expect if the null hypothesis were true.
What is the null hypothesis for the chi-square test?
There is no relationship between the categorical variable (results) and the groups.
What are 4 interpretations of the chi-square test (caffeine example)?
- Overall rate of “yes caffeine” = 88/261 = 33.72% of all students.
- If the null hypothesis is correct (i.e. no relationship between caffeine consumption and student group), then this rate of caffeine consumption should apply to every group.
- e.g. Expected count of “yes caffeine” for SP students = 71 SP students x 33.72% = 23.9 students.
- But we observed 30 SP students consuming caffeine (see the sketch below).
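A minimal sketch of this expected-count arithmetic in Python, using only the totals quoted on this card (88 “yes” out of 261 students; 71 SP students); nothing else about the table is assumed:

```python
# Expected-count arithmetic from the caffeine example on this card.
overall_yes_rate = 88 / 261               # 33.72% of all students said "yes"
expected_sp_yes = 71 * overall_yes_rate   # expected "yes" count among 71 SP students

print(f"overall 'yes' rate = {overall_yes_rate:.2%}")     # ~33.72%
print(f"expected SP 'yes' count = {expected_sp_yes:.1f}")  # ~23.9, vs 30 observed
```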
What are 2 interpretations of the Pearson chi-square?
- Pearson chi-square (p = 0.09) means that if the null hypothesis is correct, observed counts at least this far from the expected counts would happen 9% of the time by chance.
- p = 0.09 is greater than the significance level (0.05), so this is only very weak evidence of a difference in rates of caffeine consumption between the three groups.
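A hedged sketch of the full test with scipy.stats.chi2_contingency. The cards only give the totals (261 students, 88 “yes”, 71 SP students with 30 “yes”), so the counts for the other two groups below are made-up numbers chosen to be consistent with those totals; the resulting p-value will not reproduce the 0.09 quoted above.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows = student groups, columns = (yes caffeine, no caffeine).
# SP row matches the card (30 "yes" of 71); the other two rows are
# hypothetical, chosen only to sum to 261 students and 88 "yes" overall.
observed = np.array([
    [30, 41],   # SP students
    [28, 62],   # group 2 (assumed counts)
    [30, 70],   # group 3 (assumed counts)
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
print("expected counts under the null:\n", expected.round(1))
```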
What are the 2 types of reliability?
- Inter-rater reliability: Degree to which ratings given by different observers agree.
- Intra-rater reliability: Degree to which ratings given by the same observer on different occasions agree.
What is inter-rater reliability?
Degree to which ratings given by different observers agree.
What is intra-rater reliability?
Degree to which ratings given by the same observer on different occasions agree.
How do we cross-tabulate the observations of Rater A and Rater B, and why does percentage agreement overestimate agreement?
- Agree is when they both vote “no” or both vote “yes”.
- Disagree is when one votes “no” and one votes “yes”.
- But percentage agreement overestimates agreement, because they often agree with each other by chance.
- Rater A says “yes” 32/54 = 59.3% of the time.
- Rater B says “yes” 24/54 = 44.4% of the time.
- So by chance, they would both say “yes” 59.3% x 44.4% = 26.3% of the time.
- Similarly, they would both say “no” 22.6% of the time by chance.
- Overall, we expect them to agree with each other 26.3% + 22.6% = 49% of the time by chance (worked through in the sketch below).
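The same chance-agreement arithmetic as a Python sketch. The four cell counts are reconstructed from the figures quoted on this card (n = 54, Rater A “yes” 32, Rater B “yes” 24, raw agreement 77.8%), so treat them as illustrative:

```python
n = 54
both_yes, a_yes_b_no = 22, 10   # Rater A "yes" row (32 total)
a_no_b_yes, both_no = 2, 20     # Rater A "no" row (22 total)

p_a_yes = (both_yes + a_yes_b_no) / n        # 32/54 = 59.3%
p_b_yes = (both_yes + a_no_b_yes) / n        # 24/54 = 44.4%

chance_yes = p_a_yes * p_b_yes               # ~26.3%
chance_no = (1 - p_a_yes) * (1 - p_b_yes)    # ~22.6%
chance_agreement = chance_yes + chance_no    # ~49%

observed_agreement = (both_yes + both_no) / n  # 42/54 = 77.8%
print(f"chance = {chance_agreement:.1%}, observed = {observed_agreement:.1%}")
```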
What is Cohen’s kappa?
Cohen’s kappa (κ) is the proportion of agreement achieved beyond chance, out of the maximum agreement possible beyond chance: κ = (observed agreement − chance agreement) / (1 − chance agreement).
- We adjust the raw percentage of agreement (77.8%) by the chance rate of agreement (49%), as worked through below.
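The adjustment worked through with the figures from these cards (77.8% observed agreement, 49% agreement by chance):

```python
p_observed = 0.778   # raw agreement from the cross-tabulation
p_chance = 0.49      # agreement expected by chance

# Kappa rescales agreement so chance level maps to 0 and perfect agreement to 1.
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"kappa = {kappa:.2f}")   # ~0.56
```

In practice, sklearn.metrics.cohen_kappa_score computes the same statistic directly from the two raters’ label vectors.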
What are 5 interpretations of Cohen’s kappa?
- Kappa = 1 if perfect agreement.
- Kappa = 0 if no agreement. Can be negative.
- Kappa is used as a measure of agreement, rather than as a test statistic.
- Kappa = 0.7-0.8 is good.
- p < 0.05 means the agreement is statistically significant, i.e. unlikely to be purely by chance.
What is weighted kappa?
- For any disagreements in ratings, we can measure by how much the scores differed.
- This allows calculation of a weighted kappa statistic, where the weights indicate the seriousness of the disagreement (see the sketch below).
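A hedged scikit-learn sketch: cohen_kappa_score accepts weights="linear" or weights="quadratic", which penalise disagreements more heavily the further apart the ordinal ratings are. The 1-5 ratings below are made up for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal ratings (1-5) from two raters on 8 subjects.
rater_a = [1, 2, 3, 4, 5, 3, 2, 4]
rater_b = [1, 2, 4, 4, 5, 2, 2, 5]

print(cohen_kappa_score(rater_a, rater_b))                    # unweighted kappa
print(cohen_kappa_score(rater_a, rater_b, weights="linear"))  # weighted kappa
```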
What are 5 features of Cronbach’s Alpha?
- Cronbach’s alpha (α) is a statistic based on the correlations between items in a scale.
- This is not the same α as the significance level.
- It measures whether items on a scale are internally consistent.
- α = 0 if the items are completely independent.
- Generally we want α ≈ 0.8, because we would like the items on a scale to assess the same construct (see the sketch below).
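A minimal sketch of Cronbach’s alpha from first principles, using the standard formula α = k/(k−1) × (1 − Σ item variances / variance of total score); the respondents-by-items matrix below is made up:

```python
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])  # 5 respondents x 4 items (hypothetical data)

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)       # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```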
What is the summary data analysis?