Wk 13: Categorical Analysis Flashcards

1
Q

What is the chi-square test?

A
  1. The chi-square statistic compares the observed counts with the counts expected if the null hypothesis were true.
  2. Null hypothesis: there is no relationship between the categorical variable (results) and the groups.
2
Q

What is the chi-square statistic?

A

Compares the observed counts with the counts expected if the null hypothesis were true.

3
Q

What is the null hypothesis for the chi-square test?

A

There is no relationship between the categorical variable (results) and the groups.

4
Q

What are 3 interpretations of the chi-square test?

A
  1. Overall rate of “yes caffeine” = 88/261 = 33.72% of all students.
  2. If the null hypothesis is correct (i.e. no relationship between caffeine consumption and student group), then this rate of caffeine consumption should apply to every group.
    • e.g. Expected count of “yes caffeine” for SP students = 71 SP students x 33.72% ≈ 23.9 students
  3. But we observed 30 SP students consuming caffeine.
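The expected-count arithmetic above can be sketched in Python (the caffeine counts are the ones quoted on this card; the calculation itself is general):

```python
# Expected count for one cell of a contingency table:
# the overall rate of the outcome, applied to the group's size.

yes_total = 88      # students consuming caffeine, all groups (from the card)
n_total = 261       # all students
n_sp = 71           # SP students

overall_rate = yes_total / n_total        # 88/261 ≈ 33.72%
expected_sp_yes = n_sp * overall_rate     # ≈ 23.9 students

print(round(overall_rate * 100, 2))   # 33.72
print(round(expected_sp_yes, 1))      # 23.9
```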
5
Q

What are 2 interpretations of the Pearson chi-square?

A
  1. Pearson chi-square (p = 0.09) means that if the null hypothesis were correct, we would expect counts at least this far from the expected counts to occur 9% of the time by chance.
  2. p = 0.09 is greater than the significance level (0.05), so this is only very weak evidence of a difference in rates of caffeine consumption between the three groups.
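A minimal pure-Python sketch of the Pearson chi-square calculation. The 2x2 table below is hypothetical: it keeps the totals quoted on these cards (88 of 261 “yes”, 30 of 71 SP students) but collapses the other two groups into one row, so the statistic will not match the three-group analysis behind p = 0.09.

```python
# Pearson chi-square statistic for a contingency table:
# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * column_total / grand_total.

observed = [
    [30, 41],    # SP students: yes / no caffeine (30 of 71, from the card)
    [58, 132],   # other two groups combined: hypothetical split of the rest
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

# Compare against the chi-square distribution with (2-1)*(2-1) = 1 df
print(round(chi2, 2))  # 3.18
```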
6
Q

What are the 2 types of reliability?

A
  1. Inter-rater reliability: Degree to which ratings given by different observers agree.
  2. Intra-rater reliability: Degree to which ratings given by the same observer on different occasions agree.
7
Q

What is inter-rater reliability?

A

Degree to which ratings given by different observers agree.

8
Q

What is intra-rater reliability?

A

Degree to which ratings given by the same observer on different occasions agree.

9
Q

What are the 4 features of cross-tabulating the observations of Rater A and Rater B?

A
  1. Agree is when they both vote “no” or both vote “yes”.
  2. Disagree is when one votes “no” and one votes “yes”.
  3. But percentage agreement overestimates agreement, because they often agree with each other by chance.
  4. Rater A says “yes” 32/54 = 59.3% of the time.
    1. Rater B says “yes” 24/54 = 44.4% of the time.
    2. So by chance, they would both say “yes” 59.3% x 44.4% = 26.3% of the time.
    3. Similarly, they would both say “no” 22.6% of the time by chance.
    4. Overall, we expect them to agree with each other 26.3% + 22.6% = 49% of the time by chance.
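The chance-agreement arithmetic in the steps above can be reproduced directly (the marginal counts are the ones quoted on this card):

```python
# Chance agreement between two raters: the probability both say "yes"
# plus the probability both say "no", treating the raters as independent.

n = 54
a_yes, b_yes = 32, 24   # marginal "yes" counts from the card

p_both_yes = (a_yes / n) * (b_yes / n)               # 59.3% x 44.4% ≈ 26.3%
p_both_no = ((n - a_yes) / n) * ((n - b_yes) / n)    # ≈ 22.6%
chance_agreement = p_both_yes + p_both_no            # ≈ 49%

print(round(chance_agreement * 100, 1))  # 49.0
```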
10
Q

What is Cohen’s Kappa?

A

Cohen’s Kappa (κ) is the proportion of agreement achieved beyond chance, out of the maximum agreement possible beyond chance: κ = (observed agreement − chance agreement) / (1 − chance agreement).

  • We adjust the raw percentage of agreement (77.8%) by the chance rate of agreement (49%), giving κ = (0.778 − 0.49) / (1 − 0.49) ≈ 0.56.
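A sketch of the kappa calculation. The 2x2 agreement table below is one hypothetical table consistent with this card's numbers (raters saying “yes” 32/54 and 24/54 of the time, 77.8% raw agreement); only the marginals and the agreement rate come from the card, the individual cell counts are filled in for illustration.

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)

# Rows = Rater A (yes, no), columns = Rater B (yes, no).
# Hypothetical cell counts consistent with the card's marginals.
table = [
    [22, 10],   # A yes: B yes / B no
    [2, 20],    # A no:  B yes / B no
]
n = 54

p_observed = (table[0][0] + table[1][1]) / n          # 42/54 ≈ 77.8%
a_yes = (table[0][0] + table[0][1]) / n               # 32/54
b_yes = (table[0][0] + table[1][0]) / n               # 24/54
p_chance = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)  # ≈ 49%

kappa = (p_observed - p_chance) / (1 - p_chance)
print(round(kappa, 2))  # 0.56
```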
11
Q

What are 5 interpretations of Cohen’s Kappa?

A
  1. Kappa = 1 if there is perfect agreement.
  2. Kappa = 0 if agreement is no better than chance; kappa can be negative.
  3. Kappa is used as a measure of agreement, rather than as a test statistic.
  4. Kappa = 0.7–0.8 is good.
  5. p < 0.05 means the agreement is significant, i.e. not purely by chance.
12
Q

What is weighted kappa?

A
  • For any disagreements in ratings, we can measure by how much the scores differed.
  • This allows calculation of a weighted kappa statistic, where the weights indicate the seriousness of the disagreement.
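A sketch of a linearly weighted kappa on a hypothetical 3-category rating table (all counts invented for illustration). The weights make a disagreement two categories apart count twice as much as an adjacent-category disagreement:

```python
# Weighted kappa: kappa_w = 1 - sum(w * observed) / sum(w * expected),
# where the weight w[i][j] grows with the distance between the two ratings.

# Hypothetical confusion matrix: rows = Rater A, columns = Rater B,
# categories 0 (mild), 1 (moderate), 2 (severe).
observed = [
    [4, 1, 0],
    [1, 5, 1],
    [0, 1, 2],
]
n = sum(sum(row) for row in observed)
k = len(observed)

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]

num = 0.0  # weighted observed disagreement
den = 0.0  # weighted disagreement expected by chance
for i in range(k):
    for j in range(k):
        w = abs(i - j)  # linear weight: bigger gap = more serious disagreement
        expected = row_totals[i] * col_totals[j] / n
        num += w * observed[i][j]
        den += w * expected

weighted_kappa = 1 - num / den
print(round(weighted_kappa, 2))  # 0.65
```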
13
Q

What are 4 features of Cronbach’s Alpha?

A
  1. Cronbach’s Alpha (α) is a statistic based on the correlations between items in a scale.
    1. This is not the significance level α.
  2. It measures whether items on a scale are internally consistent.
  3. α = 0 if the items are completely independent.
  4. Generally we want α ≈ 0.8, because we would like items on a scale to assess the same construct.
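The standard formula for Cronbach's alpha, sketched on a tiny invented dataset (the items are deliberately perfect shifted copies of each other, so alpha comes out at 1):

```python
from statistics import pvariance

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)

# Hypothetical responses: rows = respondents, columns = scale items.
scores = [
    [1, 2, 1],
    [2, 3, 2],
    [3, 4, 3],
    [4, 5, 4],
]

k = len(scores[0])                  # number of items
items = list(zip(*scores))          # scores grouped per item
item_var_sum = sum(pvariance(item) for item in items)
total_var = pvariance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)
print(round(alpha, 2))  # 1.0, since the items are perfectly correlated
```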
14
Q

What is the summary of the data analysis?

A