3 Non-parametric tests Flashcards

1

Q: What are the main differences between parametric and non-parametric statistical tests?

A: Parametric tests make specific assumptions about the underlying population distribution, such as normality and equality of variances, while non-parametric tests make few or no such assumptions. Parametric tests have greater power to detect subtle effects but require the data to meet those assumptions, whereas non-parametric tests are more robust but less sensitive.

2

Q: What are the key assumptions of parametric tests?

A: Parametric tests assume that the populations from which the data are sampled are normally distributed, the variances are approximately equal, and there are no extreme scores or outliers.

3

Q: Why might one choose to use parametric tests over non-parametric tests?

A: Parametric tests are often more sensitive to differences or relationships in the data, especially for smaller effect sizes, and they provide more precise estimates of population parameters. Non-parametric tests, by contrast, may be less sensitive because they discard some information by working with ranks rather than raw scores.

4

Q: What are some advantages of non-parametric tests?

A: Non-parametric tests can be applied more readily when the assumptions of parametric tests are violated or when dealing with data that do not meet parametric assumptions. They are also more robust to outliers or extreme scores in the data.

5

Q: How is ordering and ranking used in non-parametric tests?

A: Ordering and ranking data is fundamental to non-parametric tests. In cases where the distributional assumptions of parametric tests are not met, non-parametric tests rely on ranking the data or using the median rather than the mean. Tied scores are handled by assigning them the average of their ranks.
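The tie-handling rule above can be sketched in plain Python; the scores are invented for illustration:

```python
# A minimal sketch of rank assignment with ties: tied scores receive
# the average of the ranks they jointly occupy.
def rank_with_ties(scores):
    """Return the rank of each score (1 = smallest), averaging ties."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        # Find the run of tied scores starting at sorted position i.
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        # Sorted positions i..j (0-based) correspond to ranks i+1..j+1;
        # every tied score gets the average of those ranks.
        avg_rank = ((i + 1) + (j + 1)) / 2
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

print(rank_with_ties([7, 3, 3, 9]))  # [3.0, 1.5, 1.5, 4.0]
```

The two tied scores of 3 would occupy ranks 1 and 2, so each receives the average, 1.5.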

6

Q: What is the Mann-Whitney U test, and when is it used?

A: The Mann-Whitney U test is a non-parametric alternative to the independent t-test. It is used to compare two independent groups when the assumptions of the t-test are not met or when the data are ordinal or not normally distributed.

7

Q: How does the Mann-Whitney U test work?

A: In the Mann-Whitney U test, all scores are first ranked together, regardless of group. The ranks in each group are then summed. For each group, U is the group's rank sum minus the smallest possible rank sum for a group of that size (n(n + 1)/2), and the smaller of the two U values is the test statistic. A low U indicates that most of the smaller scores fall in one group rather than the other.
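The computation can be sketched as follows; this is an illustration of the rank-sum arithmetic, not a full significance test, and the data are made up:

```python
# A sketch of computing Mann-Whitney U from rank sums.
def mann_whitney_u(group_a, group_b):
    combined = group_a + group_b
    sorted_scores = sorted(combined)
    # Rank all scores together (1 = smallest); ties get the average rank.
    ranks = {}
    for score in set(combined):
        positions = [i + 1 for i, s in enumerate(sorted_scores) if s == score]
        ranks[score] = sum(positions) / len(positions)
    r_a = sum(ranks[s] for s in group_a)
    r_b = sum(ranks[s] for s in group_b)
    n_a, n_b = len(group_a), len(group_b)
    # U = rank sum minus the smallest possible rank sum for that group.
    u_a = r_a - n_a * (n_a + 1) / 2
    u_b = r_b - n_b * (n_b + 1) / 2
    return min(u_a, u_b)  # the smaller U is the test statistic

# Completely separated groups give the minimum possible U of 0:
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0
```

When the two groups overlap completely, U approaches its maximum of (n_a × n_b)/2; complete separation gives U = 0.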

8

Q: What is the Wilcoxon signed ranks test, and when is it used?

A: The Wilcoxon signed ranks test is a non-parametric alternative to the paired t-test. It is used to compare two related groups or conditions when the assumptions of the t-test are violated or when dealing with non-normally distributed data.

9

Q: How does the Wilcoxon signed ranks test work?

A: In the Wilcoxon signed ranks test, the differences between paired observations are computed. The absolute differences are then ranked, ignoring their signs but averaging tied ranks. The test statistic T is the sum of the ranks belonging to the less frequent sign of the differences. A small T indicates that the differences point consistently in one direction, i.e. one condition tends to produce larger scores than the other.
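A minimal sketch of the T statistic, using invented paired scores (zero differences are dropped, as is conventional):

```python
# A sketch of the Wilcoxon signed-ranks T statistic.
def wilcoxon_t(before, after):
    # Signed differences; pairs with zero difference are discarded.
    diffs = [b - a for a, b in zip(before, after) if b - a != 0]
    abs_diffs = sorted(abs(d) for d in diffs)

    def rank_of(value):
        # Rank among absolute differences, averaging tied ranks.
        positions = [i + 1 for i, v in enumerate(abs_diffs) if v == value]
        return sum(positions) / len(positions)

    pos_sum = sum(rank_of(abs(d)) for d in diffs if d > 0)
    neg_sum = sum(rank_of(abs(d)) for d in diffs if d < 0)
    # T is the smaller rank sum, i.e. the sum for the less frequent sign
    # when most differences point the same way.
    return min(pos_sum, neg_sum)

print(wilcoxon_t([10, 12, 14, 11], [12, 15, 13, 16]))  # 1.0
```

In the example, three of the four differences are positive; the single negative difference has the smallest absolute value, so T is its rank, 1.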

10

Q: What is Spearman’s rho, and when is it used?

A: Spearman’s rho is a non-parametric correlation coefficient, used as an alternative to Pearson’s r when the assumptions for Pearson’s r are not met. It is particularly useful when dealing with ordinal or non-normally distributed data.

11

Q: How is Spearman’s rho calculated?

A: To calculate Spearman’s rho:

Convert the raw data into ranks.
Compute the difference in ranks (d) for each pair of data points.
Square each difference and sum the squares (Σd²).
Apply the formula: rho = 1 − (6Σd²) / (n(n² − 1)), where n is the number of pairs.
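The steps above can be sketched as a short function; the data are hypothetical and the simple formula assumes no tied ranks:

```python
# A sketch of Spearman's rho via the rank-difference formula
# rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), valid when there are no ties.
def spearman_rho(x, y):
    def to_ranks(data):
        sorted_data = sorted(data)
        return [sorted_data.index(v) + 1 for v in data]  # 1 = smallest

    rx, ry = to_ranks(x), to_ranks(y)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    n = len(x)
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Perfectly monotonic data gives rho = 1:
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```

A perfectly reversed ordering gives rho = −1, and unrelated rankings give values near 0.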

12

Q: What are some limitations of Spearman’s rho?

A: Spearman’s rho may not work well with tied ranks, leading to inaccuracies in the calculation. In cases of tied ranks, the formula becomes more complex, and alternative methods may need to be employed.

13

Q: What is a one-variable chi-squared test, and when is it used?

A: A one-variable chi-squared test is used to assess whether observed frequencies in categories are significantly different from what is expected. It is employed when there is an interest in determining whether the distribution of categorical data deviates from a specified pattern or expectation.

14

Q: How is the chi-squared statistic calculated in a one-variable chi-squared test?

A: To calculate the chi-squared statistic in a one-variable chi-squared test:

Compute the difference between the observed (O) and expected (E) frequency for each category.
Square each difference.
Divide each squared difference by the expected frequency.
Sum these values to obtain the chi-squared statistic: χ² = Σ (O − E)² / E.
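The steps above reduce to a one-line function; the observed and expected frequencies here are invented:

```python
# A sketch of the chi-squared statistic: sum of (O - E)^2 / E.
def chi_squared(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# E.g. 60 observed vs 50 expected in one category, 40 vs 50 in the other:
print(chi_squared([60, 40], [50, 50]))  # 4.0
```

Each category contributes (10)²/50 = 2, giving χ² = 4.0 in total.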

15

Q: What is the purpose of a two-variable (2x2) chi-squared test?

A: The purpose of a two-variable (2x2) chi-squared test is to assess whether there is a relationship between two categorical variables. It determines whether the observed counts in various categories significantly deviate from the counts that would be expected if there were no association between the variables.

Expected count for each cell: (row total × column total) / grand total.
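The expected counts can be sketched from the marginal totals; the contingency table here is made up for illustration:

```python
# A sketch of expected cell counts for a 2x2 contingency table:
# E = (row total * column total) / grand total for each cell.
def expected_counts(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    return [[r * c / grand for c in col_totals] for r in row_totals]

table = [[30, 10],
         [20, 40]]
print(expected_counts(table))  # [[20.0, 20.0], [30.0, 30.0]]
```

The chi-squared statistic is then computed from these expected counts exactly as in the one-variable case, summing (O − E)²/E over all four cells.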