Data Analysis 1-3: Comparing Means & Medians, Comparing Frequencies Flashcards

1
Q

How would you generate appropriate hypotheses that reflect the purpose of a study

A

Hypotheses are testable predictions that are derived from theories.

They allow us to make logical sense of the relationship between two or more variables.

1) Understand the Study Purpose: Clearly identify your research question and the issue you’re trying to solve

2) Literature Review: Review related research to understand current theories and identify gaps your study could fill

3) Identify Variables: Pinpoint key variables to study, derived from your research question and literature review

4) Formulate Hypotheses: Create hypotheses that express assumed relationships between your variables, based on existing theories and literature

5) Check for Testability: Ensure your hypotheses are testable; they should propose a measurable and analysable relationship between variables

2
Q

How would you draw appropriate conclusions based on the outcome of tests of statistical significance

A

1) Understand the Results: Start by understanding what your test statistics mean. For example, a p-value is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true

2) Compare P-Value to Significance Level: If your p-value is less than your predetermined significance level (commonly 0.05), you can reject the null hypothesis in favour of the alternative hypothesis

3) Interpret Effect Size: While p-values tell you if an effect exists, effect size tells you how large this effect is. Larger effect sizes typically mean your findings have more practical significance

4) Consider Confidence Intervals: Confidence intervals provide a range of values within which the true population parameter is likely to fall. They give an idea about the precision and uncertainty of your estimates

5) Draw Conclusions: Based on the above, draw conclusions about your hypotheses. Remember that statistical significance does not necessarily mean practical or scientific significance
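The steps above can be sketched with SciPy on hypothetical two-group data; the group values, variable names, and the 0.05 threshold are illustrative assumptions, not from any specific study:

```python
# Sketch: p-value, effect size (Cohen's d), and a 95% CI for a mean
# difference, using hypothetical control/treated measurements.
import numpy as np
from scipy import stats

control = np.array([5.1, 4.9, 5.4, 5.0, 5.2, 4.8])
treated = np.array([5.9, 6.1, 5.7, 6.0, 5.8, 6.2])

# 1-2) Test statistic and p-value, compared to alpha = 0.05
t_stat, p_value = stats.ttest_ind(control, treated)

# 3) Effect size: Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

# 4) 95% confidence interval for the difference in means
diff = treated.mean() - control.mean()
se = pooled_sd * np.sqrt(1 / len(control) + 1 / len(treated))
df = len(control) + len(treated) - 2
margin = stats.t.ppf(0.975, df) * se
ci = (diff - margin, diff + margin)

print(p_value < 0.05, round(cohens_d, 2), ci)
```

A significant p-value with a large d and a CI well away from zero supports both statistical and practical significance; a significant p-value with a tiny d would not.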

3
Q

How would you perform commonly used statistical tests comparing two groups (Paired/unpaired t-test, Mann-Whitney U-test, Wilcoxon Signed Rank) and interpret and apply the result

A

Statistical tests are used to determine if there’s a significant difference between two groups

1) Paired T-test:

  • Application: Use when you have two related or paired groups, e.g., a before-and-after study (Parametric data - Normally Distributed)
  • Steps:
  1. Calculate the differences between paired observations
  2. Find the mean and standard deviation of these differences
  3. Use these to calculate the t-statistic and compare it to a critical value from the t-distribution, or calculate a p-value
  • Interpretation: If the p-value is less than your chosen significance level (e.g., 0.05), there is a significant difference between the means of the paired groups

2) Unpaired T-test (Independent Samples T-test):

  • Application: Use when you have two independent groups, e.g., males and females (Parametric data - Normally Distributed)
  • Steps:
  1. Calculate the means and standard deviations for each group
  2. Use these to calculate the t-statistic and compare it to a critical value from the t-distribution, or calculate a p-value
  • Interpretation: If the p-value is less than your chosen significance level, there is a significant difference between the means of the independent groups

3) Mann-Whitney U Test (Wilcoxon Rank-Sum Test):

  • Application: Use when you cannot assume normality of your data and you have two independent groups (Non-parametric data)
  • Steps:
  1. Rank all observations from both groups together from smallest to largest
  2. Calculate U (or the sum of ranks for each group) and compare it to a critical value from the U-distribution, or calculate a p-value
  • Interpretation: If the p-value is less than your chosen significance level, there is a significant difference between the distributions of the independent groups

4) Wilcoxon Signed-Rank Test:

  • Application: Use when you cannot assume normality of your data and you have two related or paired groups (Non-parametric data)
  • Steps:
  1. Calculate the differences between paired observations, rank these differences, and then sum the ranks
  2. Compare the sum of ranks to a critical value from the Wilcoxon distribution, or calculate a p-value
  • Interpretation: If the p-value is less than your chosen significance level, there is a significant difference between the paired groups
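All four tests above are one-liners in SciPy. A minimal sketch using hypothetical paired (before/after) and independent (group A/B) data; the values are illustrative only:

```python
# Sketch: the four common two-group tests via scipy.stats.
import numpy as np
from scipy import stats

before = np.array([12.0, 11.5, 13.2, 12.8, 11.9, 12.4])  # paired data
after  = np.array([11.1, 10.9, 12.5, 11.8, 11.2, 11.6])
group_a = np.array([3.1, 2.9, 3.4, 3.0, 3.2])            # independent groups
group_b = np.array([3.8, 4.1, 3.9, 4.0, 3.7])

# 1) Paired t-test (parametric, related samples)
t_paired, p_paired = stats.ttest_rel(before, after)

# 2) Unpaired / independent-samples t-test (parametric)
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# 3) Mann-Whitney U test (non-parametric, independent groups)
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b)

# 4) Wilcoxon signed-rank test (non-parametric, paired)
w_stat, p_w = stats.wilcoxon(before, after)
```

In each case the p-value is compared against the chosen significance level exactly as described in the interpretation steps above.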
4
Q

Describe and interpret common approaches to testing for normality

A

Testing for normality is an important step in many statistical analyses because many statistical techniques assume that data are normally distributed

The choice of test can depend on the sample size and the purpose of the analysis

1) Visual Inspection (Qualitative)

  • Histogram: A histogram provides a graphical representation of data distribution. If the data is normally distributed, the shape of the histogram should resemble a bell curve
  • Q-Q Plot (Quantile-Quantile Plot): In this plot, the quantiles of your data’s distribution are plotted against the quantiles of a normal distribution. If the data is normally distributed, the points should approximately lie on the line

2) Statistical Tests (Quantitative)

  • Shapiro-Wilk Test: The null hypothesis is that the population is normally distributed. If the p-value is less than the chosen alpha level (commonly 0.05), then the null hypothesis is rejected and there is evidence that the data tested are not normally distributed
  • Kolmogorov-Smirnov Test: This test compares the cumulative distribution function of a variable with that of a specified distribution (which may be normal). If the p-value is less than the chosen alpha level, the null hypothesis that the data are drawn from the specified distribution is rejected
  • Anderson-Darling Test: The null hypothesis is that the data are drawn from a specified distribution (which may be normal). If the statistic is greater than critical values, reject the null hypothesis
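A quick sketch of the three quantitative tests in SciPy, run on simulated normal data (the seed and sample size are arbitrary choices for illustration):

```python
# Sketch: normality tests via scipy.stats on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=200)

# Shapiro-Wilk: H0 = data come from a normal distribution
sw_stat, sw_p = stats.shapiro(data)

# Kolmogorov-Smirnov against a normal with parameters estimated from
# the data (note: strictly, estimated parameters call for the
# Lilliefors variant; this is a simplification)
ks_stat, ks_p = stats.kstest(data, 'norm',
                             args=(data.mean(), data.std(ddof=1)))

# Anderson-Darling: compare the statistic to the critical values
# reported at the 15%, 10%, 5%, 2.5%, and 1% significance levels
ad = stats.anderson(data, dist='norm')
```

For Shapiro-Wilk and K-S, a p-value below alpha rejects normality; for Anderson-Darling, the statistic is compared against `ad.critical_values` at the matching `ad.significance_level`.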
5
Q

Describe the rationale for correcting for multiple comparisons and identify the most appropriate post-test to perform specific combinations of pairwise comparisons

A

Multiple comparisons correction is necessary because each statistical test you perform has a chance of producing a false positive (Type I error), which is when you incorrectly reject the null hypothesis

The more tests you perform, the higher the probability of obtaining at least one false positive

Appropriate Post-tests for Pairwise Comparisons:

After running an ANOVA (Analysis of Variance), which tests the overall difference among groups, you may want to conduct pairwise comparisons to understand which specific groups differ

1) Tukey’s HSD (Honestly Significant Difference): This test controls the family-wise error rate and is best used when comparing all possible pairs of groups

2) Bonferroni Correction: This test adjusts the significance level (alpha) by dividing it by the number of comparisons being made. It is a very strict method and is best used when running a selected number of multiple comparisons

3) Sidak Correction: This is an adjustment method that is slightly less conservative than Bonferroni and more accurate when dealing with a large number of comparisons

4) Holm’s Method: This method sequentially applies the Bonferroni correction, giving more power than the standard Bonferroni method

5) Fisher’s LSD (Least Significant Difference): This method has more power than others, but at the expense of potentially more Type I errors. It is best used when you have a priori comparisons and when it is more important to avoid Type II errors (missing a true effect) than Type I errors
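The Bonferroni and Holm adjustments are simple enough to sketch directly; the raw p-values below are hypothetical:

```python
# Sketch: Bonferroni vs. Holm adjustment of hypothetical raw p-values.
raw_p = [0.010, 0.020, 0.030, 0.040]
m = len(raw_p)

# Bonferroni: multiply each p-value by the number of comparisons
# (equivalent to dividing alpha by m), capped at 1
bonferroni = [min(p * m, 1.0) for p in raw_p]

# Holm: sort ascending, multiply the k-th smallest p-value by
# (m - k), then enforce monotonicity of the adjusted values
order = sorted(range(m), key=lambda i: raw_p[i])
holm = [0.0] * m
running_max = 0.0
for rank, i in enumerate(order):
    adj = min(raw_p[i] * (m - rank), 1.0)
    running_max = max(running_max, adj)
    holm[i] = running_max
```

Note that Holm's adjusted p-values are never larger than Bonferroni's, which is why it has more power while still controlling the family-wise error rate.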

6
Q

Interpret and apply the results of a one-way ANOVA followed by post-test

A

The one-way Analysis of Variance (ANOVA) is a statistical method used to test the hypothesis that the means among three or more groups are equal, given a certain level of significance

If the p-value obtained from the one-way ANOVA is less than the predetermined significance level (typically 0.05), this would suggest at least one group mean is significantly different from the others

However, the one-way ANOVA does not tell you which specific groups were significantly different from each other

That’s where the post-hoc tests (like Tukey’s HSD, Bonferroni, Sidak, Holm’s method, Fisher’s LSD) come into play. These tests will help you determine which specific group(s) resulted in the significant difference

Interpreting the Post-hoc Test Results:

  • The output of post-hoc tests typically include a p-value for each pair of group comparisons
  • If the p-value for a specific pair is less than the predetermined significance level, you can conclude there is a statistically significant difference between those two groups

Applying the Results:

Consider a study comparing the effectiveness of three different diets (A, B, and C) on weight loss

Let’s assume the one-way ANOVA resulted in a p-value of 0.01, suggesting a significant difference in mean weight loss across diets

However, to identify where this difference lies, you’d perform a post-hoc test

Suppose you conducted a Tukey HSD post-hoc test, and the results were as follows:

  1. A vs B: p = 0.75
  2. A vs C: p = 0.02
  3. B vs C: p = 0.01

You can interpret these results as follows:

  1. There’s no significant difference in mean weight loss between diets A and B (p > 0.05).
  2. There’s a significant difference in mean weight loss between diets A and C (p < 0.05).
  3. There’s a significant difference in mean weight loss between diets B and C (p < 0.05).

So, in the context of this study, Diet C appears to be significantly more effective in promoting weight loss compared to both Diet A and B.
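A workflow like this can be sketched in SciPy. The weight-loss values below are made up to mirror the example, and for simplicity the post-hoc step uses Bonferroni-adjusted pairwise t-tests rather than Tukey's HSD:

```python
# Sketch: one-way ANOVA followed by Bonferroni-adjusted pairwise
# t-tests on hypothetical weight-loss data for diets A, B, and C.
from itertools import combinations
import numpy as np
from scipy import stats

diets = {
    "A": np.array([1.2, 0.8, 1.5, 1.0, 1.3]),
    "B": np.array([1.1, 0.9, 1.4, 1.2, 1.0]),
    "C": np.array([2.8, 3.1, 2.5, 3.0, 2.7]),
}

# Omnibus test: is at least one group mean different?
f_stat, p_anova = stats.f_oneway(*diets.values())

# Post hoc: all pairwise comparisons, Bonferroni-adjusted
pairs = list(combinations(diets, 2))
results = {}
for a, b in pairs:
    _, p = stats.ttest_ind(diets[a], diets[b])
    results[(a, b)] = min(p * len(pairs), 1.0)
```

Here `results` holds an adjusted p-value per pair; with these data, A vs B is non-significant while both comparisons against C are significant, matching the interpretation pattern described above.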

7
Q

Identify when it is appropriate to analyse data sets using Pearson’s chi-squared test, Fisher’s exact test, or McNemar’s test

A

These three tests are all used for categorical data analysis

1) Pearson’s Chi-Squared Test:

  • This test is used when you want to determine if there’s an association between two categorical variables in a sample
  • The data are usually displayed in a contingency table
  • The test is most accurate when sample sizes are large, and all expected cell frequencies are at least 5
  • Example: Suppose you conducted a survey asking people about their preference for cats or dogs and their living conditions (apartment or house). You want to see if living conditions and pet preferences are independent. A Chi-squared test would be suitable

2) Fisher’s Exact Test:

  • Fisher’s exact test is used when sample sizes are small, and many cells of your contingency table have expected frequencies less than 5
  • Fisher’s Exact Test is often used for 2x2 tables but can be used for larger tables as well
  • Example: You’re studying a rare disease and you’ve collected data on whether 20 patients smoke or not and whether they have the disease or not. Because of the small sample size and potentially low expected cell counts, you might use Fisher’s exact test to determine if smoking and disease are independent

3) McNemar’s Test:

  • This test is used for paired nominal data
  • It’s suitable when you have two related categorical variables and want to see if the proportions of categories differ for the two variables
  • Example: You ask a group of people at the start of the year if they are smokers or non-smokers, then you ask them again at the end of the year after a public health campaign. You want to see if the campaign affected smoking rates. Since the data is paired (you have two measurements for each individual), a McNemar’s test would be suitable
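A minimal sketch of all three tests in SciPy, with small hypothetical contingency tables (McNemar's test is computed by hand via its chi-squared approximation, since it is not built into SciPy):

```python
# Sketch: Pearson chi-squared, Fisher's exact, and McNemar's tests.
import numpy as np
from scipy import stats

# Pearson chi-squared: e.g. pet preference (rows) vs. housing (cols)
table = np.array([[30, 10],
                  [20, 40]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test: 2x2 table with small counts
small = np.array([[8, 2],
                  [1, 9]])
odds_ratio, p_fisher = stats.fisher_exact(small)

# McNemar's test (paired data): only the discordant cells matter;
# b = changed Yes->No, c = changed No->Yes
# chi-squared approximation: (b - c)^2 / (b + c), 1 degree of freedom
b, c = 10, 2
mcnemar_stat = (b - c) ** 2 / (b + c)
p_mcnemar = stats.chi2.sf(mcnemar_stat, df=1)
```

Each p-value is then compared against the chosen alpha; the `expected` array from `chi2_contingency` is also what you would inspect to decide whether Fisher's exact test is needed instead (expected counts below 5).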
8
Q

How would you perform tests to assess the statistical significance of differences within data sets featuring categorical variables, and interpret what the results mean (in terms of the conclusions of a specific study)

A

Performing the Test:

Chi-Squared Test: In software like R or SPSS, you’d first enter your data in a contingency table format, where each cell represents the frequency of each category combination. Then you’d select to perform the Chi-Squared Test, which would return a Chi-Squared statistic and a p-value

Fisher’s Exact Test: Similar to the Chi-Squared Test, you’d enter your data in a contingency table. If your sample sizes are small, you’d select Fisher’s Exact Test. The test would return a p-value.

McNemar’s Test: For paired categorical data, you’d again enter your data in a contingency table, reflecting the before and after (or paired) responses. Then you’d select to perform McNemar’s Test, which would provide a Chi-Squared statistic and a p-value

Interpreting the Results:

In all these tests, the p-value is used to determine the statistical significance of the observed differences

  • If the p-value is less than the chosen significance level (commonly 0.05), you would reject the null hypothesis and conclude there’s a statistically significant difference or association between your variables
  • If the p-value is greater than the significance level, you would fail to reject the null hypothesis and conclude that you don’t have enough evidence to suggest a significant difference or association

Example:

Let’s say you conducted a study on the effect of a public health campaign on smoking habits in a small community. You collected data from 25 people before and after the campaign on whether they smoke or not (Yes/No). Here, McNemar’s test would be appropriate.

If the test returns a p-value of 0.02, you would conclude that the public health campaign had a statistically significant effect on the smoking habits of the community (given a significance level of 0.05).

However, the test doesn’t tell you the direction of the effect. For that, you would need to look at the difference in the proportion of ‘Yes’ responses before and after the campaign. If the ‘Yes’ proportion decreased, you would conclude that the campaign led to a reduction in smoking.
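The smoking example can be sketched with an exact (binomial) version of McNemar's test, which is preferable to the chi-squared approximation at n = 25; the discordant counts below are hypothetical:

```python
# Sketch: exact McNemar's test for the smoking-campaign example.
# b = smoked before but not after; c = the reverse (hypothetical counts)
from scipy import stats

b, c = 9, 1
n = b + c

# Under H0 the discordant changes are equally likely in each direction,
# so min(b, c) ~ Binomial(n, 0.5); two-sided exact p-value:
p_exact = min(2 * stats.binom.cdf(min(b, c), n, 0.5), 1.0)

# Direction of the effect: b > c means more people quit than started
direction = "decrease" if b > c else "increase or no change"
```

With these counts the exact p-value falls below 0.05 and the direction is a decrease, matching the interpretation described above: the test establishes that a change occurred, and the discordant cells tell you which way.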
