Choosing a statistical test Flashcards

1
Q

Purpose of Statistical Testing

A
  • Definition: Statistical tests determine whether observed differences or relationships between variables in research data are statistically significant.
  • Goal: To evaluate hypotheses and draw conclusions based on data analysis.
2
Q

Factors to Consider When Choosing a Statistical Test

A
  1. Research Question
    o Identify whether you are testing for differences, relationships, or comparisons.
    o Example: Are you comparing means (differences) or assessing correlation (relationships)?
  2. Level of Measurement
    o Nominal: Categories without a specific order (e.g., gender, eye color).
    o Ordinal: Categories with a specific order but not equidistant (e.g., rankings).
    o Interval: Numeric scales with equal intervals but no true zero (e.g., temperature in Celsius).
    o Ratio: Numeric scales with equal intervals and a true zero (e.g., weight, height).
  3. Number of Groups
    o Determine if you are comparing one group, two groups, or more than two groups.
    o Example: Are you comparing two groups (e.g., experimental vs. control) or multiple groups?
  4. Distribution of Data
    o Assess whether the data are approximately normally distributed (use parametric tests) or not (use non-parametric tests).
    o Tools: the Shapiro-Wilk test, histograms, and Q-Q plots (a minimal check is sketched below).
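As a rough illustration of the normality check in point 4, the sketch below runs a Shapiro-Wilk test with SciPy. The `scores` array is hypothetical example data, and the 0.05 cutoff is simply the conventional threshold.

```python
# Minimal sketch of a normality check before choosing a test.
# The `scores` array is hypothetical example data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores = rng.normal(loc=75, scale=10, size=40)  # hypothetical exam scores

# Shapiro-Wilk: the null hypothesis is that the sample comes from a normal distribution
w_stat, p_value = stats.shapiro(scores)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")

if p_value > 0.05:
    print("No evidence against normality -> parametric tests are reasonable")
else:
    print("Normality is doubtful -> consider a non-parametric alternative")
```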
3
Q

Common Statistical Tests

A
  1. Parametric Tests (assume a normal distribution)
    o Independent Samples t-test: Compares the means of two independent groups.
      ▪ Example: Comparing test scores of males and females.
    o Paired Samples t-test: Compares the means of the same group at different times.
      ▪ Example: Pre-test and post-test scores of a group.
    o ANOVA (Analysis of Variance): Compares the means of three or more groups.
      ▪ Example: Comparing the effectiveness of three different teaching methods.
  2. Non-Parametric Tests (do not assume a normal distribution)
    o Mann-Whitney U test: Compares differences between two independent groups.
      ▪ Example: Comparing ranks of two different treatments.
    o Wilcoxon Signed-Rank test: Compares two related samples.
      ▪ Example: Assessing changes in scores before and after an intervention.
    o Kruskal-Wallis test: Compares three or more independent groups.
      ▪ Example: Evaluating satisfaction ratings from multiple locations.
  3. Correlation Tests
    o Pearson’s correlation: Assesses the strength and direction of the relationship between two continuous variables.
      ▪ Example: Relationship between study hours and exam scores.
    o Spearman’s rank correlation: Assesses the strength and direction of the relationship between two ranked variables.
      ▪ Example: Relationship between ranked preferences for different products.
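All of the tests named on this card have counterparts in `scipy.stats`. The sketch below is a minimal illustration using hypothetical data; it leaves every SciPy option (variance assumptions, tie handling, alternatives) at its default.

```python
# Sketch of the common tests above via SciPy; all sample arrays are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(85, 10, 30)                  # e.g. test scores, group A
group_b = rng.normal(90, 12, 30)                  # e.g. test scores, group B
group_c = rng.normal(88, 11, 30)                  # third group for ANOVA / Kruskal-Wallis
pre, post = group_a, group_a + rng.normal(3, 2, 30)  # paired before/after scores

# Parametric tests (assume roughly normal data)
print(stats.ttest_ind(group_a, group_b))          # independent samples t-test
print(stats.ttest_rel(pre, post))                 # paired samples t-test
print(stats.f_oneway(group_a, group_b, group_c))  # one-way ANOVA

# Non-parametric alternatives (no normality assumption)
print(stats.mannwhitneyu(group_a, group_b))       # Mann-Whitney U
print(stats.wilcoxon(pre, post))                  # Wilcoxon signed-rank
print(stats.kruskal(group_a, group_b, group_c))   # Kruskal-Wallis

# Correlation
hours = rng.uniform(0, 20, 30)                    # hypothetical study hours
exam = 60 + 1.5 * hours + rng.normal(0, 5, 30)    # hypothetical exam scores
print(stats.pearsonr(hours, exam))                # Pearson's r (continuous data)
print(stats.spearmanr(hours, exam))               # Spearman's rho (rank-based)
```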
4
Q

Reporting Statistical Results

A
  • APA Format: Report test statistics, degrees of freedom, p-values, and effect sizes (see the sketch below).
    o Example: “An independent samples t-test was conducted to compare the test scores of males (M = 85, SD = 10) and females (M = 90, SD = 12). The results showed a significant difference, t(38) = -2.45, p = .02.”
  • Interpretation: Clearly interpret what the results mean in the context of the research question.
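A minimal sketch of producing the numbers for an APA-style sentence from SciPy output. The samples are hypothetical and the equal-variance assumption is left at SciPy's default, so the figures will not match the card's example.

```python
# Sketch: format an independent samples t-test result in APA style.
# The score arrays are hypothetical illustration data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
males = rng.normal(85, 10, 20)
females = rng.normal(90, 12, 20)

res = stats.ttest_ind(males, females)
df = len(males) + len(females) - 2  # degrees of freedom for the equal-variance t-test

print(
    f"males: M = {males.mean():.1f}, SD = {males.std(ddof=1):.1f}; "
    f"females: M = {females.mean():.1f}, SD = {females.std(ddof=1):.1f}; "
    f"t({df}) = {res.statistic:.2f}, p = {res.pvalue:.3f}"
)
```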
5
Q

Limitations of Statistical Tests

A
  • Assumptions: Many statistical tests come with assumptions (e.g., normality, homogeneity of variance) that, if violated, can lead to inaccurate results.
  • Over-Reliance on p-values: Relying on p-values alone can lead to misinterpretation; consider effect sizes and confidence intervals for a more nuanced picture (see the sketch below).
  • Data Quality: The reliability of the test results is contingent on the quality and appropriateness of the data collected.
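To go beyond the p-value, an effect size and a confidence interval can be reported alongside it. A minimal sketch, assuming two independent samples and the standard pooled-variance formulas for Cohen's d and a t-based interval; the data are hypothetical.

```python
# Sketch: Cohen's d and a 95% CI for a mean difference (hypothetical samples).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(85, 10, 30)
b = rng.normal(90, 12, 30)

diff = a.mean() - b.mean()
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                    / (len(a) + len(b) - 2))
cohens_d = diff / pooled_sd

# 95% CI for the mean difference, using the pooled-variance t interval
se = pooled_sd * np.sqrt(1 / len(a) + 1 / len(b))
df = len(a) + len(b) - 2
t_crit = stats.t.ppf(0.975, df)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

print(f"Cohen's d = {cohens_d:.2f}, 95% CI for the difference = "
      f"[{ci_low:.2f}, {ci_high:.2f}]")
```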